Feb 16 22:45:58 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 16 22:45:58 crc restorecon[4686]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 16 22:45:58 crc restorecon[4686]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 16 22:45:58 crc restorecon[4686]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc 
restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 22:45:58 crc restorecon[4686]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 22:45:58 crc restorecon[4686]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 22:45:58 crc restorecon[4686]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 22:45:58 crc 
restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 
22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 16 22:45:58 crc restorecon[4686]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 22:45:58 crc 
restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 22:45:58 crc restorecon[4686]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 22:45:58 crc restorecon[4686]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 16 22:45:58 crc restorecon[4686]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 16 22:45:58 crc 
restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 16 22:45:58 crc restorecon[4686]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:58
crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 16 22:45:58 crc restorecon[4686]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 22:45:58 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 22:45:59 crc restorecon[4686]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 22:45:59 crc restorecon[4686]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 22:45:59 crc restorecon[4686]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 
22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 22:45:59 crc restorecon[4686]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 22:45:59 crc restorecon[4686]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 22:45:59 crc 
restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc 
restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc 
restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 22:45:59 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 22:45:59 crc restorecon[4686]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 22:45:59 crc restorecon[4686]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 22:45:59 crc restorecon[4686]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 16 22:45:59 crc restorecon[4686]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc 
restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc 
restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 22:45:59 crc restorecon[4686]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 22:45:59 crc 
restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 22:45:59 crc restorecon[4686]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 16 22:45:59 crc restorecon[4686]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 16 22:45:59 crc restorecon[4686]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0
Feb 16 22:46:00 crc kubenswrapper[4865]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 16 22:46:00 crc kubenswrapper[4865]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Feb 16 22:46:00 crc kubenswrapper[4865]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 16 22:46:00 crc kubenswrapper[4865]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 16 22:46:00 crc kubenswrapper[4865]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 16 22:46:00 crc kubenswrapper[4865]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.151952 4865 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.161506 4865 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.161544 4865 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.161557 4865 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.161567 4865 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.161580 4865 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.161591 4865 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.161601 4865 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.161612 4865 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.161621 4865 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.161633 4865 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.161643 4865 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.161653 4865 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.161662 4865 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.161672 4865 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.161683 4865 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.161694 4865 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.161704 4865 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.161713 4865 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.161723 4865 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.161733 4865 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.161744 4865 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.161754 4865 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.161764 4865 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.161775 4865 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.161785 4865 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.161795 4865 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.161806 4865 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.161816 4865 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.161826 4865 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.161842 4865 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.161872 4865 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.161883 4865 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.161895 4865 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.161904 4865 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.161918 4865 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.161933 4865 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.161946 4865 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.161956 4865 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.161967 4865 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.161978 4865 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.161988 4865 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.162000 4865 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.162009 4865 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.162019 4865 feature_gate.go:330] unrecognized feature gate: Example
Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.162028 4865 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.162038 4865 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.162048 4865 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.162058 4865 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.162067 4865 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.162077 4865 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.162088 4865 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.162103 4865 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.162118 4865 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.162131 4865 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.162142 4865 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.162152 4865 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.162164 4865 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.162175 4865 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.162185 4865 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.162194 4865 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.162210 4865 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.162224 4865 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.162234 4865 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.162247 4865 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.162258 4865 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.162268 4865 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.162310 4865 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.162323 4865 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.162334 4865 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.162344 4865 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.162355 4865 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.164201 4865 flags.go:64] FLAG: --address="0.0.0.0" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.164238 4865 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.164264 4865 flags.go:64] FLAG: --anonymous-auth="true" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.164311 4865 flags.go:64] FLAG: --application-metrics-count-limit="100" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.164328 4865 flags.go:64] FLAG: --authentication-token-webhook="false" Feb 16 22:46:00 crc kubenswrapper[4865]: 
I0216 22:46:00.164341 4865 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.164357 4865 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.164373 4865 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.164385 4865 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.164397 4865 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.164410 4865 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.164426 4865 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.164439 4865 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.164451 4865 flags.go:64] FLAG: --cgroup-root="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.164463 4865 flags.go:64] FLAG: --cgroups-per-qos="true" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.164476 4865 flags.go:64] FLAG: --client-ca-file="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.164488 4865 flags.go:64] FLAG: --cloud-config="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.164499 4865 flags.go:64] FLAG: --cloud-provider="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.164511 4865 flags.go:64] FLAG: --cluster-dns="[]" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.164526 4865 flags.go:64] FLAG: --cluster-domain="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.164538 4865 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.164551 4865 flags.go:64] FLAG: --config-dir="" Feb 16 22:46:00 
crc kubenswrapper[4865]: I0216 22:46:00.164563 4865 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.164576 4865 flags.go:64] FLAG: --container-log-max-files="5" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.164591 4865 flags.go:64] FLAG: --container-log-max-size="10Mi" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.164605 4865 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.164617 4865 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.164630 4865 flags.go:64] FLAG: --containerd-namespace="k8s.io" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.164642 4865 flags.go:64] FLAG: --contention-profiling="false" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.164654 4865 flags.go:64] FLAG: --cpu-cfs-quota="true" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.164665 4865 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.164680 4865 flags.go:64] FLAG: --cpu-manager-policy="none" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.164693 4865 flags.go:64] FLAG: --cpu-manager-policy-options="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.164708 4865 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.164720 4865 flags.go:64] FLAG: --enable-controller-attach-detach="true" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.164732 4865 flags.go:64] FLAG: --enable-debugging-handlers="true" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.164744 4865 flags.go:64] FLAG: --enable-load-reader="false" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.164757 4865 flags.go:64] FLAG: --enable-server="true" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.164769 
4865 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.164788 4865 flags.go:64] FLAG: --event-burst="100" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.164800 4865 flags.go:64] FLAG: --event-qps="50" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.164812 4865 flags.go:64] FLAG: --event-storage-age-limit="default=0" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.164825 4865 flags.go:64] FLAG: --event-storage-event-limit="default=0" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.164837 4865 flags.go:64] FLAG: --eviction-hard="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.164851 4865 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.164863 4865 flags.go:64] FLAG: --eviction-minimum-reclaim="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.164875 4865 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.164889 4865 flags.go:64] FLAG: --eviction-soft="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.164902 4865 flags.go:64] FLAG: --eviction-soft-grace-period="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.164917 4865 flags.go:64] FLAG: --exit-on-lock-contention="false" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.164930 4865 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.164942 4865 flags.go:64] FLAG: --experimental-mounter-path="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.164953 4865 flags.go:64] FLAG: --fail-cgroupv1="false" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.164965 4865 flags.go:64] FLAG: --fail-swap-on="true" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.164977 4865 flags.go:64] FLAG: --feature-gates="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.165008 4865 
flags.go:64] FLAG: --file-check-frequency="20s" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.165020 4865 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.165034 4865 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.165047 4865 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.165087 4865 flags.go:64] FLAG: --healthz-port="10248" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.165100 4865 flags.go:64] FLAG: --help="false" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.165112 4865 flags.go:64] FLAG: --hostname-override="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.165124 4865 flags.go:64] FLAG: --housekeeping-interval="10s" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.165138 4865 flags.go:64] FLAG: --http-check-frequency="20s" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.165151 4865 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.165162 4865 flags.go:64] FLAG: --image-credential-provider-config="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.165174 4865 flags.go:64] FLAG: --image-gc-high-threshold="85" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.165186 4865 flags.go:64] FLAG: --image-gc-low-threshold="80" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.165198 4865 flags.go:64] FLAG: --image-service-endpoint="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.165210 4865 flags.go:64] FLAG: --kernel-memcg-notification="false" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.165222 4865 flags.go:64] FLAG: --kube-api-burst="100" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.165234 4865 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Feb 16 22:46:00 crc kubenswrapper[4865]: 
I0216 22:46:00.165247 4865 flags.go:64] FLAG: --kube-api-qps="50" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.165258 4865 flags.go:64] FLAG: --kube-reserved="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.165271 4865 flags.go:64] FLAG: --kube-reserved-cgroup="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.165314 4865 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.165329 4865 flags.go:64] FLAG: --kubelet-cgroups="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.165341 4865 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.165353 4865 flags.go:64] FLAG: --lock-file="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.165364 4865 flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.165378 4865 flags.go:64] FLAG: --log-flush-frequency="5s" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.165390 4865 flags.go:64] FLAG: --log-json-info-buffer-size="0" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.165410 4865 flags.go:64] FLAG: --log-json-split-stream="false" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.165423 4865 flags.go:64] FLAG: --log-text-info-buffer-size="0" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.165436 4865 flags.go:64] FLAG: --log-text-split-stream="false" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.165449 4865 flags.go:64] FLAG: --logging-format="text" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.165460 4865 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.165473 4865 flags.go:64] FLAG: --make-iptables-util-chains="true" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.165485 4865 flags.go:64] FLAG: --manifest-url="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 
22:46:00.165498 4865 flags.go:64] FLAG: --manifest-url-header="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.165514 4865 flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.165527 4865 flags.go:64] FLAG: --max-open-files="1000000" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.165542 4865 flags.go:64] FLAG: --max-pods="110" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.165554 4865 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.165566 4865 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.165578 4865 flags.go:64] FLAG: --memory-manager-policy="None" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.165590 4865 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.165602 4865 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.165614 4865 flags.go:64] FLAG: --node-ip="192.168.126.11" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.165627 4865 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.165654 4865 flags.go:64] FLAG: --node-status-max-images="50" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.165668 4865 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.165680 4865 flags.go:64] FLAG: --oom-score-adj="-999" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.165692 4865 flags.go:64] FLAG: --pod-cidr="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.165703 4865 flags.go:64] FLAG: 
--pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.165722 4865 flags.go:64] FLAG: --pod-manifest-path="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.165733 4865 flags.go:64] FLAG: --pod-max-pids="-1" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.165746 4865 flags.go:64] FLAG: --pods-per-core="0" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.165757 4865 flags.go:64] FLAG: --port="10250" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.165769 4865 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.165781 4865 flags.go:64] FLAG: --provider-id="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.165793 4865 flags.go:64] FLAG: --qos-reserved="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.165805 4865 flags.go:64] FLAG: --read-only-port="10255" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.165816 4865 flags.go:64] FLAG: --register-node="true" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.165828 4865 flags.go:64] FLAG: --register-schedulable="true" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.165840 4865 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.165861 4865 flags.go:64] FLAG: --registry-burst="10" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.165873 4865 flags.go:64] FLAG: --registry-qps="5" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.165885 4865 flags.go:64] FLAG: --reserved-cpus="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.165898 4865 flags.go:64] FLAG: --reserved-memory="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.165913 4865 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 
22:46:00.165926 4865 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.165938 4865 flags.go:64] FLAG: --rotate-certificates="false" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.165950 4865 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.165961 4865 flags.go:64] FLAG: --runonce="false" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.165973 4865 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.165986 4865 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.165998 4865 flags.go:64] FLAG: --seccomp-default="false" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.166011 4865 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.166023 4865 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.166035 4865 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.166047 4865 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.166058 4865 flags.go:64] FLAG: --storage-driver-password="root" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.166070 4865 flags.go:64] FLAG: --storage-driver-secure="false" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.166081 4865 flags.go:64] FLAG: --storage-driver-table="stats" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.166093 4865 flags.go:64] FLAG: --storage-driver-user="root" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.166105 4865 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.166118 4865 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 16 
22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.166129 4865 flags.go:64] FLAG: --system-cgroups="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.166141 4865 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.166161 4865 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.166172 4865 flags.go:64] FLAG: --tls-cert-file="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.166185 4865 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.166199 4865 flags.go:64] FLAG: --tls-min-version="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.166210 4865 flags.go:64] FLAG: --tls-private-key-file="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.166222 4865 flags.go:64] FLAG: --topology-manager-policy="none" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.166234 4865 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.166248 4865 flags.go:64] FLAG: --topology-manager-scope="container" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.166260 4865 flags.go:64] FLAG: --v="2" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.166309 4865 flags.go:64] FLAG: --version="false" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.166338 4865 flags.go:64] FLAG: --vmodule="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.166352 4865 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.166365 4865 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.166651 4865 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.166668 4865 feature_gate.go:330] unrecognized feature gate: 
AWSEFSDriverVolumeMetrics Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.166681 4865 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.166693 4865 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.166705 4865 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.166718 4865 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.166729 4865 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.166742 4865 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.166753 4865 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.166764 4865 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.166776 4865 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.166786 4865 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.166797 4865 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.166808 4865 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.166818 4865 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.166828 4865 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.166838 
4865 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.166848 4865 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.166859 4865 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.166869 4865 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.166883 4865 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.166897 4865 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.166909 4865 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.166919 4865 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.166930 4865 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.166944 4865 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.166957 4865 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.166968 4865 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.166978 4865 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.166988 4865 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.166998 4865 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.167008 4865 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.167018 4865 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.167028 4865 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.167038 4865 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.167048 4865 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.167059 4865 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.167070 4865 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.167084 4865 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.167098 4865 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. 
It will be removed in a future release. Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.167110 4865 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.167123 4865 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.167136 4865 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.167150 4865 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.167161 4865 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.167172 4865 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.167183 4865 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.167193 4865 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.167203 4865 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.167213 4865 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.167223 4865 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.167233 4865 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.167244 4865 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.167254 4865 feature_gate.go:330] unrecognized feature gate: 
VSphereControlPlaneMachineSet Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.167265 4865 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.167275 4865 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.167331 4865 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.167342 4865 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.167352 4865 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.167363 4865 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.167374 4865 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.167384 4865 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.167394 4865 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.167405 4865 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.167415 4865 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.167425 4865 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.167435 4865 feature_gate.go:330] unrecognized feature gate: Example Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.167447 4865 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.167457 4865 
feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.167467 4865 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.167477 4865 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.167510 4865 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.182378 4865 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.182453 4865 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.182648 4865 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.182673 4865 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.182684 4865 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.182697 4865 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.182708 4865 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.182717 4865 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 16 22:46:00 crc 
kubenswrapper[4865]: W0216 22:46:00.182727 4865 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.182736 4865 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.182746 4865 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.182756 4865 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.182765 4865 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.182774 4865 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.182782 4865 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.182791 4865 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.182799 4865 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.182808 4865 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.182816 4865 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.182825 4865 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.182833 4865 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.182842 4865 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.182851 4865 feature_gate.go:330] unrecognized 
feature gate: NetworkLiveMigration Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.182860 4865 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.182870 4865 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.182879 4865 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.182892 4865 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.182906 4865 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.182916 4865 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.182925 4865 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.182934 4865 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.182945 4865 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.182953 4865 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.182963 4865 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.182971 4865 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.182980 4865 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.183003 4865 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 16 22:46:00 crc 
kubenswrapper[4865]: W0216 22:46:00.183012 4865 feature_gate.go:330] unrecognized feature gate: Example Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.183020 4865 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.183028 4865 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.183037 4865 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.183045 4865 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.183053 4865 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.183061 4865 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.183070 4865 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.183078 4865 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.183087 4865 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.183096 4865 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.183104 4865 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.183113 4865 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.183121 4865 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 
22:46:00.183129 4865 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.183138 4865 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.183147 4865 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.183159 4865 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.183169 4865 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.183177 4865 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.183187 4865 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.183195 4865 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.183205 4865 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.183216 4865 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.183237 4865 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.183255 4865 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.183266 4865 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.183314 4865 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.183326 4865 
feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.183337 4865 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.183345 4865 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.183358 4865 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.183368 4865 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.183378 4865 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.183386 4865 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.183418 4865 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.183434 4865 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.183798 4865 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.183824 4865 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.183840 4865 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.183854 4865 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.183866 4865 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.183877 4865 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.183889 4865 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.183900 4865 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.183915 4865 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.183949 4865 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.183961 4865 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.183972 4865 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.183984 4865 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.183997 4865 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.184008 4865 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.184019 4865 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.184033 4865 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.184045 4865 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.184056 4865 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.184067 4865 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.184077 4865 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.184088 4865 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.184098 4865 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.184108 4865 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.184120 4865 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.184131 4865 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.184141 4865 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.184151 4865 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.184161 4865 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.184172 4865 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.184183 4865 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 16 22:46:00 crc kubenswrapper[4865]: 
W0216 22:46:00.184194 4865 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.184204 4865 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.184215 4865 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.184245 4865 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.184257 4865 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.184272 4865 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.184322 4865 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.184332 4865 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.184340 4865 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.184349 4865 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.184358 4865 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.184367 4865 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.184378 4865 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.184392 4865 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.184402 4865 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.184412 4865 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.184420 4865 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.184429 4865 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.184437 4865 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.184446 4865 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.184455 4865 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.184463 4865 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.184471 4865 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.184480 4865 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.184488 4865 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.184497 4865 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.184505 4865 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.184513 4865 
feature_gate.go:330] unrecognized feature gate: Example Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.184521 4865 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.184530 4865 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.184538 4865 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.184546 4865 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.184555 4865 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.184563 4865 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.184571 4865 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.184580 4865 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.184588 4865 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.184597 4865 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.184605 4865 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.184627 4865 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.184642 4865 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false 
ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.185160 4865 server.go:940] "Client rotation is on, will bootstrap in background" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.191947 4865 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.192097 4865 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.194232 4865 server.go:997] "Starting client certificate rotation" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.194306 4865 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.195682 4865 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-22 16:43:44.652554508 +0000 UTC Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.195881 4865 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.221264 4865 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 16 22:46:00 crc kubenswrapper[4865]: E0216 22:46:00.225702 4865 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
\"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.53:6443: connect: connection refused" logger="UnhandledError" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.226445 4865 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.247334 4865 log.go:25] "Validated CRI v1 runtime API" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.287763 4865 log.go:25] "Validated CRI v1 image API" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.289984 4865 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.295699 4865 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-16-22-41-54-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.295746 4865 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.325927 4865 manager.go:217] Machine: {Timestamp:2026-02-16 22:46:00.321880415 +0000 UTC m=+0.645587466 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} 
HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:e2a2196d-095f-444c-b467-b5377cee59c0 BootID:8882e2e1-623c-454f-a6ef-195b25a9cb95 Filesystems:[{Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:03:f0:70 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:03:f0:70 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:20:b2:5a Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:44:98:53 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:80:81:54 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:47:5e:5f Speed:-1 Mtu:1496} {Name:eth10 MacAddress:fa:39:0c:c5:42:35 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:5e:3c:db:a0:7f:85 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 
BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} 
{Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.326498 4865 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.326888 4865 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.327416 4865 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.327589 4865 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.327625 4865 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.327798 4865 topology_manager.go:138] "Creating topology manager with none policy" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.327807 4865 container_manager_linux.go:303] "Creating device plugin manager" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.328369 4865 manager.go:142] "Creating Device Plugin manager" 
path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.328387 4865 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.328639 4865 state_mem.go:36] "Initialized new in-memory state store" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.329170 4865 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.334176 4865 kubelet.go:418] "Attempting to sync node with API server" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.334199 4865 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.334252 4865 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.334266 4865 kubelet.go:324] "Adding apiserver pod source" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.334300 4865 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.338386 4865 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.339709 4865 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.340516 4865 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.53:6443: connect: connection refused Feb 16 22:46:00 crc kubenswrapper[4865]: E0216 22:46:00.340585 4865 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.53:6443: connect: connection refused" logger="UnhandledError" Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.340577 4865 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.53:6443: connect: connection refused Feb 16 22:46:00 crc kubenswrapper[4865]: E0216 22:46:00.340681 4865 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.53:6443: connect: connection refused" logger="UnhandledError" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.343756 4865 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.345474 4865 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.345519 4865 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.345534 
4865 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.345551 4865 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.345580 4865 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.345597 4865 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.345616 4865 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.345643 4865 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.345663 4865 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.345681 4865 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.345704 4865 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.345722 4865 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.347346 4865 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.348210 4865 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.53:6443: connect: connection refused Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.351483 4865 server.go:1280] "Started kubelet" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.352441 4865 ratelimit.go:55] "Setting 
rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.352735 4865 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.353436 4865 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 16 22:46:00 crc systemd[1]: Started Kubernetes Kubelet. Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.355905 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.355964 4865 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.356014 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 16:06:36.944415411 +0000 UTC Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.356409 4865 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.356437 4865 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 16 22:46:00 crc kubenswrapper[4865]: E0216 22:46:00.356444 4865 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.356526 4865 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 16 22:46:00 crc kubenswrapper[4865]: E0216 22:46:00.358219 4865 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.53:6443: connect: connection refused" interval="200ms" Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.358246 4865 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.53:6443: connect: connection refused Feb 16 22:46:00 crc kubenswrapper[4865]: E0216 22:46:00.358512 4865 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.53:6443: connect: connection refused" logger="UnhandledError" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.358628 4865 factory.go:55] Registering systemd factory Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.358657 4865 factory.go:221] Registration of the systemd container factory successfully Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.359126 4865 factory.go:153] Registering CRI-O factory Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.359168 4865 factory.go:221] Registration of the crio container factory successfully Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.359324 4865 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.359397 4865 factory.go:103] Registering Raw factory Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.359440 4865 manager.go:1196] Started watching for new ooms in manager Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.361710 4865 manager.go:319] Starting recovery of all containers Feb 16 22:46:00 crc kubenswrapper[4865]: E0216 22:46:00.360534 4865 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.53:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1894db8eebcd987f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 22:46:00.351406207 +0000 UTC m=+0.675113208,LastTimestamp:2026-02-16 22:46:00.351406207 +0000 UTC m=+0.675113208,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.362323 4865 server.go:460] "Adding debug handlers to kubelet server" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.381406 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.381781 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.381831 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.381867 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.381899 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.381940 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.381963 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.381984 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.382018 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.382044 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.382075 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.382100 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.382138 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.382167 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.382199 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.388473 4865 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" 
deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.388577 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.388603 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.388620 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.388635 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.388650 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.388663 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" 
volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.388678 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.388692 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.388706 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.388720 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.388733 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.388759 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" 
seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.388779 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.388798 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.388815 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.388830 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.388843 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.388858 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 
22:46:00.388874 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.388887 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.388900 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.388916 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.388931 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.388944 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.388994 4865 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.389015 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.389029 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.389043 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.389061 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.389073 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.389088 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.389101 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.389115 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.389140 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.389155 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.389168 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.389184 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.389246 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.389273 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.389308 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.389322 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.389344 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.389357 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" 
seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.389373 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.389388 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.389409 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.389423 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.389441 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.389454 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 
22:46:00.389467 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.389479 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.389492 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.389509 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.389521 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.389534 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.389547 4865 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.389564 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.389582 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.389601 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.389619 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.389630 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.389644 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.389657 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.389673 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.389686 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.389699 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.389711 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.389724 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" 
volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.389736 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.389748 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.389764 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.389782 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.389795 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.389807 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" 
volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.389818 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.389834 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.389847 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.389860 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.389872 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.389893 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.389905 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.389929 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.389947 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.389968 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.389987 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.389999 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" 
seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.390019 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.390032 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.390071 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.390100 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.390115 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.390132 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.390150 4865 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.390169 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.390183 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.390197 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.390212 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.390227 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.390242 4865 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.390256 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.390269 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.390301 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.390317 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.390363 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.390382 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.390396 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.390410 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.390424 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.390438 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.390452 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.390465 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" 
volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.390480 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.390494 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.390509 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.390522 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.390535 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.390549 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" 
volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.390564 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.390580 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.390599 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.390616 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.390630 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.390644 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.390659 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.390673 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.390687 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.390700 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.390715 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.390733 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.390762 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.390788 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.390818 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.390851 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.390886 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.390913 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.390945 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.390965 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.390985 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.391007 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.391028 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.391048 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.391067 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.391091 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.391115 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.391136 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.391160 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.391180 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" 
seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.391204 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.391230 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.391255 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.391316 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.391339 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.391358 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 16 22:46:00 crc 
kubenswrapper[4865]: I0216 22:46:00.391378 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.391398 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.391456 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.391478 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.391497 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.391526 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.391547 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.391575 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.391599 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.391619 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.391638 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.391656 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.391685 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" 
volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.391710 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.391731 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.391751 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.391771 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.391792 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.391813 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" 
volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.391834 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.391853 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.391872 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.391893 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.391913 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.391935 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.391956 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.391975 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.391995 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.392013 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.392034 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.392055 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" 
volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.392074 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.392094 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.392113 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.392139 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.392158 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.392177 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.392196 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.392216 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.392236 4865 reconstruct.go:97] "Volume reconstruction finished" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.392253 4865 reconciler.go:26] "Reconciler: start to sync state" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.401221 4865 manager.go:324] Recovery completed Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.409586 4865 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.411573 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.413083 4865 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.413126 4865 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.413152 4865 kubelet.go:2335] "Starting kubelet main sync loop" Feb 16 22:46:00 crc kubenswrapper[4865]: E0216 22:46:00.413203 4865 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 16 22:46:00 crc kubenswrapper[4865]: W0216 22:46:00.415714 4865 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.53:6443: connect: connection refused Feb 16 22:46:00 crc kubenswrapper[4865]: E0216 22:46:00.415825 4865 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.53:6443: connect: connection refused" logger="UnhandledError" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.417539 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.417592 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.417614 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.421456 4865 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.421626 4865 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" 
Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.421825 4865 state_mem.go:36] "Initialized new in-memory state store" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.439197 4865 policy_none.go:49] "None policy: Start" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.440334 4865 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.440384 4865 state_mem.go:35] "Initializing new in-memory state store" Feb 16 22:46:00 crc kubenswrapper[4865]: E0216 22:46:00.457522 4865 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 22:46:00 crc kubenswrapper[4865]: E0216 22:46:00.513358 4865 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.515690 4865 manager.go:334] "Starting Device Plugin manager" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.515762 4865 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.515785 4865 server.go:79] "Starting device plugin registration server" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.516505 4865 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.516556 4865 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.516859 4865 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.517193 4865 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.517390 4865 plugin_manager.go:118] "Starting Kubelet 
Plugin Manager" Feb 16 22:46:00 crc kubenswrapper[4865]: E0216 22:46:00.527921 4865 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 16 22:46:00 crc kubenswrapper[4865]: E0216 22:46:00.560021 4865 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.53:6443: connect: connection refused" interval="400ms" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.617029 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.618974 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.619037 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.619055 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.619096 4865 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 22:46:00 crc kubenswrapper[4865]: E0216 22:46:00.619717 4865 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.53:6443: connect: connection refused" node="crc" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.714367 4865 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] 
Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.714531 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.716066 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.716135 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.716153 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.716404 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.716906 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.716992 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.717871 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.717959 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.718059 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.718390 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.718538 4865 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.718575 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.718622 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.718640 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.718581 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.719729 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.719806 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.719875 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.720812 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.720886 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.720908 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.721184 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 
22:46:00.721758 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.721883 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.722806 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.722852 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.722864 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.723029 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.723364 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.723425 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.724080 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.724147 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.724204 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.724538 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.724577 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.724596 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.724839 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.724876 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.725094 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.730987 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.731084 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.732938 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.733004 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.733032 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.799584 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.799699 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.799763 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.799809 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.799853 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.799997 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.800113 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 22:46:00 crc 
kubenswrapper[4865]: I0216 22:46:00.800249 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.800372 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.800409 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.800441 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.800471 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.800502 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.800530 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.800560 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.820922 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.822685 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.822846 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.822962 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.823094 4865 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 22:46:00 crc kubenswrapper[4865]: E0216 22:46:00.823767 4865 
kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.53:6443: connect: connection refused" node="crc" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.902402 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.902619 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.902817 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.902962 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.903191 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.903424 4865 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.903664 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.903874 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.904031 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.904187 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.904368 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.904532 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.904760 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.904928 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.903878 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.904274 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" 
Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.903492 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.904460 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.903946 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.904620 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.903446 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.904840 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.904108 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.904993 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.905098 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.905343 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.905375 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.905446 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.905568 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 22:46:00 crc kubenswrapper[4865]: I0216 22:46:00.905770 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 22:46:00 crc kubenswrapper[4865]: E0216 22:46:00.960944 4865 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.53:6443: connect: connection refused" interval="800ms" Feb 16 22:46:01 crc kubenswrapper[4865]: I0216 22:46:01.062142 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 22:46:01 crc kubenswrapper[4865]: I0216 22:46:01.072299 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 16 22:46:01 crc kubenswrapper[4865]: I0216 22:46:01.095564 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 22:46:01 crc kubenswrapper[4865]: W0216 22:46:01.118976 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-46a5e2e34efe4bed1e8eef03d8ff90494a1547628027e1065441b319a20e9a94 WatchSource:0}: Error finding container 46a5e2e34efe4bed1e8eef03d8ff90494a1547628027e1065441b319a20e9a94: Status 404 returned error can't find the container with id 46a5e2e34efe4bed1e8eef03d8ff90494a1547628027e1065441b319a20e9a94 Feb 16 22:46:01 crc kubenswrapper[4865]: I0216 22:46:01.119929 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 22:46:01 crc kubenswrapper[4865]: W0216 22:46:01.120385 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-e11205b542c2cefe4e8db2564dbebd5d8fbb402e92ce51b90ce9b67bbcd6573a WatchSource:0}: Error finding container e11205b542c2cefe4e8db2564dbebd5d8fbb402e92ce51b90ce9b67bbcd6573a: Status 404 returned error can't find the container with id e11205b542c2cefe4e8db2564dbebd5d8fbb402e92ce51b90ce9b67bbcd6573a Feb 16 22:46:01 crc kubenswrapper[4865]: W0216 22:46:01.130952 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-b0df0a984955c30ea96960a6dce3f719b940f61f4717ee726a62799bc1f9be0e WatchSource:0}: Error finding container b0df0a984955c30ea96960a6dce3f719b940f61f4717ee726a62799bc1f9be0e: Status 404 returned error can't find the container with id b0df0a984955c30ea96960a6dce3f719b940f61f4717ee726a62799bc1f9be0e Feb 16 22:46:01 crc kubenswrapper[4865]: I0216 22:46:01.132262 4865 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 22:46:01 crc kubenswrapper[4865]: W0216 22:46:01.149834 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-f6b88bf380aef433979bb7d89b512da92db9b8d70f67f66ab4643b89c62d9c1c WatchSource:0}: Error finding container f6b88bf380aef433979bb7d89b512da92db9b8d70f67f66ab4643b89c62d9c1c: Status 404 returned error can't find the container with id f6b88bf380aef433979bb7d89b512da92db9b8d70f67f66ab4643b89c62d9c1c Feb 16 22:46:01 crc kubenswrapper[4865]: W0216 22:46:01.167532 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-34a87179d7b7881922b9b022f450116ecca0caa2f3e407e666c8c39158a0f7eb WatchSource:0}: Error finding container 34a87179d7b7881922b9b022f450116ecca0caa2f3e407e666c8c39158a0f7eb: Status 404 returned error can't find the container with id 34a87179d7b7881922b9b022f450116ecca0caa2f3e407e666c8c39158a0f7eb Feb 16 22:46:01 crc kubenswrapper[4865]: I0216 22:46:01.224439 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 22:46:01 crc kubenswrapper[4865]: I0216 22:46:01.226914 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:01 crc kubenswrapper[4865]: I0216 22:46:01.226984 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:01 crc kubenswrapper[4865]: I0216 22:46:01.227003 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:01 crc kubenswrapper[4865]: I0216 22:46:01.227047 4865 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 22:46:01 crc 
kubenswrapper[4865]: E0216 22:46:01.227772 4865 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.53:6443: connect: connection refused" node="crc" Feb 16 22:46:01 crc kubenswrapper[4865]: I0216 22:46:01.349332 4865 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.53:6443: connect: connection refused Feb 16 22:46:01 crc kubenswrapper[4865]: I0216 22:46:01.356226 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 15:21:15.764370314 +0000 UTC Feb 16 22:46:01 crc kubenswrapper[4865]: I0216 22:46:01.425085 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e11205b542c2cefe4e8db2564dbebd5d8fbb402e92ce51b90ce9b67bbcd6573a"} Feb 16 22:46:01 crc kubenswrapper[4865]: I0216 22:46:01.426660 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"46a5e2e34efe4bed1e8eef03d8ff90494a1547628027e1065441b319a20e9a94"} Feb 16 22:46:01 crc kubenswrapper[4865]: I0216 22:46:01.428419 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"34a87179d7b7881922b9b022f450116ecca0caa2f3e407e666c8c39158a0f7eb"} Feb 16 22:46:01 crc kubenswrapper[4865]: I0216 22:46:01.430338 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f6b88bf380aef433979bb7d89b512da92db9b8d70f67f66ab4643b89c62d9c1c"} Feb 16 22:46:01 crc kubenswrapper[4865]: I0216 22:46:01.431896 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b0df0a984955c30ea96960a6dce3f719b940f61f4717ee726a62799bc1f9be0e"} Feb 16 22:46:01 crc kubenswrapper[4865]: W0216 22:46:01.552225 4865 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.53:6443: connect: connection refused Feb 16 22:46:01 crc kubenswrapper[4865]: E0216 22:46:01.552470 4865 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.53:6443: connect: connection refused" logger="UnhandledError" Feb 16 22:46:01 crc kubenswrapper[4865]: W0216 22:46:01.613008 4865 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.53:6443: connect: connection refused Feb 16 22:46:01 crc kubenswrapper[4865]: E0216 22:46:01.613170 4865 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.53:6443: connect: connection refused" logger="UnhandledError" Feb 16 22:46:01 crc kubenswrapper[4865]: E0216 22:46:01.762513 4865 controller.go:145] "Failed 
to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.53:6443: connect: connection refused" interval="1.6s" Feb 16 22:46:01 crc kubenswrapper[4865]: W0216 22:46:01.885379 4865 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.53:6443: connect: connection refused Feb 16 22:46:01 crc kubenswrapper[4865]: E0216 22:46:01.885507 4865 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.53:6443: connect: connection refused" logger="UnhandledError" Feb 16 22:46:01 crc kubenswrapper[4865]: W0216 22:46:01.890449 4865 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.53:6443: connect: connection refused Feb 16 22:46:01 crc kubenswrapper[4865]: E0216 22:46:01.890532 4865 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.53:6443: connect: connection refused" logger="UnhandledError" Feb 16 22:46:02 crc kubenswrapper[4865]: I0216 22:46:02.027889 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 22:46:02 crc kubenswrapper[4865]: I0216 22:46:02.030386 4865 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:02 crc kubenswrapper[4865]: I0216 22:46:02.030457 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:02 crc kubenswrapper[4865]: I0216 22:46:02.030477 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:02 crc kubenswrapper[4865]: I0216 22:46:02.030520 4865 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 22:46:02 crc kubenswrapper[4865]: E0216 22:46:02.031141 4865 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.53:6443: connect: connection refused" node="crc" Feb 16 22:46:02 crc kubenswrapper[4865]: I0216 22:46:02.349752 4865 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.53:6443: connect: connection refused Feb 16 22:46:02 crc kubenswrapper[4865]: I0216 22:46:02.357073 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 07:35:00.286209265 +0000 UTC Feb 16 22:46:02 crc kubenswrapper[4865]: I0216 22:46:02.388461 4865 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 16 22:46:02 crc kubenswrapper[4865]: E0216 22:46:02.390082 4865 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.53:6443: connect: connection refused" logger="UnhandledError" Feb 16 22:46:02 crc 
kubenswrapper[4865]: I0216 22:46:02.439058 4865 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="ce861ebc96a126990d75e7c0f5da5034e3581ba979c0cda0b0ec809303e1de34" exitCode=0 Feb 16 22:46:02 crc kubenswrapper[4865]: I0216 22:46:02.439183 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"ce861ebc96a126990d75e7c0f5da5034e3581ba979c0cda0b0ec809303e1de34"} Feb 16 22:46:02 crc kubenswrapper[4865]: I0216 22:46:02.439268 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 22:46:02 crc kubenswrapper[4865]: I0216 22:46:02.440875 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:02 crc kubenswrapper[4865]: I0216 22:46:02.440923 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:02 crc kubenswrapper[4865]: I0216 22:46:02.440940 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:02 crc kubenswrapper[4865]: I0216 22:46:02.442979 4865 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="a3488da6d63933ae020077a4fb277b53e4e85bd75cac43d6c64bb962472a832f" exitCode=0 Feb 16 22:46:02 crc kubenswrapper[4865]: I0216 22:46:02.443071 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 22:46:02 crc kubenswrapper[4865]: I0216 22:46:02.443120 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"a3488da6d63933ae020077a4fb277b53e4e85bd75cac43d6c64bb962472a832f"} Feb 16 22:46:02 crc 
kubenswrapper[4865]: I0216 22:46:02.444370 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:02 crc kubenswrapper[4865]: I0216 22:46:02.444427 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:02 crc kubenswrapper[4865]: I0216 22:46:02.444453 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:02 crc kubenswrapper[4865]: I0216 22:46:02.446620 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"53c3c8e1511bfaab96c1e33d3e1d1dc227344cb92c64170dc40eef40341f9282"} Feb 16 22:46:02 crc kubenswrapper[4865]: I0216 22:46:02.446681 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ef4fbb630ee354131544dade7633424757a1a1b885fec2e271da25984dbec3ac"} Feb 16 22:46:02 crc kubenswrapper[4865]: I0216 22:46:02.450155 4865 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e3e53d3f26f921be607e616b7c8790e3be0605a2b94a3f7be2b62e1beaefb350" exitCode=0 Feb 16 22:46:02 crc kubenswrapper[4865]: I0216 22:46:02.450269 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"e3e53d3f26f921be607e616b7c8790e3be0605a2b94a3f7be2b62e1beaefb350"} Feb 16 22:46:02 crc kubenswrapper[4865]: I0216 22:46:02.450397 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 22:46:02 crc kubenswrapper[4865]: I0216 22:46:02.452032 4865 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:02 crc kubenswrapper[4865]: I0216 22:46:02.452096 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:02 crc kubenswrapper[4865]: I0216 22:46:02.452121 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:02 crc kubenswrapper[4865]: I0216 22:46:02.456079 4865 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="6793394dcea221e7cccfb13c16150d8d84e23f7b50e4e7cefa2fe52e867aca0b" exitCode=0 Feb 16 22:46:02 crc kubenswrapper[4865]: I0216 22:46:02.456137 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"6793394dcea221e7cccfb13c16150d8d84e23f7b50e4e7cefa2fe52e867aca0b"} Feb 16 22:46:02 crc kubenswrapper[4865]: I0216 22:46:02.456310 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 22:46:02 crc kubenswrapper[4865]: I0216 22:46:02.457959 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:02 crc kubenswrapper[4865]: I0216 22:46:02.458017 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:02 crc kubenswrapper[4865]: I0216 22:46:02.458036 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:02 crc kubenswrapper[4865]: I0216 22:46:02.459036 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 22:46:02 crc kubenswrapper[4865]: I0216 22:46:02.460889 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 
22:46:02 crc kubenswrapper[4865]: I0216 22:46:02.460941 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:02 crc kubenswrapper[4865]: I0216 22:46:02.460958 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:03 crc kubenswrapper[4865]: E0216 22:46:03.185889 4865 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.53:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1894db8eebcd987f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 22:46:00.351406207 +0000 UTC m=+0.675113208,LastTimestamp:2026-02-16 22:46:00.351406207 +0000 UTC m=+0.675113208,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 22:46:03 crc kubenswrapper[4865]: I0216 22:46:03.349732 4865 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.53:6443: connect: connection refused Feb 16 22:46:03 crc kubenswrapper[4865]: I0216 22:46:03.357922 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 06:55:58.524096595 +0000 UTC Feb 16 22:46:03 crc kubenswrapper[4865]: W0216 22:46:03.362512 4865 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.53:6443: connect: connection refused Feb 16 22:46:03 crc kubenswrapper[4865]: E0216 22:46:03.362587 4865 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.53:6443: connect: connection refused" logger="UnhandledError" Feb 16 22:46:03 crc kubenswrapper[4865]: E0216 22:46:03.362819 4865 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.53:6443: connect: connection refused" interval="3.2s" Feb 16 22:46:03 crc kubenswrapper[4865]: W0216 22:46:03.386578 4865 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.53:6443: connect: connection refused Feb 16 22:46:03 crc kubenswrapper[4865]: E0216 22:46:03.386651 4865 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.53:6443: connect: connection refused" logger="UnhandledError" Feb 16 22:46:03 crc kubenswrapper[4865]: I0216 22:46:03.467535 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"083e95882332fbcae3b7e3c07a19c05645e560b7c5e92813464a5ca4e5e52a26"} Feb 16 22:46:03 crc kubenswrapper[4865]: I0216 22:46:03.467583 4865 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1d5f3b449d7f497dd16213f7a376d725ed75adaeb27449fa2005893b85cd8f9b"} Feb 16 22:46:03 crc kubenswrapper[4865]: I0216 22:46:03.467593 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c7ddcd1ac2d814341fef78f1d3dcb8d8e04d5e8506ca793c023085f0e019fc44"} Feb 16 22:46:03 crc kubenswrapper[4865]: I0216 22:46:03.467604 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9420ab4fc2cfb723f98690b12676378560f2ca512ea2f23ea4108561bcf0a83b"} Feb 16 22:46:03 crc kubenswrapper[4865]: I0216 22:46:03.468728 4865 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="98aaf5332e1705f701b10355838c5aa257308ddbddd3cce0ded8d8f9dc39f2f5" exitCode=0 Feb 16 22:46:03 crc kubenswrapper[4865]: I0216 22:46:03.468768 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"98aaf5332e1705f701b10355838c5aa257308ddbddd3cce0ded8d8f9dc39f2f5"} Feb 16 22:46:03 crc kubenswrapper[4865]: I0216 22:46:03.468871 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 22:46:03 crc kubenswrapper[4865]: I0216 22:46:03.469689 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:03 crc kubenswrapper[4865]: I0216 22:46:03.469712 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:03 crc kubenswrapper[4865]: I0216 22:46:03.469719 
4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:03 crc kubenswrapper[4865]: I0216 22:46:03.472249 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"c16206ebdb465c684bdb96390345673597700b4166d0a59289dd67034aea952b"} Feb 16 22:46:03 crc kubenswrapper[4865]: I0216 22:46:03.472311 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 22:46:03 crc kubenswrapper[4865]: I0216 22:46:03.473361 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:03 crc kubenswrapper[4865]: I0216 22:46:03.473386 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:03 crc kubenswrapper[4865]: I0216 22:46:03.473397 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:03 crc kubenswrapper[4865]: I0216 22:46:03.475984 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"1757b170055a16e65276f524bda421fa929e62866ebe19163d6558e551f6d7ef"} Feb 16 22:46:03 crc kubenswrapper[4865]: I0216 22:46:03.476013 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"be54b2a9b6551641dce2e8e886f04d8a9cf76399a46d6ba0533bfcf766453631"} Feb 16 22:46:03 crc kubenswrapper[4865]: I0216 22:46:03.476026 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"416e6ac61265959d9c4895987503c390040a6d4ecdf12c0530ded6fc721bfaa4"} Feb 16 22:46:03 crc kubenswrapper[4865]: I0216 22:46:03.476395 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 22:46:03 crc kubenswrapper[4865]: I0216 22:46:03.477422 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:03 crc kubenswrapper[4865]: I0216 22:46:03.477456 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:03 crc kubenswrapper[4865]: I0216 22:46:03.477467 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:03 crc kubenswrapper[4865]: I0216 22:46:03.480587 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ca218b91fbbe3033c021358958b45efb93f3c1875a2d18446d682ad80ff55122"} Feb 16 22:46:03 crc kubenswrapper[4865]: I0216 22:46:03.480634 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"39f74aeafd4eb42c1fa802e03dd3df19fc25c3e5546e453008c167955d336ca2"} Feb 16 22:46:03 crc kubenswrapper[4865]: I0216 22:46:03.480680 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 22:46:03 crc kubenswrapper[4865]: I0216 22:46:03.481588 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:03 crc kubenswrapper[4865]: I0216 22:46:03.481628 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 16 22:46:03 crc kubenswrapper[4865]: I0216 22:46:03.481640 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:03 crc kubenswrapper[4865]: I0216 22:46:03.632342 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 22:46:03 crc kubenswrapper[4865]: I0216 22:46:03.633697 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:03 crc kubenswrapper[4865]: I0216 22:46:03.633751 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:03 crc kubenswrapper[4865]: I0216 22:46:03.634216 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:03 crc kubenswrapper[4865]: I0216 22:46:03.634270 4865 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 22:46:03 crc kubenswrapper[4865]: E0216 22:46:03.634881 4865 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.53:6443: connect: connection refused" node="crc" Feb 16 22:46:04 crc kubenswrapper[4865]: I0216 22:46:04.358158 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 18:56:45.58518322 +0000 UTC Feb 16 22:46:04 crc kubenswrapper[4865]: I0216 22:46:04.490889 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"767fe939b17ede08600bad99c1a6af42a5e2a0b0e90baf17523483dfd1768b29"} Feb 16 22:46:04 crc kubenswrapper[4865]: I0216 22:46:04.491039 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Feb 16 22:46:04 crc kubenswrapper[4865]: I0216 22:46:04.492275 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:04 crc kubenswrapper[4865]: I0216 22:46:04.492343 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:04 crc kubenswrapper[4865]: I0216 22:46:04.492360 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:04 crc kubenswrapper[4865]: I0216 22:46:04.495505 4865 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="1b430fa6cb64d35dc332dcf03f7e1318262168d70dd91222558433934c2b509c" exitCode=0 Feb 16 22:46:04 crc kubenswrapper[4865]: I0216 22:46:04.495623 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 22:46:04 crc kubenswrapper[4865]: I0216 22:46:04.495673 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 22:46:04 crc kubenswrapper[4865]: I0216 22:46:04.495711 4865 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 22:46:04 crc kubenswrapper[4865]: I0216 22:46:04.495775 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 22:46:04 crc kubenswrapper[4865]: I0216 22:46:04.495701 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"1b430fa6cb64d35dc332dcf03f7e1318262168d70dd91222558433934c2b509c"} Feb 16 22:46:04 crc kubenswrapper[4865]: I0216 22:46:04.496235 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 22:46:04 crc kubenswrapper[4865]: I0216 22:46:04.497700 4865 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:04 crc kubenswrapper[4865]: I0216 22:46:04.497743 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:04 crc kubenswrapper[4865]: I0216 22:46:04.497752 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:04 crc kubenswrapper[4865]: I0216 22:46:04.497794 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:04 crc kubenswrapper[4865]: I0216 22:46:04.497807 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:04 crc kubenswrapper[4865]: I0216 22:46:04.497764 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:04 crc kubenswrapper[4865]: I0216 22:46:04.497711 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:04 crc kubenswrapper[4865]: I0216 22:46:04.498913 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:04 crc kubenswrapper[4865]: I0216 22:46:04.498956 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:04 crc kubenswrapper[4865]: I0216 22:46:04.497765 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:04 crc kubenswrapper[4865]: I0216 22:46:04.499520 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:04 crc kubenswrapper[4865]: I0216 22:46:04.499555 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:04 crc kubenswrapper[4865]: I0216 
22:46:04.953517 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 22:46:05 crc kubenswrapper[4865]: I0216 22:46:05.359106 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 16:40:04.314075547 +0000 UTC Feb 16 22:46:05 crc kubenswrapper[4865]: I0216 22:46:05.504079 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"afa4031a038b9191a1af1b2cc5e4a51bcab1a6e1f70798430bbe376fee33c978"} Feb 16 22:46:05 crc kubenswrapper[4865]: I0216 22:46:05.504146 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ab4b2349f8a4b72e9e725382e6ce2cf9b93cb93371720ba5414fcd73b72341de"} Feb 16 22:46:05 crc kubenswrapper[4865]: I0216 22:46:05.504166 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ad68fe303b9a7cd9a173de2ccc816d892fa8c73978c130e6a1cd3d20aa22e528"} Feb 16 22:46:05 crc kubenswrapper[4865]: I0216 22:46:05.504241 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 22:46:05 crc kubenswrapper[4865]: I0216 22:46:05.504333 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 22:46:05 crc kubenswrapper[4865]: I0216 22:46:05.504347 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 22:46:05 crc kubenswrapper[4865]: I0216 22:46:05.505981 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:05 crc 
kubenswrapper[4865]: I0216 22:46:05.506001 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:05 crc kubenswrapper[4865]: I0216 22:46:05.506033 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:05 crc kubenswrapper[4865]: I0216 22:46:05.506042 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:05 crc kubenswrapper[4865]: I0216 22:46:05.506053 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:05 crc kubenswrapper[4865]: I0216 22:46:05.506062 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:05 crc kubenswrapper[4865]: I0216 22:46:05.686928 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 22:46:05 crc kubenswrapper[4865]: I0216 22:46:05.953418 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 22:46:06 crc kubenswrapper[4865]: I0216 22:46:06.054611 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 22:46:06 crc kubenswrapper[4865]: I0216 22:46:06.054810 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 22:46:06 crc kubenswrapper[4865]: I0216 22:46:06.056533 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:06 crc kubenswrapper[4865]: I0216 22:46:06.056608 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:06 crc kubenswrapper[4865]: I0216 22:46:06.056628 4865 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:06 crc kubenswrapper[4865]: I0216 22:46:06.359660 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 17:21:12.692333272 +0000 UTC Feb 16 22:46:06 crc kubenswrapper[4865]: I0216 22:46:06.513726 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 22:46:06 crc kubenswrapper[4865]: I0216 22:46:06.513719 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b068f989f925e48014564c0dfaba6f0601005c96870ce62e4bae58443f89f8a6"} Feb 16 22:46:06 crc kubenswrapper[4865]: I0216 22:46:06.513871 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7b495dc2ac9333d830c62028fb949d15bb449bcf006b16e0e5e78428ba57b45b"} Feb 16 22:46:06 crc kubenswrapper[4865]: I0216 22:46:06.513742 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 22:46:06 crc kubenswrapper[4865]: I0216 22:46:06.515428 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:06 crc kubenswrapper[4865]: I0216 22:46:06.515471 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:06 crc kubenswrapper[4865]: I0216 22:46:06.515480 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:06 crc kubenswrapper[4865]: I0216 22:46:06.515588 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:06 crc kubenswrapper[4865]: 
I0216 22:46:06.515615 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:06 crc kubenswrapper[4865]: I0216 22:46:06.515628 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:06 crc kubenswrapper[4865]: I0216 22:46:06.618834 4865 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 16 22:46:06 crc kubenswrapper[4865]: I0216 22:46:06.835442 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 22:46:06 crc kubenswrapper[4865]: I0216 22:46:06.836965 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:06 crc kubenswrapper[4865]: I0216 22:46:06.837032 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:06 crc kubenswrapper[4865]: I0216 22:46:06.837044 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:06 crc kubenswrapper[4865]: I0216 22:46:06.837080 4865 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 22:46:07 crc kubenswrapper[4865]: I0216 22:46:07.359937 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 21:54:02.767592351 +0000 UTC Feb 16 22:46:07 crc kubenswrapper[4865]: I0216 22:46:07.517063 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 22:46:07 crc kubenswrapper[4865]: I0216 22:46:07.517083 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 22:46:07 crc kubenswrapper[4865]: I0216 22:46:07.518762 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 16 22:46:07 crc kubenswrapper[4865]: I0216 22:46:07.518807 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:07 crc kubenswrapper[4865]: I0216 22:46:07.518823 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:07 crc kubenswrapper[4865]: I0216 22:46:07.518945 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:07 crc kubenswrapper[4865]: I0216 22:46:07.519012 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:07 crc kubenswrapper[4865]: I0216 22:46:07.519035 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:08 crc kubenswrapper[4865]: I0216 22:46:08.361407 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 01:51:25.388366 +0000 UTC Feb 16 22:46:09 crc kubenswrapper[4865]: I0216 22:46:09.026602 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 16 22:46:09 crc kubenswrapper[4865]: I0216 22:46:09.026937 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 22:46:09 crc kubenswrapper[4865]: I0216 22:46:09.032716 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:09 crc kubenswrapper[4865]: I0216 22:46:09.032799 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:09 crc kubenswrapper[4865]: I0216 22:46:09.032823 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:09 
crc kubenswrapper[4865]: I0216 22:46:09.362405 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 06:49:23.134449182 +0000 UTC Feb 16 22:46:09 crc kubenswrapper[4865]: I0216 22:46:09.652461 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 22:46:09 crc kubenswrapper[4865]: I0216 22:46:09.652771 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 22:46:09 crc kubenswrapper[4865]: I0216 22:46:09.654456 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:09 crc kubenswrapper[4865]: I0216 22:46:09.654529 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:09 crc kubenswrapper[4865]: I0216 22:46:09.654569 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:09 crc kubenswrapper[4865]: I0216 22:46:09.857399 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 16 22:46:09 crc kubenswrapper[4865]: I0216 22:46:09.857653 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 22:46:09 crc kubenswrapper[4865]: I0216 22:46:09.859641 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:09 crc kubenswrapper[4865]: I0216 22:46:09.859721 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:09 crc kubenswrapper[4865]: I0216 22:46:09.859745 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:10 crc 
kubenswrapper[4865]: I0216 22:46:10.357610 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 22:46:10 crc kubenswrapper[4865]: I0216 22:46:10.362847 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 08:34:59.424648858 +0000 UTC Feb 16 22:46:10 crc kubenswrapper[4865]: I0216 22:46:10.524806 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 22:46:10 crc kubenswrapper[4865]: I0216 22:46:10.526373 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:10 crc kubenswrapper[4865]: I0216 22:46:10.526433 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:10 crc kubenswrapper[4865]: I0216 22:46:10.526460 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:10 crc kubenswrapper[4865]: E0216 22:46:10.528230 4865 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 16 22:46:11 crc kubenswrapper[4865]: I0216 22:46:11.363934 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 05:16:23.67757129 +0000 UTC Feb 16 22:46:11 crc kubenswrapper[4865]: I0216 22:46:11.931744 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 22:46:11 crc kubenswrapper[4865]: I0216 22:46:11.931889 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 22:46:11 crc kubenswrapper[4865]: I0216 22:46:11.933069 4865 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:11 crc kubenswrapper[4865]: I0216 22:46:11.933099 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:11 crc kubenswrapper[4865]: I0216 22:46:11.933107 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:11 crc kubenswrapper[4865]: I0216 22:46:11.937788 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 22:46:12 crc kubenswrapper[4865]: I0216 22:46:12.364623 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 03:14:48.898342332 +0000 UTC Feb 16 22:46:12 crc kubenswrapper[4865]: I0216 22:46:12.531825 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 22:46:12 crc kubenswrapper[4865]: I0216 22:46:12.533347 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:12 crc kubenswrapper[4865]: I0216 22:46:12.533422 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:12 crc kubenswrapper[4865]: I0216 22:46:12.533445 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:12 crc kubenswrapper[4865]: I0216 22:46:12.539112 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 22:46:12 crc kubenswrapper[4865]: I0216 22:46:12.653159 4865 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: 
Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 22:46:12 crc kubenswrapper[4865]: I0216 22:46:12.653314 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 22:46:13 crc kubenswrapper[4865]: I0216 22:46:13.364989 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 19:46:01.356429742 +0000 UTC Feb 16 22:46:13 crc kubenswrapper[4865]: I0216 22:46:13.535091 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 22:46:13 crc kubenswrapper[4865]: I0216 22:46:13.536554 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:13 crc kubenswrapper[4865]: I0216 22:46:13.536603 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:13 crc kubenswrapper[4865]: I0216 22:46:13.536627 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:14 crc kubenswrapper[4865]: I0216 22:46:14.237732 4865 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Feb 16 22:46:14 crc kubenswrapper[4865]: I0216 22:46:14.238041 4865 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Feb 16 22:46:14 crc kubenswrapper[4865]: I0216 22:46:14.351014 4865 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Feb 16 22:46:14 crc kubenswrapper[4865]: I0216 22:46:14.365178 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 22:46:22.056631137 +0000 UTC Feb 16 22:46:14 crc kubenswrapper[4865]: W0216 22:46:14.462913 4865 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 16 22:46:14 crc kubenswrapper[4865]: I0216 22:46:14.463051 4865 trace.go:236] Trace[510151462]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (16-Feb-2026 22:46:04.461) (total time: 10002ms): Feb 16 22:46:14 crc kubenswrapper[4865]: Trace[510151462]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (22:46:14.462) Feb 16 22:46:14 crc kubenswrapper[4865]: Trace[510151462]: [10.002001933s] [10.002001933s] END Feb 16 22:46:14 crc kubenswrapper[4865]: E0216 22:46:14.463088 4865 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": 
net/http: TLS handshake timeout" logger="UnhandledError" Feb 16 22:46:14 crc kubenswrapper[4865]: I0216 22:46:14.540650 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 16 22:46:14 crc kubenswrapper[4865]: I0216 22:46:14.543618 4865 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="767fe939b17ede08600bad99c1a6af42a5e2a0b0e90baf17523483dfd1768b29" exitCode=255 Feb 16 22:46:14 crc kubenswrapper[4865]: I0216 22:46:14.543663 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"767fe939b17ede08600bad99c1a6af42a5e2a0b0e90baf17523483dfd1768b29"} Feb 16 22:46:14 crc kubenswrapper[4865]: I0216 22:46:14.543828 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 22:46:14 crc kubenswrapper[4865]: I0216 22:46:14.544754 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:14 crc kubenswrapper[4865]: I0216 22:46:14.544819 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:14 crc kubenswrapper[4865]: I0216 22:46:14.544846 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:14 crc kubenswrapper[4865]: I0216 22:46:14.545843 4865 scope.go:117] "RemoveContainer" containerID="767fe939b17ede08600bad99c1a6af42a5e2a0b0e90baf17523483dfd1768b29" Feb 16 22:46:14 crc kubenswrapper[4865]: I0216 22:46:14.654982 4865 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" 
start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 16 22:46:14 crc kubenswrapper[4865]: I0216 22:46:14.655070 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 16 22:46:14 crc kubenswrapper[4865]: I0216 22:46:14.663095 4865 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found]","reason":"Forbidden","details":{},"code":403} Feb 16 22:46:14 crc kubenswrapper[4865]: I0216 22:46:14.663175 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 16 22:46:15 crc kubenswrapper[4865]: I0216 22:46:15.366203 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 19:16:43.722990728 +0000 UTC Feb 16 22:46:15 crc kubenswrapper[4865]: I0216 22:46:15.548970 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 16 22:46:15 crc kubenswrapper[4865]: 
I0216 22:46:15.551223 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e74a443b63735422bd0f5ad851213f393074c4bda570e41ff102c84aa9ac2929"} Feb 16 22:46:15 crc kubenswrapper[4865]: I0216 22:46:15.551484 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 22:46:15 crc kubenswrapper[4865]: I0216 22:46:15.552555 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:15 crc kubenswrapper[4865]: I0216 22:46:15.552628 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:15 crc kubenswrapper[4865]: I0216 22:46:15.552654 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:15 crc kubenswrapper[4865]: I0216 22:46:15.958565 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 22:46:16 crc kubenswrapper[4865]: I0216 22:46:16.367432 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 19:05:39.591821606 +0000 UTC Feb 16 22:46:16 crc kubenswrapper[4865]: I0216 22:46:16.554636 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 22:46:16 crc kubenswrapper[4865]: I0216 22:46:16.554823 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 22:46:16 crc kubenswrapper[4865]: I0216 22:46:16.556487 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:16 crc kubenswrapper[4865]: I0216 22:46:16.556546 4865 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:16 crc kubenswrapper[4865]: I0216 22:46:16.556565 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:16 crc kubenswrapper[4865]: I0216 22:46:16.561887 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 22:46:17 crc kubenswrapper[4865]: I0216 22:46:17.367766 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 05:18:13.447207899 +0000 UTC Feb 16 22:46:17 crc kubenswrapper[4865]: I0216 22:46:17.557184 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 22:46:17 crc kubenswrapper[4865]: I0216 22:46:17.558131 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:17 crc kubenswrapper[4865]: I0216 22:46:17.558194 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:17 crc kubenswrapper[4865]: I0216 22:46:17.558212 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:18 crc kubenswrapper[4865]: I0216 22:46:18.368610 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 15:39:38.920400184 +0000 UTC Feb 16 22:46:18 crc kubenswrapper[4865]: I0216 22:46:18.559255 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 22:46:18 crc kubenswrapper[4865]: I0216 22:46:18.560168 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:18 crc 
kubenswrapper[4865]: I0216 22:46:18.560203 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:18 crc kubenswrapper[4865]: I0216 22:46:18.560216 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:19 crc kubenswrapper[4865]: I0216 22:46:19.064189 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 16 22:46:19 crc kubenswrapper[4865]: I0216 22:46:19.064488 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 22:46:19 crc kubenswrapper[4865]: I0216 22:46:19.066318 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:19 crc kubenswrapper[4865]: I0216 22:46:19.066406 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:19 crc kubenswrapper[4865]: I0216 22:46:19.066458 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:19 crc kubenswrapper[4865]: I0216 22:46:19.086446 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 16 22:46:19 crc kubenswrapper[4865]: I0216 22:46:19.369819 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 23:09:38.830080314 +0000 UTC Feb 16 22:46:19 crc kubenswrapper[4865]: I0216 22:46:19.562410 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 22:46:19 crc kubenswrapper[4865]: I0216 22:46:19.563368 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:19 crc kubenswrapper[4865]: I0216 22:46:19.563432 4865 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:19 crc kubenswrapper[4865]: I0216 22:46:19.563445 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:19 crc kubenswrapper[4865]: I0216 22:46:19.618127 4865 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 16 22:46:19 crc kubenswrapper[4865]: E0216 22:46:19.630750 4865 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Feb 16 22:46:19 crc kubenswrapper[4865]: I0216 22:46:19.633007 4865 trace.go:236] Trace[1267812038]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (16-Feb-2026 22:46:07.854) (total time: 11778ms): Feb 16 22:46:19 crc kubenswrapper[4865]: Trace[1267812038]: ---"Objects listed" error: 11778ms (22:46:19.632) Feb 16 22:46:19 crc kubenswrapper[4865]: Trace[1267812038]: [11.778791563s] [11.778791563s] END Feb 16 22:46:19 crc kubenswrapper[4865]: I0216 22:46:19.633032 4865 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 16 22:46:19 crc kubenswrapper[4865]: I0216 22:46:19.633256 4865 trace.go:236] Trace[177862787]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (16-Feb-2026 22:46:08.445) (total time: 11187ms): Feb 16 22:46:19 crc kubenswrapper[4865]: Trace[177862787]: ---"Objects listed" error: 11187ms (22:46:19.633) Feb 16 22:46:19 crc kubenswrapper[4865]: Trace[177862787]: [11.187255235s] [11.187255235s] END Feb 16 22:46:19 crc kubenswrapper[4865]: I0216 22:46:19.633301 4865 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 16 22:46:19 crc kubenswrapper[4865]: E0216 22:46:19.634723 4865 
kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Feb 16 22:46:19 crc kubenswrapper[4865]: I0216 22:46:19.636601 4865 trace.go:236] Trace[1420849375]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (16-Feb-2026 22:46:04.715) (total time: 14921ms): Feb 16 22:46:19 crc kubenswrapper[4865]: Trace[1420849375]: ---"Objects listed" error: 14920ms (22:46:19.636) Feb 16 22:46:19 crc kubenswrapper[4865]: Trace[1420849375]: [14.921014652s] [14.921014652s] END Feb 16 22:46:19 crc kubenswrapper[4865]: I0216 22:46:19.636628 4865 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 16 22:46:19 crc kubenswrapper[4865]: I0216 22:46:19.639109 4865 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 16 22:46:19 crc kubenswrapper[4865]: I0216 22:46:19.652103 4865 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 16 22:46:19 crc kubenswrapper[4865]: I0216 22:46:19.680632 4865 csr.go:261] certificate signing request csr-vrzp5 is approved, waiting to be issued Feb 16 22:46:19 crc kubenswrapper[4865]: I0216 22:46:19.688877 4865 csr.go:257] certificate signing request csr-vrzp5 is issued Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.193953 4865 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 16 22:46:20 crc kubenswrapper[4865]: W0216 22:46:20.194331 4865 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 16 22:46:20 crc kubenswrapper[4865]: W0216 22:46:20.194381 4865 reflector.go:484] 
k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 16 22:46:20 crc kubenswrapper[4865]: W0216 22:46:20.194400 4865 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 16 22:46:20 crc kubenswrapper[4865]: W0216 22:46:20.194411 4865 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 16 22:46:20 crc kubenswrapper[4865]: E0216 22:46:20.194568 4865 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/events\": read tcp 38.102.83.53:54580->38.102.83.53:6443: use of closed network connection" event="&Event{ObjectMeta:{kube-controller-manager-crc.1894db8f4297f538 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 22:46:01.807508792 +0000 UTC m=+2.131215793,LastTimestamp:2026-02-16 22:46:01.807508792 +0000 UTC m=+2.131215793,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.343486 
4865 apiserver.go:52] "Watching apiserver" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.349411 4865 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.349728 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-dns/node-resolver-rhf5k","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf"] Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.350167 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.350342 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.350392 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 22:46:20 crc kubenswrapper[4865]: E0216 22:46:20.350444 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.350561 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 22:46:20 crc kubenswrapper[4865]: E0216 22:46:20.350622 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.350565 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.350729 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-rhf5k" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.350733 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 22:46:20 crc kubenswrapper[4865]: E0216 22:46:20.350846 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.352068 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.352256 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.352296 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.352626 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.352846 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.354864 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.355028 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.355328 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.355428 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.355968 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 16 22:46:20 crc kubenswrapper[4865]: 
I0216 22:46:20.356128 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.356327 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.364492 4865 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.370254 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 13:02:46.715549437 +0000 UTC Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.373884 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.393684 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.404754 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.414324 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.424233 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.434116 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.442555 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.442600 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.442809 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.442845 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod 
\"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.442869 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.442887 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.442905 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.442925 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.442945 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.442966 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.442985 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.443003 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.443020 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.443039 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.443062 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: 
\"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.443080 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.443101 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.443120 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.443141 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.443158 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.443177 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" 
(UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.443197 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.443214 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.443235 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.443252 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.443271 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 16 22:46:20 crc 
kubenswrapper[4865]: I0216 22:46:20.443307 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.443345 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.443365 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.443405 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.443422 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.443496 4865 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.443514 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.443531 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.443549 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.443579 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.443598 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: 
\"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.443699 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.443720 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.443737 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.443754 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.443798 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.443817 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.443839 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.443854 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.443871 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.443886 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.443903 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 16 
22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.443920 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.443936 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.443951 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.443967 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.443984 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.444000 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.444018 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.444035 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.444056 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.444073 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.444091 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: 
\"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.444109 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.444125 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.444141 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.444156 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.444199 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.444217 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" 
(UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.444232 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.444248 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.444264 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.444294 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.443429 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). 
InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.443549 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.443611 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.443648 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.443798 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.444395 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.444441 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.443792 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.444462 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.443916 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.444010 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.444515 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.444027 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.444035 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.444305 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: E0216 22:46:20.444619 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 22:46:20.944600486 +0000 UTC m=+21.268307447 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.444630 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.444314 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.444653 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.444701 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.444705 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.444736 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.444778 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.444818 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.444848 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 16 
22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.444875 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.444894 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.444900 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.444931 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.444945 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.444979 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.445003 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.445027 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.445051 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.445063 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.445079 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.445102 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.445106 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.445123 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.445138 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.445134 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.445149 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.445191 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.445213 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.445234 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.445253 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.445271 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.445313 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.445329 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.445340 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.445347 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.445381 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.445389 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.445429 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.445446 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.445453 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.445499 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.445521 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.445543 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.445564 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.445580 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.445599 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.445613 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.445630 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.445645 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.445661 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 
22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.445677 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.445693 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.445712 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.445727 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.445742 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.445760 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.445781 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.445796 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.445814 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.445830 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.445853 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod 
\"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.445870 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.445885 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.445901 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.445916 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.445932 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.445948 4865 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.445965 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.445984 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.446001 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.446017 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.446033 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod 
\"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.446049 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.446067 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.446085 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.446102 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.446120 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.446135 4865 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.446150 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.446166 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.446185 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.446200 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.446218 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" 
(UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.446234 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.446254 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.446271 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.446304 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.446319 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.446338 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.446354 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.446370 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.446386 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.446404 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.446421 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 16 
22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.446440 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.446456 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.446474 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.446492 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.446508 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.446524 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.446540 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.446556 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.446572 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.446590 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.446606 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 
22:46:20.446621 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.446636 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.446652 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.446668 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.446684 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.446701 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: 
\"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.446726 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.446749 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.446768 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.446787 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.446804 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 16 22:46:20 
crc kubenswrapper[4865]: I0216 22:46:20.446820 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.446836 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.446852 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.446868 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.446890 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.446906 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: 
\"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.446921 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.446937 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.446952 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.446969 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.446985 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 16 22:46:20 crc 
kubenswrapper[4865]: I0216 22:46:20.447002 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.447017 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.447034 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.447050 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.447067 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.447088 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: 
\"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.447107 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.447123 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.447142 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.447157 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.447176 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 
22:46:20.447192 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.447230 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.447248 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.447305 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.447336 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.447361 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.447387 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42pjp\" (UniqueName: \"kubernetes.io/projected/ec3dce5a-cc80-4a4a-afa1-1ce88585fe7a-kube-api-access-42pjp\") pod \"node-resolver-rhf5k\" (UID: \"ec3dce5a-cc80-4a4a-afa1-1ce88585fe7a\") " pod="openshift-dns/node-resolver-rhf5k" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.447412 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.447456 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.447483 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 22:46:20 crc 
kubenswrapper[4865]: I0216 22:46:20.457422 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.458045 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.458091 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.458127 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.458164 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: 
\"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.458190 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.458223 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.458250 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.458330 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/ec3dce5a-cc80-4a4a-afa1-1ce88585fe7a-hosts-file\") pod \"node-resolver-rhf5k\" (UID: \"ec3dce5a-cc80-4a4a-afa1-1ce88585fe7a\") " pod="openshift-dns/node-resolver-rhf5k" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.458442 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc 
kubenswrapper[4865]: I0216 22:46:20.458465 4865 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.458482 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.458497 4865 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.458513 4865 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.458527 4865 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.458543 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.458560 4865 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.458574 4865 reconciler_common.go:293] "Volume detached for volume 
\"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.458594 4865 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.458611 4865 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.458626 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.458643 4865 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.458657 4865 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.458734 4865 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.458762 4865 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.458774 4865 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.458785 4865 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.458798 4865 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.458809 4865 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.458822 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.458838 4865 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.458852 4865 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: 
I0216 22:46:20.458866 4865 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.458881 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.458896 4865 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.458911 4865 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.458928 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.458943 4865 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.445520 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.445531 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.445556 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.445683 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.445733 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.445792 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.445844 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.445912 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.445944 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.446025 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.446076 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.446156 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.446229 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.446318 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.446424 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.446554 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.446665 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.446700 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.447251 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.447343 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.447479 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.447892 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.448068 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.448371 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.448688 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.448797 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.449500 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.449587 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.449627 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.450091 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.450890 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.451506 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.451677 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.451692 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.451695 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.452452 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.454398 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.454438 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.454781 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.455510 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.460764 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.456291 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.458090 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.458143 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.458259 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.458628 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.458908 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.458998 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.458743 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.459217 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.459468 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.459470 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.459673 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.459764 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: E0216 22:46:20.459924 4865 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 22:46:20 crc kubenswrapper[4865]: E0216 22:46:20.459940 4865 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.460701 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: E0216 22:46:20.461107 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 22:46:20.961084176 +0000 UTC m=+21.284791147 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.461268 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 22:46:20 crc kubenswrapper[4865]: E0216 22:46:20.461907 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 22:46:20.961885209 +0000 UTC m=+21.285592170 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.462172 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.462444 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.462491 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rhf5k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec3dce5a-cc80-4a4a-afa1-1ce88585fe7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42pjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rhf5k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.462512 4865 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.462553 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.462507 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.462615 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.463056 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.463135 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.463164 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.463355 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.463602 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.463751 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.463923 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.464228 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.464393 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.464513 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.464734 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.464979 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.464996 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.465525 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.465537 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.465767 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.466411 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.466701 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.467045 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.467308 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.467551 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.468061 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.468298 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.468410 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.468661 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.468702 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.468901 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.468981 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.469131 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.469484 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.469354 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.469530 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.469896 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.470252 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.470302 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.470478 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.470779 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.470837 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.470945 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.471425 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.471604 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.471698 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.471946 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.472075 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.472208 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.472220 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.472369 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.472530 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.472760 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.472784 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.472863 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.473040 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.473091 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.473126 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.473294 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.473446 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.473561 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.473598 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.473637 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.474094 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.474198 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 22:46:20 crc kubenswrapper[4865]: E0216 22:46:20.474566 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 16 22:46:20 crc kubenswrapper[4865]: E0216 22:46:20.474606 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 16 22:46:20 crc kubenswrapper[4865]: E0216 22:46:20.474626 4865 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 16 22:46:20 crc kubenswrapper[4865]: E0216 22:46:20.474700 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 22:46:20.974675523 +0000 UTC m=+21.298382494 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 16 22:46:20 crc kubenswrapper[4865]: E0216 22:46:20.475215 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 16 22:46:20 crc kubenswrapper[4865]: E0216 22:46:20.475239 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 16 22:46:20 crc kubenswrapper[4865]: E0216 22:46:20.475255 4865 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 16 22:46:20 crc kubenswrapper[4865]: E0216 22:46:20.475353 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 22:46:20.975338192 +0000 UTC m=+21.299045153 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.478758 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.478914 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.479304 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.479700 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.480178 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.484341 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.485631 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.485801 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.487545 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.489071 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.489495 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.489631 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.489961 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.490768 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.490965 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.511980 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.512026 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.511975 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.512210 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.512259 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.512300 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.512340 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.512426 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.512697 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.512780 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.512975 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.512983 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.513024 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.513034 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.513056 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.513064 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.513360 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.513387 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.513413 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.513432 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.513629 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.513766 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.514052 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.512560 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.512603 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.515604 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.516710 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.516747 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.516887 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.516911 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.517092 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.517110 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.517119 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.518383 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.523968 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.531437 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rhf5k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec3dce5a-cc80-4a4a-afa1-1ce88585fe7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42pjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rhf5k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.536587 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.538507 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.540594 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.544369 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.550448 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.558763 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.559484 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/ec3dce5a-cc80-4a4a-afa1-1ce88585fe7a-hosts-file\") pod \"node-resolver-rhf5k\" (UID: \"ec3dce5a-cc80-4a4a-afa1-1ce88585fe7a\") " pod="openshift-dns/node-resolver-rhf5k" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.559569 4865 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.559671 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42pjp\" (UniqueName: \"kubernetes.io/projected/ec3dce5a-cc80-4a4a-afa1-1ce88585fe7a-kube-api-access-42pjp\") pod \"node-resolver-rhf5k\" (UID: \"ec3dce5a-cc80-4a4a-afa1-1ce88585fe7a\") " pod="openshift-dns/node-resolver-rhf5k" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.559694 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.559754 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.559768 4865 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.559780 4865 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.559790 4865 reconciler_common.go:293] "Volume detached for volume 
\"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.559800 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.559810 4865 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.559822 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.559835 4865 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.559845 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.559857 4865 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.559868 4865 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.559881 4865 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.559892 4865 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.559903 4865 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.559912 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.559922 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.559933 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.559944 4865 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" 
Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.559956 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.559969 4865 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.559980 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.559990 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.560000 4865 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.560010 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.559934 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.560019 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.560079 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.560097 4865 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.560115 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.560131 4865 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.560145 4865 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.560159 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" 
DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.560172 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.560186 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.560199 4865 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.560214 4865 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.560228 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.560241 4865 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.560254 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.560270 
4865 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.560301 4865 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.560314 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.560330 4865 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.560344 4865 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.560358 4865 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.560370 4865 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.560384 4865 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.560399 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.560413 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.560427 4865 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.560441 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.560456 4865 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.560469 4865 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.560482 4865 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 
16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.560496 4865 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.560509 4865 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.560525 4865 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.560538 4865 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.560552 4865 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.560565 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.560577 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.560590 4865 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.560603 4865 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.560615 4865 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.560628 4865 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.560640 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.560653 4865 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.560666 4865 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.560678 4865 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" 
(UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.560694 4865 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.560706 4865 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.560721 4865 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.560734 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.560746 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.560759 4865 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.560772 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath 
\"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.560784 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.560796 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.560809 4865 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.560821 4865 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.560834 4865 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.560847 4865 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.560863 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc 
kubenswrapper[4865]: I0216 22:46:20.560876 4865 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.560888 4865 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.560900 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.560915 4865 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.560928 4865 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.560941 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.560954 4865 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.560968 4865 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.560981 4865 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.560994 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.561006 4865 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.561018 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.561031 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.561043 4865 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.561054 4865 reconciler_common.go:293] "Volume detached for volume 
\"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.561067 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.561079 4865 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.561090 4865 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.561102 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.561116 4865 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.561128 4865 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.561139 4865 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.561152 4865 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.561171 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.561184 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.561196 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.561209 4865 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.561222 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.561234 4865 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.561246 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.561258 4865 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.561271 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.561300 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.561313 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.561326 4865 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.561340 4865 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.561352 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.561366 4865 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.561380 4865 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.561393 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.561406 4865 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.561401 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/ec3dce5a-cc80-4a4a-afa1-1ce88585fe7a-hosts-file\") pod \"node-resolver-rhf5k\" (UID: \"ec3dce5a-cc80-4a4a-afa1-1ce88585fe7a\") " pod="openshift-dns/node-resolver-rhf5k" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.561420 4865 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" 
(UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.561852 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.561871 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.561881 4865 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.561933 4865 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.561943 4865 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.561953 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.561964 4865 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on 
node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.561992 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.564344 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.564362 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.564404 4865 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.563082 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.564421 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.564461 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: 
\"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.564474 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.564487 4865 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.564498 4865 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.564507 4865 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.564517 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.564527 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.564537 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" 
DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.564547 4865 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.564556 4865 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.564567 4865 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.564577 4865 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.564589 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.564600 4865 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.564612 4865 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.564621 4865 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.564631 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.564642 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.564653 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.564663 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.564671 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.564681 4865 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.564689 4865 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.564698 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.564706 4865 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.564715 4865 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.564724 4865 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.564734 4865 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.564743 4865 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.570316 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.589543 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42pjp\" (UniqueName: \"kubernetes.io/projected/ec3dce5a-cc80-4a4a-afa1-1ce88585fe7a-kube-api-access-42pjp\") pod \"node-resolver-rhf5k\" (UID: \"ec3dce5a-cc80-4a4a-afa1-1ce88585fe7a\") " pod="openshift-dns/node-resolver-rhf5k" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.659785 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-7sl6f"] Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.660421 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.662220 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.662291 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.662867 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.663265 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.663792 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.669207 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.671744 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.673482 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.680697 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-rhf5k" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.689462 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.689917 4865 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-16 22:41:19 +0000 UTC, rotation deadline is 2026-12-14 17:45:37.279558424 +0000 UTC Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.689984 4865 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7218h59m16.58957891s for next certificate rotation Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.694662 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.697636 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.702557 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rhf5k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec3dce5a-cc80-4a4a-afa1-1ce88585fe7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42pjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rhf5k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.712915 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.715312 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.730624 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 22:46:20 crc kubenswrapper[4865]: W0216 22:46:20.732355 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-0cf421e6a73dd25b2942d8756d261a404051101e555f616554ae9b6d97205687 WatchSource:0}: Error finding container 0cf421e6a73dd25b2942d8756d261a404051101e555f616554ae9b6d97205687: Status 404 returned error can't find the container with id 
0cf421e6a73dd25b2942d8756d261a404051101e555f616554ae9b6d97205687 Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.739143 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af5ee041-5763-4a28-9d12-7ba21bbb9dbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7sl6f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.749220 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.758015 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.766462 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/af5ee041-5763-4a28-9d12-7ba21bbb9dbc-proxy-tls\") pod \"machine-config-daemon-7sl6f\" (UID: \"af5ee041-5763-4a28-9d12-7ba21bbb9dbc\") " pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.766529 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: 
\"kubernetes.io/host-path/af5ee041-5763-4a28-9d12-7ba21bbb9dbc-rootfs\") pod \"machine-config-daemon-7sl6f\" (UID: \"af5ee041-5763-4a28-9d12-7ba21bbb9dbc\") " pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.766571 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/af5ee041-5763-4a28-9d12-7ba21bbb9dbc-mcd-auth-proxy-config\") pod \"machine-config-daemon-7sl6f\" (UID: \"af5ee041-5763-4a28-9d12-7ba21bbb9dbc\") " pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.766603 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nszz8\" (UniqueName: \"kubernetes.io/projected/af5ee041-5763-4a28-9d12-7ba21bbb9dbc-kube-api-access-nszz8\") pod \"machine-config-daemon-7sl6f\" (UID: \"af5ee041-5763-4a28-9d12-7ba21bbb9dbc\") " pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.771573 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af5ee041-5763-4a28-9d12-7ba21bbb9dbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7sl6f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.786338 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.808686 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.819465 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.849657 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.867369 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.868185 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/af5ee041-5763-4a28-9d12-7ba21bbb9dbc-rootfs\") pod \"machine-config-daemon-7sl6f\" (UID: \"af5ee041-5763-4a28-9d12-7ba21bbb9dbc\") " pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.868242 4865 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/af5ee041-5763-4a28-9d12-7ba21bbb9dbc-mcd-auth-proxy-config\") pod \"machine-config-daemon-7sl6f\" (UID: \"af5ee041-5763-4a28-9d12-7ba21bbb9dbc\") " pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.868293 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nszz8\" (UniqueName: \"kubernetes.io/projected/af5ee041-5763-4a28-9d12-7ba21bbb9dbc-kube-api-access-nszz8\") pod \"machine-config-daemon-7sl6f\" (UID: \"af5ee041-5763-4a28-9d12-7ba21bbb9dbc\") " pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.868318 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/af5ee041-5763-4a28-9d12-7ba21bbb9dbc-proxy-tls\") pod \"machine-config-daemon-7sl6f\" (UID: \"af5ee041-5763-4a28-9d12-7ba21bbb9dbc\") " pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.869705 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/af5ee041-5763-4a28-9d12-7ba21bbb9dbc-rootfs\") pod \"machine-config-daemon-7sl6f\" (UID: \"af5ee041-5763-4a28-9d12-7ba21bbb9dbc\") " pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.870154 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/af5ee041-5763-4a28-9d12-7ba21bbb9dbc-mcd-auth-proxy-config\") pod \"machine-config-daemon-7sl6f\" (UID: \"af5ee041-5763-4a28-9d12-7ba21bbb9dbc\") " pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.877777 4865 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/af5ee041-5763-4a28-9d12-7ba21bbb9dbc-proxy-tls\") pod \"machine-config-daemon-7sl6f\" (UID: \"af5ee041-5763-4a28-9d12-7ba21bbb9dbc\") " pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.891564 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.911759 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rhf5k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec3dce5a-cc80-4a4a-afa1-1ce88585fe7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42pjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rhf5k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.918674 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nszz8\" (UniqueName: \"kubernetes.io/projected/af5ee041-5763-4a28-9d12-7ba21bbb9dbc-kube-api-access-nszz8\") pod \"machine-config-daemon-7sl6f\" (UID: \"af5ee041-5763-4a28-9d12-7ba21bbb9dbc\") " pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.942003 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.970675 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.970768 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 22:46:20 crc kubenswrapper[4865]: I0216 22:46:20.970809 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" 
(UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 22:46:20 crc kubenswrapper[4865]: E0216 22:46:20.970890 4865 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 22:46:20 crc kubenswrapper[4865]: E0216 22:46:20.970945 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 22:46:21.970931846 +0000 UTC m=+22.294638807 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 22:46:20 crc kubenswrapper[4865]: E0216 22:46:20.971005 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 22:46:21.970982297 +0000 UTC m=+22.294689258 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:46:20 crc kubenswrapper[4865]: E0216 22:46:20.971014 4865 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 22:46:20 crc kubenswrapper[4865]: E0216 22:46:20.971068 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 22:46:21.971060819 +0000 UTC m=+22.294767780 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.003662 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.024577 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-v9gjl"] Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.025301 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-x4rgl"] Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.025514 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.026041 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-tqmsq"] Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.026236 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-tqmsq" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.026321 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-x4rgl" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.027701 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.028008 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.028101 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.028212 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.028555 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.028754 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.028861 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.029908 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.029958 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.030199 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.030364 4865 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.030374 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.030425 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.030531 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.040234 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5ed4f9f-4e35-4b99-a523-d9cd6f90b1fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c3c8e1511bfaab96c1e33d3e1d1dc227344cb92c64170dc40eef40341f9282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef4fbb630ee354131544dade7633424757a1a1b885fec2e271da25984dbec3ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f74aeafd4eb42c1fa802e03dd3df19fc25c3e5546e453008c167955d336ca2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca218b91fbbe3033c021358958b45efb93f3c1875a2d18446d682ad80ff55122\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.049299 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.057444 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.066447 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af5ee041-5763-4a28-9d12-7ba21bbb9dbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with 
unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7sl6f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.071578 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/518e6107-6873-4bd2-86a6-e422763483ec-hostroot\") pod \"multus-tqmsq\" (UID: \"518e6107-6873-4bd2-86a6-e422763483ec\") " pod="openshift-multus/multus-tqmsq" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.071624 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-systemd-units\") pod \"ovnkube-node-v9gjl\" (UID: \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.071667 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.071690 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-run-systemd\") pod \"ovnkube-node-v9gjl\" (UID: \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" Feb 16 22:46:21 crc 
kubenswrapper[4865]: I0216 22:46:21.071713 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-var-lib-openvswitch\") pod \"ovnkube-node-v9gjl\" (UID: \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.071738 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-run-ovn\") pod \"ovnkube-node-v9gjl\" (UID: \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" Feb 16 22:46:21 crc kubenswrapper[4865]: E0216 22:46:21.071772 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 22:46:21 crc kubenswrapper[4865]: E0216 22:46:21.071798 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 22:46:21 crc kubenswrapper[4865]: E0216 22:46:21.071811 4865 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.071807 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/518e6107-6873-4bd2-86a6-e422763483ec-host-var-lib-cni-multus\") pod \"multus-tqmsq\" (UID: \"518e6107-6873-4bd2-86a6-e422763483ec\") " 
pod="openshift-multus/multus-tqmsq" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.071839 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/518e6107-6873-4bd2-86a6-e422763483ec-multus-conf-dir\") pod \"multus-tqmsq\" (UID: \"518e6107-6873-4bd2-86a6-e422763483ec\") " pod="openshift-multus/multus-tqmsq" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.071862 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-host-run-netns\") pod \"ovnkube-node-v9gjl\" (UID: \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.071901 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 22:46:21 crc kubenswrapper[4865]: E0216 22:46:21.071961 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 22:46:22.071937956 +0000 UTC m=+22.395645027 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.072018 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/518e6107-6873-4bd2-86a6-e422763483ec-cni-binary-copy\") pod \"multus-tqmsq\" (UID: \"518e6107-6873-4bd2-86a6-e422763483ec\") " pod="openshift-multus/multus-tqmsq" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.072039 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/518e6107-6873-4bd2-86a6-e422763483ec-host-run-netns\") pod \"multus-tqmsq\" (UID: \"518e6107-6873-4bd2-86a6-e422763483ec\") " pod="openshift-multus/multus-tqmsq" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.072057 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/518e6107-6873-4bd2-86a6-e422763483ec-host-var-lib-cni-bin\") pod \"multus-tqmsq\" (UID: \"518e6107-6873-4bd2-86a6-e422763483ec\") " pod="openshift-multus/multus-tqmsq" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.072072 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/518e6107-6873-4bd2-86a6-e422763483ec-host-var-lib-kubelet\") pod \"multus-tqmsq\" (UID: \"518e6107-6873-4bd2-86a6-e422763483ec\") " pod="openshift-multus/multus-tqmsq" Feb 16 22:46:21 crc 
kubenswrapper[4865]: I0216 22:46:21.072086 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-run-openvswitch\") pod \"ovnkube-node-v9gjl\" (UID: \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" Feb 16 22:46:21 crc kubenswrapper[4865]: E0216 22:46:21.072084 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 22:46:21 crc kubenswrapper[4865]: E0216 22:46:21.072134 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.072133 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/518e6107-6873-4bd2-86a6-e422763483ec-multus-cni-dir\") pod \"multus-tqmsq\" (UID: \"518e6107-6873-4bd2-86a6-e422763483ec\") " pod="openshift-multus/multus-tqmsq" Feb 16 22:46:21 crc kubenswrapper[4865]: E0216 22:46:21.072148 4865 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.072161 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/518e6107-6873-4bd2-86a6-e422763483ec-host-run-multus-certs\") pod \"multus-tqmsq\" (UID: \"518e6107-6873-4bd2-86a6-e422763483ec\") " 
pod="openshift-multus/multus-tqmsq" Feb 16 22:46:21 crc kubenswrapper[4865]: E0216 22:46:21.072187 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 22:46:22.072174953 +0000 UTC m=+22.395881914 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.072204 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/518e6107-6873-4bd2-86a6-e422763483ec-etc-kubernetes\") pod \"multus-tqmsq\" (UID: \"518e6107-6873-4bd2-86a6-e422763483ec\") " pod="openshift-multus/multus-tqmsq" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.072225 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/518e6107-6873-4bd2-86a6-e422763483ec-os-release\") pod \"multus-tqmsq\" (UID: \"518e6107-6873-4bd2-86a6-e422763483ec\") " pod="openshift-multus/multus-tqmsq" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.072253 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/518e6107-6873-4bd2-86a6-e422763483ec-multus-socket-dir-parent\") pod \"multus-tqmsq\" (UID: \"518e6107-6873-4bd2-86a6-e422763483ec\") " pod="openshift-multus/multus-tqmsq" Feb 
16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.072270 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/518e6107-6873-4bd2-86a6-e422763483ec-system-cni-dir\") pod \"multus-tqmsq\" (UID: \"518e6107-6873-4bd2-86a6-e422763483ec\") " pod="openshift-multus/multus-tqmsq" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.072305 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-etc-openvswitch\") pod \"ovnkube-node-v9gjl\" (UID: \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.072323 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/518e6107-6873-4bd2-86a6-e422763483ec-multus-daemon-config\") pod \"multus-tqmsq\" (UID: \"518e6107-6873-4bd2-86a6-e422763483ec\") " pod="openshift-multus/multus-tqmsq" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.072357 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwt79\" (UniqueName: \"kubernetes.io/projected/518e6107-6873-4bd2-86a6-e422763483ec-kube-api-access-nwt79\") pod \"multus-tqmsq\" (UID: \"518e6107-6873-4bd2-86a6-e422763483ec\") " pod="openshift-multus/multus-tqmsq" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.072373 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-host-kubelet\") pod \"ovnkube-node-v9gjl\" (UID: \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" Feb 16 22:46:21 crc 
kubenswrapper[4865]: I0216 22:46:21.072396 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/518e6107-6873-4bd2-86a6-e422763483ec-host-run-k8s-cni-cncf-io\") pod \"multus-tqmsq\" (UID: \"518e6107-6873-4bd2-86a6-e422763483ec\") " pod="openshift-multus/multus-tqmsq" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.072409 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-host-slash\") pod \"ovnkube-node-v9gjl\" (UID: \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.072424 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-node-log\") pod \"ovnkube-node-v9gjl\" (UID: \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.072438 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-log-socket\") pod \"ovnkube-node-v9gjl\" (UID: \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.072459 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/518e6107-6873-4bd2-86a6-e422763483ec-cnibin\") pod \"multus-tqmsq\" (UID: \"518e6107-6873-4bd2-86a6-e422763483ec\") " pod="openshift-multus/multus-tqmsq" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.082882 4865 
status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v9gjl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.091876 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.101049 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rhf5k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec3dce5a-cc80-4a4a-afa1-1ce88585fe7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42pjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rhf5k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.110962 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.119965 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.134352 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.143575 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5ed4f9f-4e35-4b99-a523-d9cd6f90b1fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c3c8e1511bfaab96c1e33d3e1d1dc227344cb92c64170dc40eef40341f9282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef4fbb630ee354131544dade7633424757a1a1b885fec2e271da25984dbec3ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f74aeafd4eb42c1fa802e03dd3df19fc25c3e5546e453008c167955d336ca2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca218b91fbbe3033c021358958b45efb93f3c1875a2d18446d682ad80ff55122\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.152765 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.160180 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.167998 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af5ee041-5763-4a28-9d12-7ba21bbb9dbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7sl6f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.172858 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5-tuning-conf-dir\") pod \"multus-additional-cni-plugins-x4rgl\" (UID: \"fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5\") " pod="openshift-multus/multus-additional-cni-plugins-x4rgl" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.172921 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-run-systemd\") pod \"ovnkube-node-v9gjl\" (UID: \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.172951 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-ovn-node-metrics-cert\") pod \"ovnkube-node-v9gjl\" (UID: \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.172990 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-run-ovn\") pod \"ovnkube-node-v9gjl\" (UID: \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.173015 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-ovnkube-config\") pod \"ovnkube-node-v9gjl\" (UID: 
\"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.173040 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-var-lib-openvswitch\") pod \"ovnkube-node-v9gjl\" (UID: \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.173053 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-run-systemd\") pod \"ovnkube-node-v9gjl\" (UID: \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.173063 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/518e6107-6873-4bd2-86a6-e422763483ec-multus-conf-dir\") pod \"multus-tqmsq\" (UID: \"518e6107-6873-4bd2-86a6-e422763483ec\") " pod="openshift-multus/multus-tqmsq" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.173089 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-host-run-netns\") pod \"ovnkube-node-v9gjl\" (UID: \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.173122 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-host-run-ovn-kubernetes\") pod \"ovnkube-node-v9gjl\" (UID: \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.173164 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-var-lib-openvswitch\") pod \"ovnkube-node-v9gjl\" (UID: \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.173177 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-run-ovn\") pod \"ovnkube-node-v9gjl\" (UID: \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.173183 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-host-run-netns\") pod \"ovnkube-node-v9gjl\" (UID: \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.173207 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-ovnkube-script-lib\") pod \"ovnkube-node-v9gjl\" (UID: \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.173210 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/518e6107-6873-4bd2-86a6-e422763483ec-multus-conf-dir\") pod \"multus-tqmsq\" (UID: \"518e6107-6873-4bd2-86a6-e422763483ec\") " pod="openshift-multus/multus-tqmsq" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 
22:46:21.173258 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/518e6107-6873-4bd2-86a6-e422763483ec-host-var-lib-cni-multus\") pod \"multus-tqmsq\" (UID: \"518e6107-6873-4bd2-86a6-e422763483ec\") " pod="openshift-multus/multus-tqmsq" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.173303 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5-system-cni-dir\") pod \"multus-additional-cni-plugins-x4rgl\" (UID: \"fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5\") " pod="openshift-multus/multus-additional-cni-plugins-x4rgl" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.173310 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/518e6107-6873-4bd2-86a6-e422763483ec-host-var-lib-cni-multus\") pod \"multus-tqmsq\" (UID: \"518e6107-6873-4bd2-86a6-e422763483ec\") " pod="openshift-multus/multus-tqmsq" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.173340 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/518e6107-6873-4bd2-86a6-e422763483ec-cni-binary-copy\") pod \"multus-tqmsq\" (UID: \"518e6107-6873-4bd2-86a6-e422763483ec\") " pod="openshift-multus/multus-tqmsq" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.173428 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/518e6107-6873-4bd2-86a6-e422763483ec-host-run-netns\") pod \"multus-tqmsq\" (UID: \"518e6107-6873-4bd2-86a6-e422763483ec\") " pod="openshift-multus/multus-tqmsq" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.173478 4865 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-run-openvswitch\") pod \"ovnkube-node-v9gjl\" (UID: \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.173505 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-host-cni-bin\") pod \"ovnkube-node-v9gjl\" (UID: \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.173537 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-run-openvswitch\") pod \"ovnkube-node-v9gjl\" (UID: \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.173596 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/518e6107-6873-4bd2-86a6-e422763483ec-host-var-lib-cni-bin\") pod \"multus-tqmsq\" (UID: \"518e6107-6873-4bd2-86a6-e422763483ec\") " pod="openshift-multus/multus-tqmsq" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.173664 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/518e6107-6873-4bd2-86a6-e422763483ec-host-run-netns\") pod \"multus-tqmsq\" (UID: \"518e6107-6873-4bd2-86a6-e422763483ec\") " pod="openshift-multus/multus-tqmsq" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.173741 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/518e6107-6873-4bd2-86a6-e422763483ec-host-var-lib-kubelet\") pod \"multus-tqmsq\" (UID: \"518e6107-6873-4bd2-86a6-e422763483ec\") " pod="openshift-multus/multus-tqmsq" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.173783 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/518e6107-6873-4bd2-86a6-e422763483ec-multus-cni-dir\") pod \"multus-tqmsq\" (UID: \"518e6107-6873-4bd2-86a6-e422763483ec\") " pod="openshift-multus/multus-tqmsq" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.173837 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/518e6107-6873-4bd2-86a6-e422763483ec-host-run-multus-certs\") pod \"multus-tqmsq\" (UID: \"518e6107-6873-4bd2-86a6-e422763483ec\") " pod="openshift-multus/multus-tqmsq" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.173870 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-x4rgl\" (UID: \"fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5\") " pod="openshift-multus/multus-additional-cni-plugins-x4rgl" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.173907 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/518e6107-6873-4bd2-86a6-e422763483ec-cni-binary-copy\") pod \"multus-tqmsq\" (UID: \"518e6107-6873-4bd2-86a6-e422763483ec\") " pod="openshift-multus/multus-tqmsq" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.173915 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-v9gjl\" (UID: \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.173941 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5-cni-binary-copy\") pod \"multus-additional-cni-plugins-x4rgl\" (UID: \"fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5\") " pod="openshift-multus/multus-additional-cni-plugins-x4rgl" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.173981 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/518e6107-6873-4bd2-86a6-e422763483ec-host-var-lib-kubelet\") pod \"multus-tqmsq\" (UID: \"518e6107-6873-4bd2-86a6-e422763483ec\") " pod="openshift-multus/multus-tqmsq" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.174008 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/518e6107-6873-4bd2-86a6-e422763483ec-os-release\") pod \"multus-tqmsq\" (UID: \"518e6107-6873-4bd2-86a6-e422763483ec\") " pod="openshift-multus/multus-tqmsq" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.174029 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/518e6107-6873-4bd2-86a6-e422763483ec-etc-kubernetes\") pod \"multus-tqmsq\" (UID: \"518e6107-6873-4bd2-86a6-e422763483ec\") " pod="openshift-multus/multus-tqmsq" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.174051 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-host-cni-netd\") pod \"ovnkube-node-v9gjl\" (UID: \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.174090 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/518e6107-6873-4bd2-86a6-e422763483ec-multus-cni-dir\") pod \"multus-tqmsq\" (UID: \"518e6107-6873-4bd2-86a6-e422763483ec\") " pod="openshift-multus/multus-tqmsq" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.174111 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/518e6107-6873-4bd2-86a6-e422763483ec-multus-socket-dir-parent\") pod \"multus-tqmsq\" (UID: \"518e6107-6873-4bd2-86a6-e422763483ec\") " pod="openshift-multus/multus-tqmsq" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.174123 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/518e6107-6873-4bd2-86a6-e422763483ec-host-run-multus-certs\") pod \"multus-tqmsq\" (UID: \"518e6107-6873-4bd2-86a6-e422763483ec\") " pod="openshift-multus/multus-tqmsq" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.174161 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/518e6107-6873-4bd2-86a6-e422763483ec-multus-socket-dir-parent\") pod \"multus-tqmsq\" (UID: \"518e6107-6873-4bd2-86a6-e422763483ec\") " pod="openshift-multus/multus-tqmsq" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.174180 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/518e6107-6873-4bd2-86a6-e422763483ec-etc-kubernetes\") pod \"multus-tqmsq\" (UID: 
\"518e6107-6873-4bd2-86a6-e422763483ec\") " pod="openshift-multus/multus-tqmsq" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.174113 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/518e6107-6873-4bd2-86a6-e422763483ec-host-var-lib-cni-bin\") pod \"multus-tqmsq\" (UID: \"518e6107-6873-4bd2-86a6-e422763483ec\") " pod="openshift-multus/multus-tqmsq" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.174219 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/518e6107-6873-4bd2-86a6-e422763483ec-system-cni-dir\") pod \"multus-tqmsq\" (UID: \"518e6107-6873-4bd2-86a6-e422763483ec\") " pod="openshift-multus/multus-tqmsq" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.174247 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/518e6107-6873-4bd2-86a6-e422763483ec-multus-daemon-config\") pod \"multus-tqmsq\" (UID: \"518e6107-6873-4bd2-86a6-e422763483ec\") " pod="openshift-multus/multus-tqmsq" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.174293 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-etc-openvswitch\") pod \"ovnkube-node-v9gjl\" (UID: \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.174341 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/518e6107-6873-4bd2-86a6-e422763483ec-os-release\") pod \"multus-tqmsq\" (UID: \"518e6107-6873-4bd2-86a6-e422763483ec\") " pod="openshift-multus/multus-tqmsq" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.174383 4865 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-etc-openvswitch\") pod \"ovnkube-node-v9gjl\" (UID: \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.174419 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/518e6107-6873-4bd2-86a6-e422763483ec-system-cni-dir\") pod \"multus-tqmsq\" (UID: \"518e6107-6873-4bd2-86a6-e422763483ec\") " pod="openshift-multus/multus-tqmsq" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.174459 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwt79\" (UniqueName: \"kubernetes.io/projected/518e6107-6873-4bd2-86a6-e422763483ec-kube-api-access-nwt79\") pod \"multus-tqmsq\" (UID: \"518e6107-6873-4bd2-86a6-e422763483ec\") " pod="openshift-multus/multus-tqmsq" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.174506 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-host-kubelet\") pod \"ovnkube-node-v9gjl\" (UID: \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.174534 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-node-log\") pod \"ovnkube-node-v9gjl\" (UID: \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.174578 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-log-socket\") pod \"ovnkube-node-v9gjl\" (UID: \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.174603 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxp7s\" (UniqueName: \"kubernetes.io/projected/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-kube-api-access-fxp7s\") pod \"ovnkube-node-v9gjl\" (UID: \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.174625 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5-os-release\") pod \"multus-additional-cni-plugins-x4rgl\" (UID: \"fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5\") " pod="openshift-multus/multus-additional-cni-plugins-x4rgl" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.174651 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-node-log\") pod \"ovnkube-node-v9gjl\" (UID: \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.174682 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/518e6107-6873-4bd2-86a6-e422763483ec-multus-daemon-config\") pod \"multus-tqmsq\" (UID: \"518e6107-6873-4bd2-86a6-e422763483ec\") " pod="openshift-multus/multus-tqmsq" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.174724 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-host-kubelet\") pod \"ovnkube-node-v9gjl\" (UID: \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.174752 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-log-socket\") pod \"ovnkube-node-v9gjl\" (UID: \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.174784 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/518e6107-6873-4bd2-86a6-e422763483ec-cnibin\") pod \"multus-tqmsq\" (UID: \"518e6107-6873-4bd2-86a6-e422763483ec\") " pod="openshift-multus/multus-tqmsq" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.174814 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/518e6107-6873-4bd2-86a6-e422763483ec-host-run-k8s-cni-cncf-io\") pod \"multus-tqmsq\" (UID: \"518e6107-6873-4bd2-86a6-e422763483ec\") " pod="openshift-multus/multus-tqmsq" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.174830 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-host-slash\") pod \"ovnkube-node-v9gjl\" (UID: \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.174861 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9mg6\" (UniqueName: \"kubernetes.io/projected/fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5-kube-api-access-t9mg6\") pod 
\"multus-additional-cni-plugins-x4rgl\" (UID: \"fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5\") " pod="openshift-multus/multus-additional-cni-plugins-x4rgl" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.174882 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/518e6107-6873-4bd2-86a6-e422763483ec-hostroot\") pod \"multus-tqmsq\" (UID: \"518e6107-6873-4bd2-86a6-e422763483ec\") " pod="openshift-multus/multus-tqmsq" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.174907 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-systemd-units\") pod \"ovnkube-node-v9gjl\" (UID: \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.174925 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-env-overrides\") pod \"ovnkube-node-v9gjl\" (UID: \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.174941 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5-cnibin\") pod \"multus-additional-cni-plugins-x4rgl\" (UID: \"fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5\") " pod="openshift-multus/multus-additional-cni-plugins-x4rgl" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.174982 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/518e6107-6873-4bd2-86a6-e422763483ec-hostroot\") pod \"multus-tqmsq\" (UID: 
\"518e6107-6873-4bd2-86a6-e422763483ec\") " pod="openshift-multus/multus-tqmsq" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.175003 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-systemd-units\") pod \"ovnkube-node-v9gjl\" (UID: \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.175011 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/518e6107-6873-4bd2-86a6-e422763483ec-host-run-k8s-cni-cncf-io\") pod \"multus-tqmsq\" (UID: \"518e6107-6873-4bd2-86a6-e422763483ec\") " pod="openshift-multus/multus-tqmsq" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.175024 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-host-slash\") pod \"ovnkube-node-v9gjl\" (UID: \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.175043 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/518e6107-6873-4bd2-86a6-e422763483ec-cnibin\") pod \"multus-tqmsq\" (UID: \"518e6107-6873-4bd2-86a6-e422763483ec\") " pod="openshift-multus/multus-tqmsq" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.178064 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4rgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4rgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.187074 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.192827 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwt79\" (UniqueName: \"kubernetes.io/projected/518e6107-6873-4bd2-86a6-e422763483ec-kube-api-access-nwt79\") pod \"multus-tqmsq\" (UID: \"518e6107-6873-4bd2-86a6-e422763483ec\") " pod="openshift-multus/multus-tqmsq" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.194485 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rhf5k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec3dce5a-cc80-4a4a-afa1-1ce88585fe7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42pjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rhf5k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.211241 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v9gjl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.225163 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.257620 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.276379 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-host-cni-bin\") pod \"ovnkube-node-v9gjl\" (UID: \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.276429 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5-cni-sysctl-allowlist\") 
pod \"multus-additional-cni-plugins-x4rgl\" (UID: \"fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5\") " pod="openshift-multus/multus-additional-cni-plugins-x4rgl" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.276473 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-host-cni-netd\") pod \"ovnkube-node-v9gjl\" (UID: \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.276498 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-v9gjl\" (UID: \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.276529 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5-cni-binary-copy\") pod \"multus-additional-cni-plugins-x4rgl\" (UID: \"fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5\") " pod="openshift-multus/multus-additional-cni-plugins-x4rgl" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.276564 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxp7s\" (UniqueName: \"kubernetes.io/projected/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-kube-api-access-fxp7s\") pod \"ovnkube-node-v9gjl\" (UID: \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.276576 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-host-cni-netd\") pod \"ovnkube-node-v9gjl\" (UID: \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.276597 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-v9gjl\" (UID: \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.276551 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-host-cni-bin\") pod \"ovnkube-node-v9gjl\" (UID: \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.276589 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5-os-release\") pod \"multus-additional-cni-plugins-x4rgl\" (UID: \"fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5\") " pod="openshift-multus/multus-additional-cni-plugins-x4rgl" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.276706 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9mg6\" (UniqueName: \"kubernetes.io/projected/fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5-kube-api-access-t9mg6\") pod \"multus-additional-cni-plugins-x4rgl\" (UID: \"fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5\") " pod="openshift-multus/multus-additional-cni-plugins-x4rgl" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.276735 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" 
(UniqueName: \"kubernetes.io/configmap/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-env-overrides\") pod \"ovnkube-node-v9gjl\" (UID: \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.276761 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5-cnibin\") pod \"multus-additional-cni-plugins-x4rgl\" (UID: \"fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5\") " pod="openshift-multus/multus-additional-cni-plugins-x4rgl" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.276772 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5-os-release\") pod \"multus-additional-cni-plugins-x4rgl\" (UID: \"fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5\") " pod="openshift-multus/multus-additional-cni-plugins-x4rgl" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.276790 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5-tuning-conf-dir\") pod \"multus-additional-cni-plugins-x4rgl\" (UID: \"fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5\") " pod="openshift-multus/multus-additional-cni-plugins-x4rgl" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.276824 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5-cnibin\") pod \"multus-additional-cni-plugins-x4rgl\" (UID: \"fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5\") " pod="openshift-multus/multus-additional-cni-plugins-x4rgl" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.277421 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-ovn-node-metrics-cert\") pod \"ovnkube-node-v9gjl\" (UID: \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.277452 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5-cni-binary-copy\") pod \"multus-additional-cni-plugins-x4rgl\" (UID: \"fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5\") " pod="openshift-multus/multus-additional-cni-plugins-x4rgl" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.277451 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-ovnkube-config\") pod \"ovnkube-node-v9gjl\" (UID: \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.277480 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-x4rgl\" (UID: \"fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5\") " pod="openshift-multus/multus-additional-cni-plugins-x4rgl" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.277430 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-env-overrides\") pod \"ovnkube-node-v9gjl\" (UID: \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.277490 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-host-run-ovn-kubernetes\") pod \"ovnkube-node-v9gjl\" (UID: \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.277583 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-ovnkube-script-lib\") pod \"ovnkube-node-v9gjl\" (UID: \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.277513 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-host-run-ovn-kubernetes\") pod \"ovnkube-node-v9gjl\" (UID: \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.277632 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5-system-cni-dir\") pod \"multus-additional-cni-plugins-x4rgl\" (UID: \"fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5\") " pod="openshift-multus/multus-additional-cni-plugins-x4rgl" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.277658 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5-tuning-conf-dir\") pod \"multus-additional-cni-plugins-x4rgl\" (UID: \"fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5\") " pod="openshift-multus/multus-additional-cni-plugins-x4rgl" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.277706 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5-system-cni-dir\") pod \"multus-additional-cni-plugins-x4rgl\" (UID: \"fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5\") " pod="openshift-multus/multus-additional-cni-plugins-x4rgl" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.277948 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-ovnkube-config\") pod \"ovnkube-node-v9gjl\" (UID: \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.278055 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-ovnkube-script-lib\") pod \"ovnkube-node-v9gjl\" (UID: \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.293253 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.365636 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-tqmsq" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.366857 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-ovn-node-metrics-cert\") pod \"ovnkube-node-v9gjl\" (UID: \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.366901 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxp7s\" (UniqueName: \"kubernetes.io/projected/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-kube-api-access-fxp7s\") pod \"ovnkube-node-v9gjl\" (UID: \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.370763 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9mg6\" (UniqueName: \"kubernetes.io/projected/fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5-kube-api-access-t9mg6\") pod \"multus-additional-cni-plugins-x4rgl\" (UID: \"fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5\") " pod="openshift-multus/multus-additional-cni-plugins-x4rgl" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.370845 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 08:24:38.689218064 +0000 UTC Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.375601 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tqmsq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"518e6107-6873-4bd2-86a6-e422763483ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwt79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tqmsq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.393211 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-x4rgl" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.569520 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"43ff963e5cfa8f670697679ccaa8dc8af97216272a749dcd71e6ab9783b736ce"} Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.571809 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"580bc5f5c63d040c31c80d1c34c70711c035b80d8bd34546ba77e2477a938304"} Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.571861 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"0cf421e6a73dd25b2942d8756d261a404051101e555f616554ae9b6d97205687"} Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.575101 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"e5d9c6929ae494c038ff488c2fee47aad62d31a677069fab576505bc1c56c3ea"} Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.575140 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"cd10e8581ae8a5ec147d964363cf296c1ea48987e27edfaaf23a4f2cc3b8806c"} Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.575158 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"bc56e5e02d99a59e421725904e4c6a1031d6e3e21dddf5b6a733dda2d278a3be"} Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.576662 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.577026 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.578295 4865 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e74a443b63735422bd0f5ad851213f393074c4bda570e41ff102c84aa9ac2929" exitCode=255 Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.578349 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"e74a443b63735422bd0f5ad851213f393074c4bda570e41ff102c84aa9ac2929"} Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 
22:46:21.578403 4865 scope.go:117] "RemoveContainer" containerID="767fe939b17ede08600bad99c1a6af42a5e2a0b0e90baf17523483dfd1768b29" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.582532 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-x4rgl" event={"ID":"fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5","Type":"ContainerStarted","Data":"10ea8e5f0ce2c7f772ba35b7a11b0070c2303ce1976df248fe18a4460c67e17c"} Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.585873 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:21Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.589267 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.589555 4865 scope.go:117] "RemoveContainer" containerID="e74a443b63735422bd0f5ad851213f393074c4bda570e41ff102c84aa9ac2929" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.589575 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-tqmsq" event={"ID":"518e6107-6873-4bd2-86a6-e422763483ec","Type":"ContainerStarted","Data":"4c555b648518de93a1b9ecb1bb8b232aee9016dd148db53b2b4aaf96dd23951c"} Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.589609 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-tqmsq" 
event={"ID":"518e6107-6873-4bd2-86a6-e422763483ec","Type":"ContainerStarted","Data":"a9d54ab83647990b7ae006a29a2b72b65c41b13deee71357a67add893167e46a"} Feb 16 22:46:21 crc kubenswrapper[4865]: E0216 22:46:21.589744 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.591922 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" event={"ID":"af5ee041-5763-4a28-9d12-7ba21bbb9dbc","Type":"ContainerStarted","Data":"5377362d269a2fbd24e93ddf0ce2803daadff72dd5b6926424e2d818448eb907"} Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.591958 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" event={"ID":"af5ee041-5763-4a28-9d12-7ba21bbb9dbc","Type":"ContainerStarted","Data":"1c0be88c4425b343893860e323d0e9c1a661e3cd0cadad6714da67ebadf57bf7"} Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.591973 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" event={"ID":"af5ee041-5763-4a28-9d12-7ba21bbb9dbc","Type":"ContainerStarted","Data":"c03abb8cf9259b37fcdf85084c703ebef8154e703d09cf42091d2e077624fe17"} Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.600344 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tqmsq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"518e6107-6873-4bd2-86a6-e422763483ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwt79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tqmsq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:21Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.601727 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-rhf5k" event={"ID":"ec3dce5a-cc80-4a4a-afa1-1ce88585fe7a","Type":"ContainerStarted","Data":"af23c5919f064a067e1f6ffd9bcfc669c95f789fecc58b6de768c838fff8eeb7"} Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.601784 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-rhf5k" event={"ID":"ec3dce5a-cc80-4a4a-afa1-1ce88585fe7a","Type":"ContainerStarted","Data":"89a7b6d5646a1fdda34182d97057e736044cea8cbe85d548f748d59182a24d3b"} Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.612628 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://580bc5f5c63d040c31c80d1c34c70711c035b80d8bd34546ba77e2477a938304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:21Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.622661 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:21Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.634448 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5ed4f9f-4e35-4b99-a523-d9cd6f90b1fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c3c8e1511bfaab96c1e33d3e1d1dc227344cb92c64170dc40eef40341f9282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef4fbb630ee354131544dade7633424757a1a1b885fec2e271da25984dbec3ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f74aeafd4eb42c1fa802e03dd3df19fc25c3e5546e453008c167955d336ca2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca218b91fbbe3033c021358958b45efb93f3c1875a2d18446d682ad80ff55122\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:21Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.642676 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.645177 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:21Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.673993 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:21Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.714234 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af5ee041-5763-4a28-9d12-7ba21bbb9dbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7sl6f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:21Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.757600 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4rgl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4rgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:21Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.794466 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:21Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.835654 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rhf5k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec3dce5a-cc80-4a4a-afa1-1ce88585fe7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42pjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rhf5k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:21Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.880318 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v9gjl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:21Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.919478 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8eb70ba-80dc-4449-b629-8f2d3462ac25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9420ab4fc2cfb723f98690b12676378560f2ca512ea2f23ea4108561bcf0a83b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d5f3b449d7f497dd16213f7a376d725ed75adaeb27449fa2005893b85cd8f9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c7ddcd1ac2d814341fef78f1d3dcb8d8e04d5e8506ca793c023085f0e019fc44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e74a443b63735422bd0f5ad851213f393074c4bda570e41ff102c84aa9ac2929\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://767fe939b17ede08600bad99c1a6af42a5e2a0b0e90baf17523483dfd1768b29\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T22:46:14Z\\\",\\\"message\\\":\\\"W0216 22:46:03.662876 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 22:46:03.663427 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771281963 cert, and key in /tmp/serving-cert-1709874990/serving-signer.crt, /tmp/serving-cert-1709874990/serving-signer.key\\\\nI0216 22:46:03.850154 1 observer_polling.go:159] Starting file observer\\\\nW0216 22:46:03.853770 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 22:46:03.853992 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 22:46:03.855614 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1709874990/tls.crt::/tmp/serving-cert-1709874990/tls.key\\\\\\\"\\\\nF0216 22:46:14.201855 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e74a443b63735422bd0f5ad851213f393074c4bda570e41ff102c84aa9ac2929\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 22:46:20.218565 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 22:46:20.218864 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 22:46:20.220590 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3068580495/tls.crt::/tmp/serving-cert-3068580495/tls.key\\\\\\\"\\\\nI0216 22:46:20.677081 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 22:46:20.691270 1 maxinflight.go:139] 
\\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 22:46:20.691373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 22:46:20.698439 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 22:46:20.698472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 22:46:20.713488 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 22:46:20.713510 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 22:46:20.713535 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 22:46:20.713543 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 22:46:20.713547 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 22:46:20.713551 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 22:46:20.713555 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 22:46:20.713558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 22:46:20.715166 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://083e95882332fbcae3b7e3c07a19c05645e560b7c5e92813464a5ca4e5e52a26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e53d3f26f921be607e616b7c8790e3be0605a2b94a3f7be2b62e1beaefb350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3e53d3f26f921be607e616b7c8790e3be0605a2b94a3f7be2b62e1beaefb350\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:21Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.957997 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:21Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.995248 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.995402 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.995432 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 22:46:21 crc kubenswrapper[4865]: E0216 22:46:21.995531 4865 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 22:46:21 crc kubenswrapper[4865]: E0216 22:46:21.995595 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 22:46:23.995578257 +0000 UTC m=+24.319285228 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 22:46:21 crc kubenswrapper[4865]: E0216 22:46:21.995654 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 22:46:23.995646609 +0000 UTC m=+24.319353570 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:46:21 crc kubenswrapper[4865]: E0216 22:46:21.995732 4865 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 22:46:21 crc kubenswrapper[4865]: E0216 22:46:21.995762 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 22:46:23.995753742 +0000 UTC m=+24.319460703 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 22:46:21 crc kubenswrapper[4865]: I0216 22:46:21.997461 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rhf5k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec3dce5a-cc80-4a4a-afa1-1ce88585fe7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af23c5919f064a067e1f6ffd9bcfc669c95f789fecc58b6de768c838fff8eeb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42pjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rhf5k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:21Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.040224 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v9gjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:22Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.081778 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://580bc5f5c63d040c31c80d1c34c70711c035b80d8bd34546ba77e2477a938304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:22Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.096096 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.096148 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 22:46:22 crc kubenswrapper[4865]: E0216 22:46:22.096325 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 22:46:22 crc kubenswrapper[4865]: E0216 22:46:22.096356 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 22:46:22 crc kubenswrapper[4865]: E0216 22:46:22.096372 4865 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 22:46:22 crc kubenswrapper[4865]: E0216 22:46:22.096400 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 22:46:22 crc kubenswrapper[4865]: E0216 22:46:22.096447 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 22:46:22 crc kubenswrapper[4865]: E0216 22:46:22.096461 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 22:46:24.096445343 +0000 UTC m=+24.420152314 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 22:46:22 crc kubenswrapper[4865]: E0216 22:46:22.096466 4865 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 22:46:22 crc kubenswrapper[4865]: E0216 22:46:22.096562 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 22:46:24.096542146 +0000 UTC m=+24.420249167 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.115955 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:22Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.158492 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:22Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.198674 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tqmsq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"518e6107-6873-4bd2-86a6-e422763483ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c555b648518de93a1b9ecb1bb8b232aee9016dd148db53b2b4aaf96dd23951c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwt79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tqmsq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:22Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.244265 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5ed4f9f-4e35-4b99-a523-d9cd6f90b1fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c3c8e1511bfaab96c1e33d3e1d1dc227344cb92c64170dc40eef40341f9282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef4fbb630ee354131544dade7633424757a1a1b885fec2e271da25984dbec3ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f74aeafd4eb42c1fa802e03dd3df19fc25c3e5546e453008c167955d336ca2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca218b91fbbe3033c021358958b45efb93f3c1875a2d18446d682ad80ff55122\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:22Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.283589 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5d9c6929ae494c038ff488c2fee47aad62d31a677069fab576505bc1c56c3ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd10e8581ae8a5ec147d964363cf296c1ea48987e27edfaaf23a4f2cc3b8806c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:22Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.315927 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with 
unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:22Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.361167 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af5ee041-5763-4a28-9d12-7ba21bbb9dbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5377362d269a2fbd24e93ddf0ce2803daadff72dd5b6926424e2d818448eb907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c0be88c4425b343893860e323d0e9c1a661e3cd
0cadad6714da67ebadf57bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7sl6f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:22Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.371398 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 07:34:39.635303511 +0000 UTC Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.400783 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4rgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4rgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:22Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.414183 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.414253 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 22:46:22 crc kubenswrapper[4865]: E0216 22:46:22.414308 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.414357 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 22:46:22 crc kubenswrapper[4865]: E0216 22:46:22.414397 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 22:46:22 crc kubenswrapper[4865]: E0216 22:46:22.414405 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.418633 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.419340 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.420452 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.421261 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" 
path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.422246 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.422812 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.423408 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.424315 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.424939 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.425813 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.426325 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.427418 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.427936 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.428432 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.429550 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.430051 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.431026 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.431475 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.432033 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.433105 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.433576 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.434494 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.435116 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.436448 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.438585 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.439209 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.440910 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.441928 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" 
path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.443882 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.444610 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.445495 4865 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.445598 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.447301 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.448333 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.448823 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.450411 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.451088 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.453873 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.455768 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.456990 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.457512 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.458574 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.459209 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.460911 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.461622 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.462561 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.463461 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.464148 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.464932 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.465721 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.466179 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.466750 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.467672 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.468417 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.609525 4865 generic.go:334] "Generic (PLEG): container finished" podID="2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" containerID="8fffded71eee2859beb8f01894e183602492f5c41373ef0e23ee150159d01ae4" exitCode=0 Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.609618 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" event={"ID":"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9","Type":"ContainerDied","Data":"8fffded71eee2859beb8f01894e183602492f5c41373ef0e23ee150159d01ae4"} Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.609695 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" event={"ID":"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9","Type":"ContainerStarted","Data":"6662afa376d68a2cb0a905370750c9271f44dcc2b59b807b8b697ab3a0e370c1"} Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.612520 4865 generic.go:334] "Generic (PLEG): container finished" podID="fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5" containerID="240fca5a6169674d04d04924dbb1859d9cfad4ebaf3bcb15b685d1dcb52d6879" exitCode=0 Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.612612 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-x4rgl" 
event={"ID":"fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5","Type":"ContainerDied","Data":"240fca5a6169674d04d04924dbb1859d9cfad4ebaf3bcb15b685d1dcb52d6879"} Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.615776 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.618437 4865 scope.go:117] "RemoveContainer" containerID="e74a443b63735422bd0f5ad851213f393074c4bda570e41ff102c84aa9ac2929" Feb 16 22:46:22 crc kubenswrapper[4865]: E0216 22:46:22.618632 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.625986 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5ed4f9f-4e35-4b99-a523-d9cd6f90b1fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c3c8e1511bfaab96c1e33d3e1d1dc227344cb92c64170dc40eef40341f9282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef4fbb630ee354131544dade7633424757a1a1b885fec2e271da25984dbec3ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f74aeafd4eb42c1fa802e03dd3df19fc25c3e5546e453008c167955d336ca2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca218b91fbbe3033c021358958b45efb93f3c1875a2d18446d682ad80ff55122\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:22Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.643650 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5d9c6929ae494c038ff488c2fee47aad62d31a677069fab576505bc1c56c3ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd10e8581ae8a5ec147d964363cf296c1ea48987e27edfaaf23a4f2cc3b8806c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:22Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.664250 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with 
unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:22Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.684575 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af5ee041-5763-4a28-9d12-7ba21bbb9dbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5377362d269a2fbd24e93ddf0ce2803daadff72dd5b6926424e2d818448eb907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c0be88c4425b343893860e323d0e9c1a661e3cd
0cadad6714da67ebadf57bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7sl6f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:22Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.700489 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4rgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4rgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:22Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.720958 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8eb70ba-80dc-4449-b629-8f2d3462ac25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9420ab4fc2cfb723f98690b12676378560f2ca512ea2f23ea4108561bcf0a83b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d5f3b449d7f497dd16213f7a376d725ed75adaeb27449fa2005893b85cd8f9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c7ddcd1ac2d814341fef78f1d3dcb8d8e04d5e8506ca793c023085f0e019fc44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e74a443b63735422bd0f5ad851213f393074c4bda570e41ff102c84aa9ac2929\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://767fe939b17ede08600bad99c1a6af42a5e2a0b0e90baf17523483dfd1768b29\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T22:46:14Z\\\",\\\"message\\\":\\\"W0216 22:46:03.662876 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 22:46:03.663427 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771281963 cert, and key in /tmp/serving-cert-1709874990/serving-signer.crt, /tmp/serving-cert-1709874990/serving-signer.key\\\\nI0216 22:46:03.850154 1 observer_polling.go:159] Starting file observer\\\\nW0216 22:46:03.853770 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 22:46:03.853992 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 22:46:03.855614 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1709874990/tls.crt::/tmp/serving-cert-1709874990/tls.key\\\\\\\"\\\\nF0216 22:46:14.201855 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e74a443b63735422bd0f5ad851213f393074c4bda570e41ff102c84aa9ac2929\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 22:46:20.218565 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 22:46:20.218864 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 22:46:20.220590 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3068580495/tls.crt::/tmp/serving-cert-3068580495/tls.key\\\\\\\"\\\\nI0216 22:46:20.677081 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 22:46:20.691270 1 maxinflight.go:139] 
\\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 22:46:20.691373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 22:46:20.698439 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 22:46:20.698472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 22:46:20.713488 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 22:46:20.713510 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 22:46:20.713535 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 22:46:20.713543 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 22:46:20.713547 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 22:46:20.713551 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 22:46:20.713555 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 22:46:20.713558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 22:46:20.715166 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://083e95882332fbcae3b7e3c07a19c05645e560b7c5e92813464a5ca4e5e52a26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e53d3f26f921be607e616b7c8790e3be0605a2b94a3f7be2b62e1beaefb350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3e53d3f26f921be607e616b7c8790e3be0605a2b94a3f7be2b62e1beaefb350\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:22Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.733334 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:22Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.743620 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rhf5k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec3dce5a-cc80-4a4a-afa1-1ce88585fe7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af23c5919f064a067e1f6ffd9bcfc669c95f789fecc58b6de768c838fff8eeb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42pjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rhf5k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:22Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.763169 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fffded71eee2859beb8f01894e183602492f5c41373ef0e23ee150159d01ae4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fffded71eee2859beb8f01894e183602492f5c41373ef0e23ee150159d01ae4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v9gjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:22Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.795112 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://580bc5f5c63d040c31c80d1c34c70711c035b80d8bd34546ba77e2477a938304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:22Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.844924 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:22Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.876053 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:22Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.915516 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tqmsq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"518e6107-6873-4bd2-86a6-e422763483ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c555b648518de93a1b9ecb1bb8b232aee9016dd148db53b2b4aaf96dd23951c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwt79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tqmsq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:22Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.926169 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-p2bwl"] Feb 16 22:46:22 crc 
kubenswrapper[4865]: I0216 22:46:22.926677 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-p2bwl" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.953227 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:22Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.966758 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 16 22:46:22 crc kubenswrapper[4865]: I0216 22:46:22.986301 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 16 22:46:23 crc kubenswrapper[4865]: I0216 22:46:23.004697 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mntqf\" (UniqueName: \"kubernetes.io/projected/3fc94743-41ce-4311-b0a7-d24aec69e9df-kube-api-access-mntqf\") pod \"node-ca-p2bwl\" (UID: \"3fc94743-41ce-4311-b0a7-d24aec69e9df\") " pod="openshift-image-registry/node-ca-p2bwl" Feb 16 22:46:23 crc kubenswrapper[4865]: I0216 22:46:23.004745 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"host\" (UniqueName: \"kubernetes.io/host-path/3fc94743-41ce-4311-b0a7-d24aec69e9df-host\") pod \"node-ca-p2bwl\" (UID: \"3fc94743-41ce-4311-b0a7-d24aec69e9df\") " pod="openshift-image-registry/node-ca-p2bwl" Feb 16 22:46:23 crc kubenswrapper[4865]: I0216 22:46:23.004807 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3fc94743-41ce-4311-b0a7-d24aec69e9df-serviceca\") pod \"node-ca-p2bwl\" (UID: \"3fc94743-41ce-4311-b0a7-d24aec69e9df\") " pod="openshift-image-registry/node-ca-p2bwl" Feb 16 22:46:23 crc kubenswrapper[4865]: I0216 22:46:23.006491 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 16 22:46:23 crc kubenswrapper[4865]: I0216 22:46:23.026311 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 16 22:46:23 crc kubenswrapper[4865]: I0216 22:46:23.078393 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tqmsq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"518e6107-6873-4bd2-86a6-e422763483ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c555b648518de93a1b9ecb1bb8b232aee9016dd148db53b2b4aaf96dd23951c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwt79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tqmsq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:23Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:23 crc kubenswrapper[4865]: I0216 22:46:23.105543 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mntqf\" (UniqueName: 
\"kubernetes.io/projected/3fc94743-41ce-4311-b0a7-d24aec69e9df-kube-api-access-mntqf\") pod \"node-ca-p2bwl\" (UID: \"3fc94743-41ce-4311-b0a7-d24aec69e9df\") " pod="openshift-image-registry/node-ca-p2bwl" Feb 16 22:46:23 crc kubenswrapper[4865]: I0216 22:46:23.105660 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3fc94743-41ce-4311-b0a7-d24aec69e9df-host\") pod \"node-ca-p2bwl\" (UID: \"3fc94743-41ce-4311-b0a7-d24aec69e9df\") " pod="openshift-image-registry/node-ca-p2bwl" Feb 16 22:46:23 crc kubenswrapper[4865]: I0216 22:46:23.105743 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3fc94743-41ce-4311-b0a7-d24aec69e9df-serviceca\") pod \"node-ca-p2bwl\" (UID: \"3fc94743-41ce-4311-b0a7-d24aec69e9df\") " pod="openshift-image-registry/node-ca-p2bwl" Feb 16 22:46:23 crc kubenswrapper[4865]: I0216 22:46:23.105816 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3fc94743-41ce-4311-b0a7-d24aec69e9df-host\") pod \"node-ca-p2bwl\" (UID: \"3fc94743-41ce-4311-b0a7-d24aec69e9df\") " pod="openshift-image-registry/node-ca-p2bwl" Feb 16 22:46:23 crc kubenswrapper[4865]: I0216 22:46:23.107404 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3fc94743-41ce-4311-b0a7-d24aec69e9df-serviceca\") pod \"node-ca-p2bwl\" (UID: \"3fc94743-41ce-4311-b0a7-d24aec69e9df\") " pod="openshift-image-registry/node-ca-p2bwl" Feb 16 22:46:23 crc kubenswrapper[4865]: I0216 22:46:23.118714 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://580bc5f5c63d040c31c80d1c34c70711c035b80d8bd34546ba77e2477a938304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:23Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:23 crc kubenswrapper[4865]: I0216 22:46:23.143054 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mntqf\" (UniqueName: \"kubernetes.io/projected/3fc94743-41ce-4311-b0a7-d24aec69e9df-kube-api-access-mntqf\") pod \"node-ca-p2bwl\" (UID: \"3fc94743-41ce-4311-b0a7-d24aec69e9df\") " pod="openshift-image-registry/node-ca-p2bwl" Feb 16 22:46:23 crc kubenswrapper[4865]: I0216 22:46:23.178792 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:23Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:23 crc kubenswrapper[4865]: I0216 22:46:23.214322 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5ed4f9f-4e35-4b99-a523-d9cd6f90b1fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c3c8e1511bfaab96c1e33d3e1d1dc227344cb92c64170dc40eef40341f9282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef4fbb630ee354131544dade7633424757a1a1b885fec2e271da25984dbec3ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f74aeafd4eb42c1fa802e03dd3df19fc25c3e5546e453008c167955d336ca2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca218b91fbbe3033c021358958b45efb93f3c1875a2d18446d682ad80ff55122\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:23Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:23 crc kubenswrapper[4865]: I0216 22:46:23.237078 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-p2bwl" Feb 16 22:46:23 crc kubenswrapper[4865]: I0216 22:46:23.260163 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5d9c6929ae494c038ff488c2fee47aad62d31a677069fab576505bc1c56c3ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd10e8581ae8a5ec147d964363cf296c1ea4898
7e27edfaaf23a4f2cc3b8806c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:23Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:23 crc kubenswrapper[4865]: I0216 22:46:23.298265 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:23Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:23 crc kubenswrapper[4865]: I0216 22:46:23.332259 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af5ee041-5763-4a28-9d12-7ba21bbb9dbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5377362d269a2fbd24e93ddf0ce2803daadff72dd5b6926424e2d818448eb907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c0be88c4425b343893860e323d0e9c1a661e3cd
0cadad6714da67ebadf57bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7sl6f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:23Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:23 crc kubenswrapper[4865]: W0216 22:46:23.333439 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3fc94743_41ce_4311_b0a7_d24aec69e9df.slice/crio-8302c359528504d80ebd20903a0efce0efbb73a5a7e3ae47dddf7074ec4a9fe0 WatchSource:0}: Error finding container 8302c359528504d80ebd20903a0efce0efbb73a5a7e3ae47dddf7074ec4a9fe0: Status 404 returned error can't find the container with id 8302c359528504d80ebd20903a0efce0efbb73a5a7e3ae47dddf7074ec4a9fe0 Feb 16 
22:46:23 crc kubenswrapper[4865]: I0216 22:46:23.372601 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 05:28:40.029905423 +0000 UTC Feb 16 22:46:23 crc kubenswrapper[4865]: I0216 22:46:23.377394 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4rgl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240fca5a6169674d04d04924dbb1859d9cfad4ebaf3bcb15b685d1dcb52d6879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://240fca5a6169674d04d04924dbb1859d9cfad4ebaf3bcb15b685d1dcb52d6879\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4rgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:23Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:23 crc kubenswrapper[4865]: I0216 22:46:23.415602 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:23Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:23 crc kubenswrapper[4865]: I0216 22:46:23.456165 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rhf5k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec3dce5a-cc80-4a4a-afa1-1ce88585fe7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af23c5919f064a067e1f6ffd9bcfc669c95f789fecc58b6de768c838fff8eeb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42pjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rhf5k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:23Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:23 crc kubenswrapper[4865]: I0216 22:46:23.507315 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fffded71eee2859beb8f01894e183602492f5c41373ef0e23ee150159d01ae4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fffded71eee2859beb8f01894e183602492f5c41373ef0e23ee150159d01ae4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v9gjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:23Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:23 crc kubenswrapper[4865]: I0216 22:46:23.536433 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8eb70ba-80dc-4449-b629-8f2d3462ac25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9420ab4fc2cfb723f98690b12676378560f2ca512ea2f23ea4108561bcf0a83b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d5f3b449d7f497dd16213f7a376d725ed75adaeb27449fa2005893b85cd8f9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c7ddcd1ac2d814341fef78f1d3dcb8d8e04d5e8506ca793c023085f0e019fc44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e74a443b63735422bd0f5ad851213f393074c4bda570e41ff102c84aa9ac2929\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e74a443b63735422bd0f5ad851213f393074c4bda570e41ff102c84aa9ac2929\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 22:46:20.218565 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 22:46:20.218864 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 22:46:20.220590 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3068580495/tls.crt::/tmp/serving-cert-3068580495/tls.key\\\\\\\"\\\\nI0216 22:46:20.677081 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 22:46:20.691270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 22:46:20.691373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 22:46:20.698439 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 22:46:20.698472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 22:46:20.713488 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 22:46:20.713510 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 22:46:20.713535 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 22:46:20.713543 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 22:46:20.713547 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 22:46:20.713551 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 22:46:20.713555 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 22:46:20.713558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 22:46:20.715166 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://083e95882332fbcae3b7e3c07a19c05645e560b7c5e92813464a5ca4e5e52a26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e53d3f26f921be607e616b7c8790e3be0605a2b94a3f7be2b62e1beaefb350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3e53d3f26f921be607e616b7c8790e3be0605a2b94a3f7be2b62e1beaefb350\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:23Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:23 crc kubenswrapper[4865]: I0216 22:46:23.577857 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8eb70ba-80dc-4449-b629-8f2d3462ac25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9420ab4fc2cfb723f98690b12676378560f2ca512ea2f23ea4108561bcf0a83b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d5f3b449d7f497dd16213f7a376d725ed75adaeb27449fa2005893b85cd8f9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c7ddcd1ac2d814341fef78f1d3dcb8d8e04d5e8506ca793c023085f0e019fc44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e74a443b63735422bd0f5ad851213f393074c4bda570e41ff102c84aa9ac2929\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e74a443b63735422bd0f5ad851213f393074c4bda570e41ff102c84aa9ac2929\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 22:46:20.218565 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 22:46:20.218864 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 22:46:20.220590 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3068580495/tls.crt::/tmp/serving-cert-3068580495/tls.key\\\\\\\"\\\\nI0216 22:46:20.677081 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 22:46:20.691270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 22:46:20.691373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 22:46:20.698439 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 22:46:20.698472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 22:46:20.713488 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 22:46:20.713510 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 22:46:20.713535 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 22:46:20.713543 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 22:46:20.713547 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 22:46:20.713551 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 22:46:20.713555 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 22:46:20.713558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 22:46:20.715166 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://083e95882332fbcae3b7e3c07a19c05645e560b7c5e92813464a5ca4e5e52a26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e53d3f26f921be607e616b7c8790e3be0605a2b94a3f7be2b62e1beaefb350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3e53d3f26f921be607e616b7c8790e3be0605a2b94a3f7be2b62e1beaefb350\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:23Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:23 crc kubenswrapper[4865]: I0216 22:46:23.620979 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:23Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:23 crc kubenswrapper[4865]: I0216 22:46:23.624417 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-p2bwl" event={"ID":"3fc94743-41ce-4311-b0a7-d24aec69e9df","Type":"ContainerStarted","Data":"afb2eef5907b2fb1f43b59f3b8afb68253d49f3a83f0cdc5cd10df0e45962156"} Feb 16 22:46:23 crc kubenswrapper[4865]: I0216 22:46:23.624504 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-p2bwl" event={"ID":"3fc94743-41ce-4311-b0a7-d24aec69e9df","Type":"ContainerStarted","Data":"8302c359528504d80ebd20903a0efce0efbb73a5a7e3ae47dddf7074ec4a9fe0"} Feb 16 22:46:23 crc kubenswrapper[4865]: I0216 22:46:23.627091 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"67de42bb03576bab461ec24db86f0fb38229d2b303d174c1892468bcfd20a1ea"} Feb 16 22:46:23 crc kubenswrapper[4865]: I0216 22:46:23.631929 4865 
generic.go:334] "Generic (PLEG): container finished" podID="fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5" containerID="5ba2a6ff3b821f9bd20c15af6000c4b04b7382cca4bddba49c3cd3a95bf18661" exitCode=0 Feb 16 22:46:23 crc kubenswrapper[4865]: I0216 22:46:23.632018 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-x4rgl" event={"ID":"fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5","Type":"ContainerDied","Data":"5ba2a6ff3b821f9bd20c15af6000c4b04b7382cca4bddba49c3cd3a95bf18661"} Feb 16 22:46:23 crc kubenswrapper[4865]: I0216 22:46:23.638114 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" event={"ID":"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9","Type":"ContainerStarted","Data":"4a1d5f61857de97cd302040f99ba0cf8f6aa30fa344ee9bb5d486222bb59280f"} Feb 16 22:46:23 crc kubenswrapper[4865]: I0216 22:46:23.638155 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" event={"ID":"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9","Type":"ContainerStarted","Data":"48168de44ee347280252a85da7abba068922ce7558bd01c8e1d11af13f305dd0"} Feb 16 22:46:23 crc kubenswrapper[4865]: I0216 22:46:23.638169 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" event={"ID":"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9","Type":"ContainerStarted","Data":"fdf0a2e5286d172d17780b0ba829a7b1365b3fcb130069d7497b207041e54d67"} Feb 16 22:46:23 crc kubenswrapper[4865]: I0216 22:46:23.638181 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" event={"ID":"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9","Type":"ContainerStarted","Data":"f763f86f21e0ae26d6da8694383bf65bf9d2ee2cc3f6331559455f18c1a5ea12"} Feb 16 22:46:23 crc kubenswrapper[4865]: I0216 22:46:23.638194 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" 
event={"ID":"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9","Type":"ContainerStarted","Data":"3fbb61043059110013736d3d693e6d602b199a4c62907436a4eb88aae192cd51"} Feb 16 22:46:23 crc kubenswrapper[4865]: I0216 22:46:23.638205 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" event={"ID":"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9","Type":"ContainerStarted","Data":"decf362b373d3e8cc82caf2afe0405a61ec72f794680345ee6bd4d7acee32cd0"} Feb 16 22:46:23 crc kubenswrapper[4865]: I0216 22:46:23.657045 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rhf5k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec3dce5a-cc80-4a4a-afa1-1ce88585fe7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af23c5919f064a067e1f6ffd9bcfc669c95f789fecc58b6de768c838fff8eeb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb7
2bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42pjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rhf5k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:23Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:23 crc kubenswrapper[4865]: I0216 22:46:23.706905 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fffded71eee2859beb8f01894e183602492f5c41373ef0e23ee150159d01ae4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fffded71eee2859beb8f01894e183602492f5c41373ef0e23ee150159d01ae4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v9gjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:23Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:23 crc kubenswrapper[4865]: I0216 22:46:23.735744 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://580bc5f5c63d040c31c80d1c34c70711c035b80d8bd34546ba77e2477a938304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:23Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:23 crc kubenswrapper[4865]: I0216 22:46:23.781519 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:23Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:23 crc kubenswrapper[4865]: I0216 22:46:23.821973 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:23Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:23 crc kubenswrapper[4865]: I0216 22:46:23.861189 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tqmsq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"518e6107-6873-4bd2-86a6-e422763483ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c555b648518de93a1b9ecb1bb8b232aee9016dd148db53b2b4aaf96dd23951c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwt79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tqmsq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:23Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:23 crc kubenswrapper[4865]: I0216 22:46:23.900567 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5ed4f9f-4e35-4b99-a523-d9cd6f90b1fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c3c8e1511bfaab96c1e33d3e1d1dc227344cb92c64170dc40eef40341f9282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef4fbb630ee354131544dade7633424757a1a1b885fec2e271da25984dbec3ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f74aeafd4eb42c1fa802e03dd3df19fc25c3e5546e453008c167955d336ca2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca218b91fbbe3033c021358958b45efb93f3c1875a2d18446d682ad80ff55122\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:23Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:23 crc kubenswrapper[4865]: I0216 22:46:23.942238 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5d9c6929ae494c038ff488c2fee47aad62d31a677069fab576505bc1c56c3ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd10e8581ae8a5ec147d964363cf296c1ea48987e27edfaaf23a4f2cc3b8806c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:23Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:23 crc kubenswrapper[4865]: I0216 22:46:23.976855 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p2bwl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fc94743-41ce-4311-b0a7-d24aec69e9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mntqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p2bwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:23Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:24 crc kubenswrapper[4865]: I0216 22:46:24.017266 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 22:46:24 crc kubenswrapper[4865]: I0216 22:46:24.017436 4865 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 22:46:24 crc kubenswrapper[4865]: E0216 22:46:24.017473 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 22:46:28.017429785 +0000 UTC m=+28.341136756 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:46:24 crc kubenswrapper[4865]: I0216 22:46:24.017546 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 22:46:24 crc kubenswrapper[4865]: E0216 22:46:24.017590 4865 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 22:46:24 crc kubenswrapper[4865]: E0216 22:46:24.017667 4865 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 22:46:28.017644641 +0000 UTC m=+28.341351632 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 22:46:24 crc kubenswrapper[4865]: E0216 22:46:24.017828 4865 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 22:46:24 crc kubenswrapper[4865]: E0216 22:46:24.017990 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 22:46:28.01795347 +0000 UTC m=+28.341660511 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 22:46:24 crc kubenswrapper[4865]: I0216 22:46:24.019455 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:24Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:24 crc kubenswrapper[4865]: I0216 22:46:24.057583 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af5ee041-5763-4a28-9d12-7ba21bbb9dbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5377362d269a2fbd24e93ddf0ce2803daadff72dd5b6926424e2d818448eb907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c0be88c4425b343893860e323d0e9c1a661e3cd
0cadad6714da67ebadf57bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7sl6f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:24Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:24 crc kubenswrapper[4865]: I0216 22:46:24.099220 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4rgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240fca5a6169674d04d04924dbb1859d9cfad4ebaf3bcb15b685d1dcb52d6879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://240fca5a6169674d04d04924dbb1859d9cfad4ebaf3bcb15b685d1dcb52d6879\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4rgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:24Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:24 crc kubenswrapper[4865]: I0216 22:46:24.118973 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 22:46:24 crc kubenswrapper[4865]: I0216 22:46:24.119028 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 22:46:24 crc kubenswrapper[4865]: E0216 
22:46:24.119170 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 22:46:24 crc kubenswrapper[4865]: E0216 22:46:24.119180 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 22:46:24 crc kubenswrapper[4865]: E0216 22:46:24.119204 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 22:46:24 crc kubenswrapper[4865]: E0216 22:46:24.119205 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 22:46:24 crc kubenswrapper[4865]: E0216 22:46:24.119222 4865 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 22:46:24 crc kubenswrapper[4865]: E0216 22:46:24.119224 4865 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 22:46:24 crc kubenswrapper[4865]: E0216 22:46:24.119329 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-02-16 22:46:28.119269089 +0000 UTC m=+28.442976070 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 22:46:24 crc kubenswrapper[4865]: E0216 22:46:24.119359 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 22:46:28.119348171 +0000 UTC m=+28.443055142 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 22:46:24 crc kubenswrapper[4865]: I0216 22:46:24.143796 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5d9c6929ae494c038ff488c2fee47aad62d31a677069fab576505bc1c56c3ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd10e8581ae8a5ec147d964363cf296c1ea48987e27edfaaf23a4f2cc3b8806c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:24Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:24 crc kubenswrapper[4865]: I0216 22:46:24.177412 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p2bwl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fc94743-41ce-4311-b0a7-d24aec69e9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afb2eef5907b2fb1f43b59f3b8afb68253d49f3a83f0cdc5cd10df0e45962156\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mntqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p2bwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:24Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:24 crc kubenswrapper[4865]: I0216 22:46:24.221884 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5ed4f9f-4e35-4b99-a523-d9cd6f90b1fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c3c8e1511bfaab96c1e33d3e1d1dc227344cb92c64170dc40eef40341f9282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef4fbb630ee354131544dade7633424757a1a1b885fec2e271da25984dbec3ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f74aeafd4eb42c1fa802e03dd3df19fc25c3e5546e453008c167955d336ca2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca218b91fbbe3033c021358958b45efb93f3c1875a2d18446d682ad80ff55122\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:24Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:24 crc kubenswrapper[4865]: I0216 22:46:24.237128 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 22:46:24 crc kubenswrapper[4865]: I0216 22:46:24.238517 4865 scope.go:117] "RemoveContainer" 
containerID="e74a443b63735422bd0f5ad851213f393074c4bda570e41ff102c84aa9ac2929" Feb 16 22:46:24 crc kubenswrapper[4865]: E0216 22:46:24.238840 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 16 22:46:24 crc kubenswrapper[4865]: I0216 22:46:24.262386 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af5ee041-5763-4a28-9d12-7ba21bbb9dbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5377362d269a2fbd24e93ddf0ce2803daadff72dd5b6926424e2d818448eb907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c0be88c4425b343893860e323d0e9c1a661e3cd0cadad6714da67ebadf57bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7sl6f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:24Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:24 crc kubenswrapper[4865]: I0216 22:46:24.306113 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4rgl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240fca5a6169674d04d04924dbb1859d9cfad4ebaf3bcb15b685d1dcb52d6879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://240fca5a6169674d04d04924dbb1859d9cfad4ebaf3bcb15b685d1dcb52d6879\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba2a6ff3b821f9bd20c15af6000c4b04b7382cca4bddba49c3cd3a95bf18661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ba2a6ff3b821f9bd20c15af6000c4b04b7382cca4bddba49c3cd3a95bf18661\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4rgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:24Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:24 crc kubenswrapper[4865]: I0216 22:46:24.336559 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67de42bb03576bab461ec24db86f0fb38229d2b303d174c1892468bcfd20a1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T22:46:24Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:24 crc kubenswrapper[4865]: I0216 22:46:24.373341 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 13:18:31.855987685 +0000 UTC Feb 16 22:46:24 crc kubenswrapper[4865]: I0216 22:46:24.383179 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8eb70ba-80dc-4449-b629-8f2d3462ac25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9420ab4fc2cfb723f98690b12676378560f2ca512ea2f23ea4108561bcf0a83b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d5f3b449d7f497dd16213f7a376d725ed75adaeb27449fa2005893b85cd8f9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c7ddcd1ac2d814341fef78f1d3dcb8d8e04d5e8506ca793c023085f0e019fc44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e74a443b63735422bd0f5ad851213f393074c4bda570e41ff102c84aa9ac2929\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e74a443b63735422bd0f5ad851213f393074c4bda570e41ff102c84aa9ac2929\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 22:46:20.218565 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 22:46:20.218864 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 22:46:20.220590 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3068580495/tls.crt::/tmp/serving-cert-3068580495/tls.key\\\\\\\"\\\\nI0216 22:46:20.677081 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 22:46:20.691270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 22:46:20.691373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 22:46:20.698439 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 22:46:20.698472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 22:46:20.713488 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 22:46:20.713510 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 22:46:20.713535 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 22:46:20.713543 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 22:46:20.713547 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 22:46:20.713551 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 22:46:20.713555 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 22:46:20.713558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 22:46:20.715166 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://083e95882332fbcae3b7e3c07a19c05645e560b7c5e92813464a5ca4e5e52a26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e53d3f26f921be607e616b7c8790e3be0605a2b94a3f7be2b62e1beaefb350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3e53d3f26f921be607e616b7c8790e3be0605a2b94a3f7be2b62e1beaefb350\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:24Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:24 crc kubenswrapper[4865]: I0216 22:46:24.414039 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 22:46:24 crc kubenswrapper[4865]: I0216 22:46:24.414115 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 22:46:24 crc kubenswrapper[4865]: I0216 22:46:24.414039 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 22:46:24 crc kubenswrapper[4865]: E0216 22:46:24.414202 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 22:46:24 crc kubenswrapper[4865]: E0216 22:46:24.414358 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 22:46:24 crc kubenswrapper[4865]: E0216 22:46:24.414472 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 22:46:24 crc kubenswrapper[4865]: I0216 22:46:24.423459 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:24Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:24 crc kubenswrapper[4865]: I0216 22:46:24.459513 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rhf5k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec3dce5a-cc80-4a4a-afa1-1ce88585fe7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af23c5919f064a067e1f6ffd9bcfc669c95f789fecc58b6de768c838fff8eeb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42pjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rhf5k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:24Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:24 crc kubenswrapper[4865]: I0216 22:46:24.519713 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fffded71eee2859beb8f01894e183602492f5c41373ef0e23ee150159d01ae4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fffded71eee2859beb8f01894e183602492f5c41373ef0e23ee150159d01ae4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v9gjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:24Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:24 crc kubenswrapper[4865]: I0216 22:46:24.541688 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://580bc5f5c63d040c31c80d1c34c70711c035b80d8bd34546ba77e2477a938304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:24Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:24 crc kubenswrapper[4865]: I0216 22:46:24.581509 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:24Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:24 crc kubenswrapper[4865]: I0216 22:46:24.620036 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:24Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:24 crc kubenswrapper[4865]: I0216 22:46:24.646936 4865 generic.go:334] "Generic (PLEG): container finished" podID="fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5" containerID="e050e591eac20bf9ccc57f3c4eac82e3a420e9e850ad4057ce69c236a51dd5ce" exitCode=0 Feb 16 22:46:24 crc kubenswrapper[4865]: I0216 22:46:24.647030 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-x4rgl" event={"ID":"fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5","Type":"ContainerDied","Data":"e050e591eac20bf9ccc57f3c4eac82e3a420e9e850ad4057ce69c236a51dd5ce"} Feb 16 22:46:24 crc kubenswrapper[4865]: I0216 22:46:24.666213 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tqmsq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"518e6107-6873-4bd2-86a6-e422763483ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c555b648518de93a1b9ecb1bb8b232aee9016dd148db53b2b4aaf96dd23951c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwt79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tqmsq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:24Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:24 crc kubenswrapper[4865]: I0216 22:46:24.698991 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5ed4f9f-4e35-4b99-a523-d9cd6f90b1fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c3c8e1511bfaab96c1e33d3e1d1dc227344cb92c64170dc40eef40341f9282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef4fbb630ee354131544dade7633424757a1a1b885fec2e271da25984dbec3ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f74aeafd4eb42c1fa802e03dd3df19fc25c3e5546e453008c167955d336ca2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca218b91fbbe3033c021358958b45efb93f3c1875a2d18446d682ad80ff55122\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:24Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:24 crc kubenswrapper[4865]: I0216 22:46:24.739163 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5d9c6929ae494c038ff488c2fee47aad62d31a677069fab576505bc1c56c3ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd10e8581ae8a5ec147d964363cf296c1ea48987e27edfaaf23a4f2cc3b8806c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:24Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:24 crc kubenswrapper[4865]: I0216 22:46:24.775118 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p2bwl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fc94743-41ce-4311-b0a7-d24aec69e9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afb2eef5907b2fb1f43b59f3b8afb68253d49f3a83f0cdc5cd10df0e45962156\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mntqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p2bwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:24Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:24 crc kubenswrapper[4865]: I0216 22:46:24.818807 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67de42bb03576bab461ec24db86f0fb38229d2b303d174c1892468bcfd20a1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T2
2:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:24Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:24 crc kubenswrapper[4865]: I0216 22:46:24.856962 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af5ee041-5763-4a28-9d12-7ba21bbb9dbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5377362d269a2fbd24e93ddf0ce2803daadff72dd5b6926424e2d818448eb907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c0be88c4425b343893860e323d0e9c1a661e3cd
0cadad6714da67ebadf57bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7sl6f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:24Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:24 crc kubenswrapper[4865]: I0216 22:46:24.906171 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4rgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240fca5a6169674d04d04924dbb1859d9cfad4ebaf3bcb15b685d1dcb52d6879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://240fca5a6169674d04d04924dbb1859d9cfad4ebaf3bcb15b685d1dcb52d6879\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba2a6ff3b821f9bd20c15af6000c4b04b7382cca4bddba49c3cd3a95bf18661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ba2a6ff3b821f9bd20c15af6000c4b04b7382cca4bddba49c3cd3a95bf18661\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e050e591eac20bf9ccc57f3c4eac82e3a420e9e850ad4057ce69c236a51dd5ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e050e591eac20bf9ccc57f3c4eac82e3a420e9e850ad4057ce69c236a51dd5ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4rgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:24Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:24 crc kubenswrapper[4865]: I0216 
22:46:24.941648 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8eb70ba-80dc-4449-b629-8f2d3462ac25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9420ab4fc2cfb723f98690b12676378560f2ca512ea2f23ea4108561bcf0a83b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"reso
urce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d5f3b449d7f497dd16213f7a376d725ed75adaeb27449fa2005893b85cd8f9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ddcd1ac2d814341fef78f1d3dcb8d8e04d5e8506ca793c023085f0e019fc44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e74a443b63735422bd0f5ad851213f393074c4bda570e41ff102c84aa9ac2929\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2775
3fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e74a443b63735422bd0f5ad851213f393074c4bda570e41ff102c84aa9ac2929\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 22:46:20.218565 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 22:46:20.218864 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 22:46:20.220590 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3068580495/tls.crt::/tmp/serving-cert-3068580495/tls.key\\\\\\\"\\\\nI0216 22:46:20.677081 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 22:46:20.691270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 22:46:20.691373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 22:46:20.698439 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 22:46:20.698472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 22:46:20.713488 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 22:46:20.713510 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 22:46:20.713535 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 22:46:20.713543 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 22:46:20.713547 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 22:46:20.713551 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 22:46:20.713555 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 22:46:20.713558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 22:46:20.715166 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://083e95882332fbcae3b7e3c07a19c05645e560b7c5e92813464a5ca4e5e52a26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\
"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e53d3f26f921be607e616b7c8790e3be0605a2b94a3f7be2b62e1beaefb350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3e53d3f26f921be607e616b7c8790e3be0605a2b94a3f7be2b62e1beaefb350\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:24Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:24 crc kubenswrapper[4865]: I0216 22:46:24.978535 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:24Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:25 crc kubenswrapper[4865]: I0216 22:46:25.013921 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rhf5k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec3dce5a-cc80-4a4a-afa1-1ce88585fe7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af23c5919f064a067e1f6ffd9bcfc669c95f789fecc58b6de768c838fff8eeb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42pjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rhf5k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:25Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:25 crc kubenswrapper[4865]: I0216 22:46:25.066425 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fffded71eee2859beb8f01894e183602492f5c41373ef0e23ee150159d01ae4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fffded71eee2859beb8f01894e183602492f5c41373ef0e23ee150159d01ae4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v9gjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:25Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:25 crc kubenswrapper[4865]: I0216 22:46:25.095504 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://580bc5f5c63d040c31c80d1c34c70711c035b80d8bd34546ba77e2477a938304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:25Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:25 crc kubenswrapper[4865]: I0216 22:46:25.137734 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:25Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:25 crc kubenswrapper[4865]: I0216 22:46:25.178867 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:25Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:25 crc kubenswrapper[4865]: I0216 22:46:25.217503 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tqmsq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"518e6107-6873-4bd2-86a6-e422763483ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c555b648518de93a1b9ecb1bb8b232aee9016dd148db53b2b4aaf96dd23951c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwt79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tqmsq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:25Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:25 crc kubenswrapper[4865]: I0216 22:46:25.228085 4865 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 16 22:46:25 crc 
kubenswrapper[4865]: I0216 22:46:25.384860 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 01:08:26.682773894 +0000 UTC Feb 16 22:46:25 crc kubenswrapper[4865]: I0216 22:46:25.658897 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" event={"ID":"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9","Type":"ContainerStarted","Data":"dee85f35d78a0b30018c179a53dd39cbb1b0edfa9897290cd13fdc9680461239"} Feb 16 22:46:25 crc kubenswrapper[4865]: I0216 22:46:25.663209 4865 generic.go:334] "Generic (PLEG): container finished" podID="fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5" containerID="1df6fbcf30225717389e253f415a98dcb258103cf591c6802daeb657018386ca" exitCode=0 Feb 16 22:46:25 crc kubenswrapper[4865]: I0216 22:46:25.663300 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-x4rgl" event={"ID":"fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5","Type":"ContainerDied","Data":"1df6fbcf30225717389e253f415a98dcb258103cf591c6802daeb657018386ca"} Feb 16 22:46:25 crc kubenswrapper[4865]: I0216 22:46:25.686063 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5ed4f9f-4e35-4b99-a523-d9cd6f90b1fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c3c8e1511bfaab96c1e33d3e1d1dc227344cb92c64170dc40eef40341f9282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef4fbb630ee354131544dade7633424757a1a1b885fec2e271da25984dbec3ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f74aeafd4eb42c1fa802e03dd3df19fc25c3e5546e453008c167955d336ca2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca218b91fbbe3033c021358958b45efb93f3c1875a2d18446d682ad80ff55122\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:25Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:25 crc kubenswrapper[4865]: I0216 22:46:25.704908 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5d9c6929ae494c038ff488c2fee47aad62d31a677069fab576505bc1c56c3ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd10e8581ae8a5ec147d964363cf296c1ea48987e27edfaaf23a4f2cc3b8806c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:25Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:25 crc kubenswrapper[4865]: I0216 22:46:25.720110 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p2bwl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fc94743-41ce-4311-b0a7-d24aec69e9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afb2eef5907b2fb1f43b59f3b8afb68253d49f3a83f0cdc5cd10df0e45962156\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mntqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p2bwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:25Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:25 crc kubenswrapper[4865]: I0216 22:46:25.738604 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67de42bb03576bab461ec24db86f0fb38229d2b303d174c1892468bcfd20a1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T2
2:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:25Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:25 crc kubenswrapper[4865]: I0216 22:46:25.752525 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af5ee041-5763-4a28-9d12-7ba21bbb9dbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5377362d269a2fbd24e93ddf0ce2803daadff72dd5b6926424e2d818448eb907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c0be88c4425b343893860e323d0e9c1a661e3cd
0cadad6714da67ebadf57bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7sl6f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:25Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:25 crc kubenswrapper[4865]: I0216 22:46:25.773023 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4rgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240fca5a6169674d04d04924dbb1859d9cfad4ebaf3bcb15b685d1dcb52d6879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://240fca5a6169674d04d04924dbb1859d9cfad4ebaf3bcb15b685d1dcb52d6879\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba2a6ff3b821f9bd20c15af6000c4b04b7382cca4bddba49c3cd3a95bf18661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ba2a6ff3b821f9bd20c15af6000c4b04b7382cca4bddba49c3cd3a95bf18661\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e050e591eac20bf9ccc57f3c4eac82e3a420e9e850ad4057ce69c236a51dd5ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e050e591eac20bf9ccc57f3c4eac82e3a420e9e850ad4057ce69c236a51dd5ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df6fbcf30225717389e253f415a98dcb258103cf591c6802daeb657018386ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1df6fbcf30225717389e253f415a98dcb258103cf591c6802daeb657018386ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-x4rgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:25Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:25 crc kubenswrapper[4865]: I0216 22:46:25.787423 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8eb70ba-80dc-4449-b629-8f2d3462ac25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9420ab4fc2cfb723f98690b12676378560f2ca512ea2f23ea4108561bcf0a83b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d5f3b449d7f497dd16213f7a376d725ed75adaeb27449fa2005893b85cd8f9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c7ddcd1ac2d814341fef78f1d3dcb8d8e04d5e8506ca793c023085f0e019fc44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e74a443b63735422bd0f5ad851213f393074c4bda570e41ff102c84aa9ac2929\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e74a443b63735422bd0f5ad851213f393074c4bda570e41ff102c84aa9ac2929\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 22:46:20.218565 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 22:46:20.218864 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 22:46:20.220590 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3068580495/tls.crt::/tmp/serving-cert-3068580495/tls.key\\\\\\\"\\\\nI0216 22:46:20.677081 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 22:46:20.691270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 22:46:20.691373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 22:46:20.698439 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 22:46:20.698472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 22:46:20.713488 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 22:46:20.713510 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 22:46:20.713535 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 22:46:20.713543 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 22:46:20.713547 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 22:46:20.713551 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 22:46:20.713555 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 22:46:20.713558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 22:46:20.715166 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://083e95882332fbcae3b7e3c07a19c05645e560b7c5e92813464a5ca4e5e52a26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e53d3f26f921be607e616b7c8790e3be0605a2b94a3f7be2b62e1beaefb350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3e53d3f26f921be607e616b7c8790e3be0605a2b94a3f7be2b62e1beaefb350\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:25Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:25 crc kubenswrapper[4865]: I0216 22:46:25.801804 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:25Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:25 crc kubenswrapper[4865]: I0216 22:46:25.814977 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rhf5k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec3dce5a-cc80-4a4a-afa1-1ce88585fe7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af23c5919f064a067e1f6ffd9bcfc669c95f789fecc58b6de768c838fff8eeb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42pjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rhf5k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:25Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:25 crc kubenswrapper[4865]: I0216 22:46:25.837617 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fffded71eee2859beb8f01894e183602492f5c41373ef0e23ee150159d01ae4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fffded71eee2859beb8f01894e183602492f5c41373ef0e23ee150159d01ae4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v9gjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:25Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:25 crc kubenswrapper[4865]: I0216 22:46:25.853046 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://580bc5f5c63d040c31c80d1c34c70711c035b80d8bd34546ba77e2477a938304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:25Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:25 crc kubenswrapper[4865]: I0216 22:46:25.870876 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:25Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:25 crc kubenswrapper[4865]: I0216 22:46:25.886389 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:25Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:25 crc kubenswrapper[4865]: I0216 22:46:25.899378 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tqmsq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"518e6107-6873-4bd2-86a6-e422763483ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c555b648518de93a1b9ecb1bb8b232aee9016dd148db53b2b4aaf96dd23951c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwt79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tqmsq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:25Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.035249 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 22:46:26 crc 
kubenswrapper[4865]: I0216 22:46:26.038572 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.038640 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.038657 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.038853 4865 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.045515 4865 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.045868 4865 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.047067 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.047100 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.047112 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.047130 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.047142 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:26Z","lastTransitionTime":"2026-02-16T22:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:46:26 crc kubenswrapper[4865]: E0216 22:46:26.065126 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8882e2e1-623c-454f-a6ef-195b25a9cb95\\\",\\\"systemUUID\\\":\\\"e2a2196d-095f-444c-b467-b5377cee59c0\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:26Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.068689 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.068736 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.068752 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.068800 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.068815 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:26Z","lastTransitionTime":"2026-02-16T22:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:26 crc kubenswrapper[4865]: E0216 22:46:26.084830 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8882e2e1-623c-454f-a6ef-195b25a9cb95\\\",\\\"systemUUID\\\":\\\"e2a2196d-095f-444c-b467-b5377cee59c0\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:26Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.089350 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.089388 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.089408 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.089431 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.089444 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:26Z","lastTransitionTime":"2026-02-16T22:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:26 crc kubenswrapper[4865]: E0216 22:46:26.103603 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8882e2e1-623c-454f-a6ef-195b25a9cb95\\\",\\\"systemUUID\\\":\\\"e2a2196d-095f-444c-b467-b5377cee59c0\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:26Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.108070 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.108104 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.108119 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.108143 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.108162 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:26Z","lastTransitionTime":"2026-02-16T22:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:26 crc kubenswrapper[4865]: E0216 22:46:26.119778 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8882e2e1-623c-454f-a6ef-195b25a9cb95\\\",\\\"systemUUID\\\":\\\"e2a2196d-095f-444c-b467-b5377cee59c0\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:26Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.123403 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.123450 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.123462 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.123481 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.123496 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:26Z","lastTransitionTime":"2026-02-16T22:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:26 crc kubenswrapper[4865]: E0216 22:46:26.134987 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8882e2e1-623c-454f-a6ef-195b25a9cb95\\\",\\\"systemUUID\\\":\\\"e2a2196d-095f-444c-b467-b5377cee59c0\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:26Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:26 crc kubenswrapper[4865]: E0216 22:46:26.135139 4865 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.143828 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.143878 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.143893 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.143922 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.143936 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:26Z","lastTransitionTime":"2026-02-16T22:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.247760 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.247852 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.247865 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.247882 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.247913 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:26Z","lastTransitionTime":"2026-02-16T22:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.356328 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.356361 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.356370 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.356385 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.356393 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:26Z","lastTransitionTime":"2026-02-16T22:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.385738 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 15:46:14.623426992 +0000 UTC Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.414182 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.414232 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.414226 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 22:46:26 crc kubenswrapper[4865]: E0216 22:46:26.414343 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 22:46:26 crc kubenswrapper[4865]: E0216 22:46:26.414424 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 22:46:26 crc kubenswrapper[4865]: E0216 22:46:26.414583 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.458802 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.458872 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.458891 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.458918 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.458936 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:26Z","lastTransitionTime":"2026-02-16T22:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.561560 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.561598 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.561607 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.561622 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.561632 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:26Z","lastTransitionTime":"2026-02-16T22:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.665707 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.665975 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.666077 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.666205 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.669695 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:26Z","lastTransitionTime":"2026-02-16T22:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.675913 4865 generic.go:334] "Generic (PLEG): container finished" podID="fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5" containerID="d9e249b0c4127dfa42d0aeb4663cec7ecc73cec31a922fc17a0c772cce72c017" exitCode=0 Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.675965 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-x4rgl" event={"ID":"fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5","Type":"ContainerDied","Data":"d9e249b0c4127dfa42d0aeb4663cec7ecc73cec31a922fc17a0c772cce72c017"} Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.694251 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8eb70ba-80dc-4449-b629-8f2d3462ac25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9420ab4fc2cfb723f98690b12676378560f2ca512ea2f23ea4108561bcf0a83b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d5f3b449d7f497dd16213f7a376d725ed75adaeb27449fa2005893b85cd8f9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c7ddcd1ac2d814341fef78f1d3dcb8d8e04d5e8506ca793c023085f0e019fc44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e74a443b63735422bd0f5ad851213f393074c4bda570e41ff102c84aa9ac2929\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e74a443b63735422bd0f5ad851213f393074c4bda570e41ff102c84aa9ac2929\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 22:46:20.218565 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 22:46:20.218864 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 22:46:20.220590 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3068580495/tls.crt::/tmp/serving-cert-3068580495/tls.key\\\\\\\"\\\\nI0216 22:46:20.677081 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 22:46:20.691270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 22:46:20.691373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 22:46:20.698439 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 22:46:20.698472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 22:46:20.713488 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 22:46:20.713510 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 22:46:20.713535 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 22:46:20.713543 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 22:46:20.713547 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 22:46:20.713551 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 22:46:20.713555 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 22:46:20.713558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 22:46:20.715166 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://083e95882332fbcae3b7e3c07a19c05645e560b7c5e92813464a5ca4e5e52a26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e53d3f26f921be607e616b7c8790e3be0605a2b94a3f7be2b62e1beaefb350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3e53d3f26f921be607e616b7c8790e3be0605a2b94a3f7be2b62e1beaefb350\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:26Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.713088 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:26Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.725145 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rhf5k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec3dce5a-cc80-4a4a-afa1-1ce88585fe7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af23c5919f064a067e1f6ffd9bcfc669c95f789fecc58b6de768c838fff8eeb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42pjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rhf5k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:26Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.745787 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fffded71eee2859beb8f01894e183602492f5c41373ef0e23ee150159d01ae4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fffded71eee2859beb8f01894e183602492f5c41373ef0e23ee150159d01ae4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v9gjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:26Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.763655 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://580bc5f5c63d040c31c80d1c34c70711c035b80d8bd34546ba77e2477a938304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:26Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.772802 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.772861 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.772872 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.772889 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.772902 4865 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:26Z","lastTransitionTime":"2026-02-16T22:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.776489 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:26Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.794812 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:26Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.813561 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tqmsq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"518e6107-6873-4bd2-86a6-e422763483ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c555b648518de93a1b9ecb1bb8b232aee9016dd148db53b2b4aaf96dd23951c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwt79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tqmsq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:26Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.830211 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5d9c6929ae494c038ff488c2fee47aad62d31a677069fab576505bc1c56c3ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd10e8581ae8a5ec147d964363cf296c1ea48987e27edfaaf23a4f2cc3b8806c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1
d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:26Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.845020 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p2bwl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fc94743-41ce-4311-b0a7-d24aec69e9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afb2eef5907b2fb1f43b59f3b8afb68253d49f3a83f0cdc5cd10df0e45962156\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mntqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p2bwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:26Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.861771 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5ed4f9f-4e35-4b99-a523-d9cd6f90b1fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c3c8e1511bfaab96c1e33d3e1d1dc227344cb92c64170dc40eef40341f9282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef4fbb630ee354131544dade7633424757a1a1b885fec2e271da25984dbec3ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f74aeafd4eb42c1fa802e03dd3df19fc25c3e5546e453008c167955d336ca2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca218b91fbbe3033c021358958b45efb93f3c1875a2d18446d682ad80ff55122\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:26Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.873883 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af5ee041-5763-4a28-9d12-7ba21bbb9dbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5377362d269a2fbd24e93ddf0ce2803daadff72dd5b6926424e2d818448eb907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c0be88c4425b343893860e323d0e9c1a661e3cd
0cadad6714da67ebadf57bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7sl6f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:26Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.875763 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.875811 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.875825 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:26 crc 
kubenswrapper[4865]: I0216 22:46:26.875848 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.875861 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:26Z","lastTransitionTime":"2026-02-16T22:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.889437 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4rgl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240fca5a6169674d04d04924dbb1859d9cfad4ebaf3bcb15b685d1dcb52d6879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://240fca5a6169674d04d04924dbb1859d9cfad4ebaf3bcb15b685d1dcb52d6879\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba2a6ff3b821f9bd20c15af6000c4b04b7382cca4bddba49c3cd3a95bf18661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ba2a6ff3b821f9bd20c15af6000c4b04b7382cca4bddba49c3cd3a95bf18661\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e050e591eac20bf9ccc57f3c4eac82e3a420e
9e850ad4057ce69c236a51dd5ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e050e591eac20bf9ccc57f3c4eac82e3a420e9e850ad4057ce69c236a51dd5ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df6fbcf30225717389e253f415a98dcb258103cf591c6802daeb657018386ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1df6fbcf30225717389e253f415a98dcb258103cf591c6802daeb657018386ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
2-16T22:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9e249b0c4127dfa42d0aeb4663cec7ecc73cec31a922fc17a0c772cce72c017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9e249b0c4127dfa42d0aeb4663cec7ecc73cec31a922fc17a0c772cce72c017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4rgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:26Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.903740 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67de42bb03576bab461ec24db86f0fb38229d2b303d174c1892468bcfd20a1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T22:46:26Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.978929 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.979301 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.979314 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.979338 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:26 crc kubenswrapper[4865]: I0216 22:46:26.979354 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:26Z","lastTransitionTime":"2026-02-16T22:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:27 crc kubenswrapper[4865]: I0216 22:46:27.082054 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:27 crc kubenswrapper[4865]: I0216 22:46:27.082127 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:27 crc kubenswrapper[4865]: I0216 22:46:27.082148 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:27 crc kubenswrapper[4865]: I0216 22:46:27.082175 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:27 crc kubenswrapper[4865]: I0216 22:46:27.082196 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:27Z","lastTransitionTime":"2026-02-16T22:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:27 crc kubenswrapper[4865]: I0216 22:46:27.185851 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:27 crc kubenswrapper[4865]: I0216 22:46:27.185896 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:27 crc kubenswrapper[4865]: I0216 22:46:27.185905 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:27 crc kubenswrapper[4865]: I0216 22:46:27.185924 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:27 crc kubenswrapper[4865]: I0216 22:46:27.185934 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:27Z","lastTransitionTime":"2026-02-16T22:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:27 crc kubenswrapper[4865]: I0216 22:46:27.289337 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:27 crc kubenswrapper[4865]: I0216 22:46:27.289402 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:27 crc kubenswrapper[4865]: I0216 22:46:27.289420 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:27 crc kubenswrapper[4865]: I0216 22:46:27.289447 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:27 crc kubenswrapper[4865]: I0216 22:46:27.289468 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:27Z","lastTransitionTime":"2026-02-16T22:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:27 crc kubenswrapper[4865]: I0216 22:46:27.386845 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 10:33:50.20484948 +0000 UTC Feb 16 22:46:27 crc kubenswrapper[4865]: I0216 22:46:27.392866 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:27 crc kubenswrapper[4865]: I0216 22:46:27.392922 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:27 crc kubenswrapper[4865]: I0216 22:46:27.392938 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:27 crc kubenswrapper[4865]: I0216 22:46:27.392964 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:27 crc kubenswrapper[4865]: I0216 22:46:27.392980 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:27Z","lastTransitionTime":"2026-02-16T22:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:27 crc kubenswrapper[4865]: I0216 22:46:27.496071 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:27 crc kubenswrapper[4865]: I0216 22:46:27.496138 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:27 crc kubenswrapper[4865]: I0216 22:46:27.496168 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:27 crc kubenswrapper[4865]: I0216 22:46:27.496202 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:27 crc kubenswrapper[4865]: I0216 22:46:27.496225 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:27Z","lastTransitionTime":"2026-02-16T22:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:27 crc kubenswrapper[4865]: I0216 22:46:27.601454 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:27 crc kubenswrapper[4865]: I0216 22:46:27.601537 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:27 crc kubenswrapper[4865]: I0216 22:46:27.601562 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:27 crc kubenswrapper[4865]: I0216 22:46:27.601596 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:27 crc kubenswrapper[4865]: I0216 22:46:27.601620 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:27Z","lastTransitionTime":"2026-02-16T22:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:27 crc kubenswrapper[4865]: I0216 22:46:27.693821 4865 generic.go:334] "Generic (PLEG): container finished" podID="fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5" containerID="de26abfeb26481d43117471fa3512ef69ab782712371782639c24c8232786478" exitCode=0 Feb 16 22:46:27 crc kubenswrapper[4865]: I0216 22:46:27.693924 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-x4rgl" event={"ID":"fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5","Type":"ContainerDied","Data":"de26abfeb26481d43117471fa3512ef69ab782712371782639c24c8232786478"} Feb 16 22:46:27 crc kubenswrapper[4865]: I0216 22:46:27.704311 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:27 crc kubenswrapper[4865]: I0216 22:46:27.704359 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:27 crc kubenswrapper[4865]: I0216 22:46:27.704376 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:27 crc kubenswrapper[4865]: I0216 22:46:27.704399 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:27 crc kubenswrapper[4865]: I0216 22:46:27.704417 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:27Z","lastTransitionTime":"2026-02-16T22:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:27 crc kubenswrapper[4865]: I0216 22:46:27.717959 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" event={"ID":"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9","Type":"ContainerStarted","Data":"2474653f61e75b7161c87fdc1b44a9a0081e61d87bef578f9915eeec0fccdf2d"} Feb 16 22:46:27 crc kubenswrapper[4865]: I0216 22:46:27.718597 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" Feb 16 22:46:27 crc kubenswrapper[4865]: I0216 22:46:27.718881 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" Feb 16 22:46:27 crc kubenswrapper[4865]: I0216 22:46:27.722059 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5ed4f9f-4e35-4b99-a523-d9cd6f90b1fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c3c8e1511bfaab96c1e33d3e1d1dc227344cb92c64170dc40eef40341f9282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef4fbb630ee354131544dade7633424757a1a1b885fec2e271da25984dbec3ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f74aeafd4eb42c1fa802e03dd3df19fc25c3e5546e453008c167955d336ca2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca218b91fbbe3033c021358958b45efb93f3c1875a2d18446d682ad80ff55122\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:27Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:27 crc kubenswrapper[4865]: I0216 22:46:27.743647 4865 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5d9c6929ae494c038ff488c2fee47aad62d31a677069fab576505bc1c56c3ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd10e8581ae8a5ec147d964363cf296c1ea48987e27edfaaf23a4f2cc3b8806c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:27Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:27 crc kubenswrapper[4865]: I0216 22:46:27.755671 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" Feb 16 22:46:27 crc kubenswrapper[4865]: I0216 22:46:27.756499 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" Feb 16 22:46:27 crc kubenswrapper[4865]: I0216 22:46:27.763388 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p2bwl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fc94743-41ce-4311-b0a7-d24aec69e9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afb2eef5907b2fb1f43b59f3b8afb68253d49f3a83f0cdc5cd10df0e45962156\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mntqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p2bwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:27Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:27 crc kubenswrapper[4865]: I0216 22:46:27.783152 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67de42bb03576bab461ec24db86f0fb38229d2b303d174c1892468bcfd20a1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T2
2:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:27Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:27 crc kubenswrapper[4865]: I0216 22:46:27.799484 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af5ee041-5763-4a28-9d12-7ba21bbb9dbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5377362d269a2fbd24e93ddf0ce2803daadff72dd5b6926424e2d818448eb907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c0be88c4425b343893860e323d0e9c1a661e3cd
0cadad6714da67ebadf57bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7sl6f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:27Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:27 crc kubenswrapper[4865]: I0216 22:46:27.808875 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:27 crc kubenswrapper[4865]: I0216 22:46:27.808941 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:27 crc kubenswrapper[4865]: I0216 22:46:27.808962 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:27 crc 
kubenswrapper[4865]: I0216 22:46:27.808988 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:27 crc kubenswrapper[4865]: I0216 22:46:27.809006 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:27Z","lastTransitionTime":"2026-02-16T22:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:46:27 crc kubenswrapper[4865]: I0216 22:46:27.824837 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4rgl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240fca5a6169674d04d04924dbb1859d9cfad4ebaf3bcb15b685d1dcb52d6879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://240fca5a6169674d04d04924dbb1859d9cfad4ebaf3bcb15b685d1dcb52d6879\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba2a6ff3b821f9bd20c15af6000c4b04b7382cca4bddba49c3cd3a95bf18661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ba2a6ff3b821f9bd20c15af6000c4b04b7382cca4bddba49c3cd3a95bf18661\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e050e591eac20bf9ccc57f3c4eac82e3a420e9e850ad4057ce69c236a51dd5ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e050e591eac20bf9ccc57f3c4eac82e3a420e9e850ad4057ce69c236a51dd5ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df6fbcf30225717389e253f415a98dcb258103cf591c6802daeb657018386ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1df6fbcf30225717389e253f415a98dcb258103cf591c6802daeb657018386ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9e249b0c4127dfa42d0aeb4663cec7ecc73cec31a922fc17a0c772cce72c017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9e249b0c4127dfa42d0aeb4663cec7ecc73cec31a922fc17a0c772cce72c017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de26abfeb26481d43117471fa3512ef69ab782712371782639c24c8232786478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de26abfeb26481d43117471fa3512ef69ab782712371782639c24c8232786478\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4rgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:27Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:27 crc kubenswrapper[4865]: I0216 22:46:27.847468 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8eb70ba-80dc-4449-b629-8f2d3462ac25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9420ab4fc2cfb723f98690b12676378560f2ca512ea2f23ea4108561bcf0a83b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d5f3b449d7f497dd16213f7a376d725ed75adaeb27449fa2005893b85cd8f9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ddcd1ac2d814341fef78f1d3dcb8d8e04d5e8506ca793c023085f0e019fc44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e74a443b63735422bd0f5ad851213f393074c4bda570e41ff102c84aa9ac2929\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e74a443b63735422bd0f5ad851213f393074c4bda570e41ff102c84aa9ac2929\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 22:46:20.218565 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 22:46:20.218864 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 22:46:20.220590 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3068580495/tls.crt::/tmp/serving-cert-3068580495/tls.key\\\\\\\"\\\\nI0216 22:46:20.677081 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 22:46:20.691270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 22:46:20.691373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 22:46:20.698439 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 22:46:20.698472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 22:46:20.713488 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 22:46:20.713510 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 22:46:20.713535 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 22:46:20.713543 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 22:46:20.713547 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 22:46:20.713551 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 22:46:20.713555 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 22:46:20.713558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 22:46:20.715166 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://083e95882332fbcae3b7e3c07a19c05645e560b7c5e92813464a5ca4e5e52a26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e53d3f26f921be607e616b7c8790e3be0605a2b94a3f7be2b62e1beaefb350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3e
53d3f26f921be607e616b7c8790e3be0605a2b94a3f7be2b62e1beaefb350\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:27Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:27 crc kubenswrapper[4865]: I0216 22:46:27.878591 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:27Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:27 crc kubenswrapper[4865]: I0216 22:46:27.895113 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rhf5k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec3dce5a-cc80-4a4a-afa1-1ce88585fe7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af23c5919f064a067e1f6ffd9bcfc669c95f789fecc58b6de768c838fff8eeb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42pjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rhf5k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:27Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:27 crc kubenswrapper[4865]: I0216 22:46:27.913682 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:27 crc kubenswrapper[4865]: I0216 22:46:27.913753 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:27 crc kubenswrapper[4865]: I0216 22:46:27.913779 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:27 crc kubenswrapper[4865]: I0216 22:46:27.913818 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:27 crc kubenswrapper[4865]: I0216 22:46:27.913844 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:27Z","lastTransitionTime":"2026-02-16T22:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:27 crc kubenswrapper[4865]: I0216 22:46:27.922817 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fffded71eee2859beb8f01894e183602492f5c41373ef0e23ee150159d01ae4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fffded71eee2859beb8f01894e183602492f5c41373ef0e23ee150159d01ae4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v9gjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:27Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:27 crc kubenswrapper[4865]: I0216 22:46:27.942508 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://580bc5f5c63d040c31c80d1c34c70711c035b80d8bd34546ba77e2477a938304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:27Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:27 crc kubenswrapper[4865]: I0216 22:46:27.962973 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:27Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:27 crc kubenswrapper[4865]: I0216 22:46:27.981182 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:27Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:27 crc kubenswrapper[4865]: I0216 22:46:27.999899 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tqmsq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"518e6107-6873-4bd2-86a6-e422763483ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c555b648518de93a1b9ecb1bb8b232aee9016dd148db53b2b4aaf96dd23951c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwt79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tqmsq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:27Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.013065 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5d9c6929ae494c038ff488c2fee47aad62d31a677069fab576505bc1c56c3ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd10e8581ae8a5ec147d964363cf296c1ea48987e27edfaaf23a4f2cc3b8806c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1
d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:28Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.017017 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.017049 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.017062 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.017081 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.017093 4865 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:28Z","lastTransitionTime":"2026-02-16T22:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.023918 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p2bwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fc94743-41ce-4311-b0a7-d24aec69e9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afb2eef5907b2fb1f43b59f3b8afb68253d49f3a83f0cdc5cd10df0e45962156\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mntqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p2bwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:28Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.037992 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5ed4f9f-4e35-4b99-a523-d9cd6f90b1fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c3c8e1511bfaab96c1e33d3e1d1dc227344cb92c64170dc40eef40341f9282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef4fbb630ee354131544dade7633424757a1a1b885fec2e271da25984dbec3ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f74aeafd4eb42c1fa802e03dd3df19fc25c3e5546e453008c167955d336ca2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca218b91fbbe3033c021358958b45efb93f3c1875a2d18446d682ad80ff55122\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:28Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.052105 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af5ee041-5763-4a28-9d12-7ba21bbb9dbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5377362d269a2fbd24e93ddf0ce2803daadff72dd5b6926424e2d818448eb907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c0be88c4425b343893860e323d0e9c1a661e3cd
0cadad6714da67ebadf57bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7sl6f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:28Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.073793 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4rgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240fca5a6169674d04d04924dbb1859d9cfad4ebaf3bcb15b685d1dcb52d6879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://240fca5a6169674d04d04924dbb1859d9cfad4ebaf3bcb15b685d1dcb52d6879\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba2a6ff3b821f9bd20c15af6000c4b04b7382cca4bddba49c3cd3a95bf18661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ba2a6ff3b821f9bd20c15af6000c4b04b7382cca4bddba49c3cd3a95bf18661\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e050e591eac20bf9ccc57f3c4eac82e3a420e9e850ad4057ce69c236a51dd5ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e050e591eac20bf9ccc57f3c4eac82e3a420e9e850ad4057ce69c236a51dd5ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df6fbcf30225717389e253f415a98dcb258103cf591c6802daeb657018386ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1df6fbcf30225717389e253f415a98dcb258103cf591c6802daeb657018386ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9e249b0c4127dfa42d0aeb4663cec7ecc73cec31a922fc17a0c772cce72c017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9e249b0c4127dfa42d0aeb4663cec7ecc73cec31a922fc17a0c772cce72c017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de26abfeb26481d43117471fa3512ef69ab782712371782639c24c8232786478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de26abfeb26481d43117471fa3512ef69ab782712371782639c24c8232786478\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4rgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:28Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.084422 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.084490 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 22:46:28 crc kubenswrapper[4865]: E0216 22:46:28.084571 4865 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 22:46:28 crc kubenswrapper[4865]: E0216 22:46:28.084607 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 22:46:36.084562141 +0000 UTC m=+36.408269142 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:46:28 crc kubenswrapper[4865]: E0216 22:46:28.084647 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 22:46:36.084630893 +0000 UTC m=+36.408337894 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.084764 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 22:46:28 crc kubenswrapper[4865]: E0216 22:46:28.084887 4865 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 22:46:28 crc kubenswrapper[4865]: E0216 22:46:28.084930 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 22:46:36.084918121 +0000 UTC m=+36.408625202 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.087163 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67de42bb03576bab461ec24db86f0fb38229d2b303d174c1892468bcfd20a1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":
\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:28Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.107922 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8eb70ba-80dc-4449-b629-8f2d3462ac25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9420ab4fc2cfb723f98690b12676378560f2ca512ea2f23ea4108561bcf0a83b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d5f3b449d7f497dd16213f7a376d725ed75adaeb27449fa2005893b85cd8f9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c7ddcd1ac2d814341fef78f1d3dcb8d8e04d5e8506ca793c023085f0e019fc44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e74a443b63735422bd0f5ad851213f393074c4bda570e41ff102c84aa9ac2929\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e74a443b63735422bd0f5ad851213f393074c4bda570e41ff102c84aa9ac2929\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 22:46:20.218565 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 22:46:20.218864 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 22:46:20.220590 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3068580495/tls.crt::/tmp/serving-cert-3068580495/tls.key\\\\\\\"\\\\nI0216 22:46:20.677081 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 22:46:20.691270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 22:46:20.691373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 22:46:20.698439 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 22:46:20.698472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 22:46:20.713488 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 22:46:20.713510 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 22:46:20.713535 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 22:46:20.713543 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 22:46:20.713547 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 22:46:20.713551 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 22:46:20.713555 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 22:46:20.713558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 22:46:20.715166 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://083e95882332fbcae3b7e3c07a19c05645e560b7c5e92813464a5ca4e5e52a26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e53d3f26f921be607e616b7c8790e3be0605a2b94a3f7be2b62e1beaefb350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3e53d3f26f921be607e616b7c8790e3be0605a2b94a3f7be2b62e1beaefb350\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:28Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.119814 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.119833 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.119842 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.119856 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.119866 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:28Z","lastTransitionTime":"2026-02-16T22:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.121250 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:28Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.133084 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rhf5k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec3dce5a-cc80-4a4a-afa1-1ce88585fe7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af23c5919f064a067e1f6ffd9bcfc669c95f789fecc58b6de768c838fff8eeb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42pjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rhf5k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:28Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.152692 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f763f86f21e0ae26d6da8694383bf65bf9d2ee2cc3f6331559455f18c1a5ea12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdf0a2e5286d172d17780b0ba829a7b1365b3fcb130069d7497b207041e54d67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1d5f61857de97cd302040f99ba0cf8f6aa30fa344ee9bb5d486222bb59280f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48168de44ee347280252a85da7abba068922ce7558bd01c8e1d11af13f305dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fbb61043059110013736d3d693e6d602b199a4c62907436a4eb88aae192cd51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://decf362b373d3e8cc82caf2afe0405a61ec72f794680345ee6bd4d7acee32cd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2474653f61e75b7161c87fdc1b44a9a0081e61d87bef578f9915eeec0fccdf2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dee85f35d78a0b30018c179a53dd39cbb1b0edfa9897290cd13fdc9680461239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fffded71eee2859beb8f01894e183602492f5c41373ef0e23ee150159d01ae4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fffded71eee2859beb8f01894e183602492f5c41373ef0e23ee150159d01ae4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v9gjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:28Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.169130 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://580bc5f5c63d040c31c80d1c34c70711c035b80d8bd34546ba77e2477a938304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-1
6T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:28Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.186498 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.186552 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 22:46:28 crc kubenswrapper[4865]: E0216 22:46:28.186722 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 22:46:28 crc kubenswrapper[4865]: E0216 
22:46:28.186752 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 22:46:28 crc kubenswrapper[4865]: E0216 22:46:28.186768 4865 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 22:46:28 crc kubenswrapper[4865]: E0216 22:46:28.186822 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 22:46:36.186805767 +0000 UTC m=+36.510512738 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 22:46:28 crc kubenswrapper[4865]: E0216 22:46:28.187233 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 22:46:28 crc kubenswrapper[4865]: E0216 22:46:28.187259 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 22:46:28 crc kubenswrapper[4865]: E0216 22:46:28.187270 4865 projected.go:194] Error preparing data for projected volume 
kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 22:46:28 crc kubenswrapper[4865]: E0216 22:46:28.187373 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 22:46:36.187311401 +0000 UTC m=+36.511018372 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.188613 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:28Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.207796 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:28Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.222858 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.222984 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.223005 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.223029 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.223046 4865 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:28Z","lastTransitionTime":"2026-02-16T22:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.227964 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tqmsq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"518e6107-6873-4bd2-86a6-e422763483ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c555b648518de93a1b9ecb1bb8b232aee9016dd148db53b2b4aaf96dd23951c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwt79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:
46:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tqmsq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:28Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.332815 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.332915 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.332954 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.332992 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.333017 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:28Z","lastTransitionTime":"2026-02-16T22:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.387592 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 11:45:45.036946463 +0000 UTC Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.413867 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.413932 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.414066 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 22:46:28 crc kubenswrapper[4865]: E0216 22:46:28.414077 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 22:46:28 crc kubenswrapper[4865]: E0216 22:46:28.414249 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 22:46:28 crc kubenswrapper[4865]: E0216 22:46:28.414457 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.436820 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.436893 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.436909 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.436934 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.436950 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:28Z","lastTransitionTime":"2026-02-16T22:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.492706 4865 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.540862 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.540956 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.540976 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.541002 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.541019 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:28Z","lastTransitionTime":"2026-02-16T22:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.645388 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.645460 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.645478 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.645510 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.645529 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:28Z","lastTransitionTime":"2026-02-16T22:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.729218 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-x4rgl" event={"ID":"fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5","Type":"ContainerStarted","Data":"0091174cc072ed0bc0bfa1a1501b4fa2b801585525dae159b591213c74ddbf5f"} Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.729337 4865 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.748486 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.748553 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.748576 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.748607 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.748628 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:28Z","lastTransitionTime":"2026-02-16T22:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.753404 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8eb70ba-80dc-4449-b629-8f2d3462ac25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9420ab4fc2cfb723f98690b12676378560f2ca512ea2f23ea4108561bcf0a83b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d5f3b449d7f497dd16213f7a376d725ed75adaeb27449fa2005893b85cd8f9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c7ddcd1ac2d814341fef78f1d3dcb8d8e04d5e8506ca793c023085f0e019fc44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e74a443b63735422bd0f5ad851213f393074c4bda570e41ff102c84aa9ac2929\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e74a443b63735422bd0f5ad851213f393074c4bda570e41ff102c84aa9ac2929\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 22:46:20.218565 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 22:46:20.218864 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 22:46:20.220590 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3068580495/tls.crt::/tmp/serving-cert-3068580495/tls.key\\\\\\\"\\\\nI0216 22:46:20.677081 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 22:46:20.691270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 22:46:20.691373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 22:46:20.698439 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 22:46:20.698472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 22:46:20.713488 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 22:46:20.713510 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 22:46:20.713535 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 22:46:20.713543 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 22:46:20.713547 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 22:46:20.713551 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 22:46:20.713555 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 22:46:20.713558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 22:46:20.715166 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://083e95882332fbcae3b7e3c07a19c05645e560b7c5e92813464a5ca4e5e52a26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e53d3f26f921be607e616b7c8790e3be0605a2b94a3f7be2b62e1beaefb350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3e53d3f26f921be607e616b7c8790e3be0605a2b94a3f7be2b62e1beaefb350\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:28Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.773316 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:28Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.788084 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rhf5k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec3dce5a-cc80-4a4a-afa1-1ce88585fe7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af23c5919f064a067e1f6ffd9bcfc669c95f789fecc58b6de768c838fff8eeb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42pjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rhf5k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:28Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.811457 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f763f86f21e0ae26d6da8694383bf65bf9d2ee2cc3f6331559455f18c1a5ea12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdf0a2e5286d172d17780b0ba829a7b1365b3fcb130069d7497b207041e54d67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1d5f61857de97cd302040f99ba0cf8f6aa30fa344ee9bb5d486222bb59280f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48168de44ee347280252a85da7abba068922ce7558bd01c8e1d11af13f305dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fbb61043059110013736d3d693e6d602b199a4c62907436a4eb88aae192cd51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://decf362b373d3e8cc82caf2afe0405a61ec72f794680345ee6bd4d7acee32cd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2474653f61e75b7161c87fdc1b44a9a0081e61d87bef578f9915eeec0fccdf2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dee85f35d78a0b30018c179a53dd39cbb1b0edfa9897290cd13fdc9680461239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fffded71eee2859beb8f01894e183602492f5c41373ef0e23ee150159d01ae4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fffded71eee2859beb8f01894e183602492f5c41373ef0e23ee150159d01ae4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v9gjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:28Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.844080 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://580bc5f5c63d040c31c80d1c34c70711c035b80d8bd34546ba77e2477a938304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-1
6T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:28Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.852417 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.852632 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.852689 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.852770 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.852856 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:28Z","lastTransitionTime":"2026-02-16T22:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.866606 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:28Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.880876 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:28Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.895471 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tqmsq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"518e6107-6873-4bd2-86a6-e422763483ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c555b648518de93a1b9ecb1bb8b232aee9016dd148db53b2b4aaf96dd23951c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwt79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tqmsq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:28Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.909872 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5ed4f9f-4e35-4b99-a523-d9cd6f90b1fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c3c8e1511bfaab96c1e33d3e1d1dc227344cb92c64170dc40eef40341f9282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef4fbb630ee354131544dade7633424757a1a1b885fec2e271da25984dbec3ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f74aeafd4eb42c1fa802e03dd3df19fc25c3e5546e453008c167955d336ca2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca218b91fbbe3033c021358958b45efb93f3c1875a2d18446d682ad80ff55122\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:28Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.926097 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5d9c6929ae494c038ff488c2fee47aad62d31a677069fab576505bc1c56c3ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd10e8581ae8a5ec147d964363cf296c1ea48987e27edfaaf23a4f2cc3b8806c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:28Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.936883 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p2bwl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fc94743-41ce-4311-b0a7-d24aec69e9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afb2eef5907b2fb1f43b59f3b8afb68253d49f3a83f0cdc5cd10df0e45962156\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mntqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p2bwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:28Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.947291 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67de42bb03576bab461ec24db86f0fb38229d2b303d174c1892468bcfd20a1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T2
2:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:28Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.956357 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.956449 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.956471 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.956498 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.956541 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:28Z","lastTransitionTime":"2026-02-16T22:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.960472 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af5ee041-5763-4a28-9d12-7ba21bbb9dbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5377362d269a2fbd24e93ddf0ce2803daadff72dd5b6926424e2d818448eb907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c0be88c4425b343893860e323d0e9c1a661e3cd0cadad6714da67ebadf57bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7sl6f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:28Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:28 crc kubenswrapper[4865]: I0216 22:46:28.975316 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4rgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0091174cc072ed0bc0bfa1a1501b4fa2b801585525dae159b591213c74ddbf5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240fca5a6169674d04d04924dbb1859d9cfad4ebaf3bcb15b685d1dcb52d6879\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://240fca5a6169674d04d04924dbb1859d9cfad4ebaf3bcb15b685d1dcb52d6879\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba2a6ff3b821f9bd20c15af6000c4b04b7382cca4bddba49c3cd3a95bf18661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ba2a6ff3b821f9bd20c15af6000c4b04b7382cca4bddba49c3cd3a95bf18661\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e050e591eac20bf9ccc57f3c4eac82e3a420e9e850ad4057ce69c236a51dd5ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e050e591eac20bf9ccc57f3c4eac82e3a420e9e850ad4057ce69c236a51dd5ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df6f
bcf30225717389e253f415a98dcb258103cf591c6802daeb657018386ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1df6fbcf30225717389e253f415a98dcb258103cf591c6802daeb657018386ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9e249b0c4127dfa42d0aeb4663cec7ecc73cec31a922fc17a0c772cce72c017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9e249b0c4127dfa42d0aeb4663cec7ecc73cec31a922fc17a0c772cce72c017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de26abfeb26481d43117471fa3512ef69ab782712371782639c24c8232786478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de26abfeb26481d43117471fa3512ef69ab782712371782639c24c8232786478\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4rgl\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:28Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:29 crc kubenswrapper[4865]: I0216 22:46:29.060105 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:29 crc kubenswrapper[4865]: I0216 22:46:29.060166 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:29 crc kubenswrapper[4865]: I0216 22:46:29.060183 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:29 crc kubenswrapper[4865]: I0216 22:46:29.060209 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:29 crc kubenswrapper[4865]: I0216 22:46:29.060227 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:29Z","lastTransitionTime":"2026-02-16T22:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:29 crc kubenswrapper[4865]: I0216 22:46:29.168115 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:29 crc kubenswrapper[4865]: I0216 22:46:29.168623 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:29 crc kubenswrapper[4865]: I0216 22:46:29.168773 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:29 crc kubenswrapper[4865]: I0216 22:46:29.168917 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:29 crc kubenswrapper[4865]: I0216 22:46:29.169063 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:29Z","lastTransitionTime":"2026-02-16T22:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:29 crc kubenswrapper[4865]: I0216 22:46:29.272647 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:29 crc kubenswrapper[4865]: I0216 22:46:29.272699 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:29 crc kubenswrapper[4865]: I0216 22:46:29.272716 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:29 crc kubenswrapper[4865]: I0216 22:46:29.272742 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:29 crc kubenswrapper[4865]: I0216 22:46:29.272759 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:29Z","lastTransitionTime":"2026-02-16T22:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:29 crc kubenswrapper[4865]: I0216 22:46:29.377765 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:29 crc kubenswrapper[4865]: I0216 22:46:29.377830 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:29 crc kubenswrapper[4865]: I0216 22:46:29.377847 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:29 crc kubenswrapper[4865]: I0216 22:46:29.377874 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:29 crc kubenswrapper[4865]: I0216 22:46:29.377894 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:29Z","lastTransitionTime":"2026-02-16T22:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:29 crc kubenswrapper[4865]: I0216 22:46:29.388006 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 01:24:04.576485982 +0000 UTC Feb 16 22:46:29 crc kubenswrapper[4865]: I0216 22:46:29.482066 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:29 crc kubenswrapper[4865]: I0216 22:46:29.482418 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:29 crc kubenswrapper[4865]: I0216 22:46:29.482584 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:29 crc kubenswrapper[4865]: I0216 22:46:29.482733 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:29 crc kubenswrapper[4865]: I0216 22:46:29.482852 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:29Z","lastTransitionTime":"2026-02-16T22:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:29 crc kubenswrapper[4865]: I0216 22:46:29.586586 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:29 crc kubenswrapper[4865]: I0216 22:46:29.586939 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:29 crc kubenswrapper[4865]: I0216 22:46:29.587086 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:29 crc kubenswrapper[4865]: I0216 22:46:29.587227 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:29 crc kubenswrapper[4865]: I0216 22:46:29.587408 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:29Z","lastTransitionTime":"2026-02-16T22:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:29 crc kubenswrapper[4865]: I0216 22:46:29.696359 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:29 crc kubenswrapper[4865]: I0216 22:46:29.696457 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:29 crc kubenswrapper[4865]: I0216 22:46:29.696480 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:29 crc kubenswrapper[4865]: I0216 22:46:29.696532 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:29 crc kubenswrapper[4865]: I0216 22:46:29.696552 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:29Z","lastTransitionTime":"2026-02-16T22:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:29 crc kubenswrapper[4865]: I0216 22:46:29.733849 4865 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 22:46:29 crc kubenswrapper[4865]: I0216 22:46:29.799505 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:29 crc kubenswrapper[4865]: I0216 22:46:29.799589 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:29 crc kubenswrapper[4865]: I0216 22:46:29.799611 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:29 crc kubenswrapper[4865]: I0216 22:46:29.799645 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:29 crc kubenswrapper[4865]: I0216 22:46:29.799666 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:29Z","lastTransitionTime":"2026-02-16T22:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:29 crc kubenswrapper[4865]: I0216 22:46:29.902513 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:29 crc kubenswrapper[4865]: I0216 22:46:29.902628 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:29 crc kubenswrapper[4865]: I0216 22:46:29.902646 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:29 crc kubenswrapper[4865]: I0216 22:46:29.902673 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:29 crc kubenswrapper[4865]: I0216 22:46:29.902703 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:29Z","lastTransitionTime":"2026-02-16T22:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.005233 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.005651 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.005660 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.005677 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.005687 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:30Z","lastTransitionTime":"2026-02-16T22:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.109041 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.109083 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.109094 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.109115 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.109126 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:30Z","lastTransitionTime":"2026-02-16T22:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.211138 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.211177 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.211189 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.211207 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.211218 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:30Z","lastTransitionTime":"2026-02-16T22:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.312371 4865 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.315675 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.315752 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.315778 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.315810 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.315834 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:30Z","lastTransitionTime":"2026-02-16T22:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.389065 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 08:16:34.336218371 +0000 UTC Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.413934 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.414025 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.414029 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 22:46:30 crc kubenswrapper[4865]: E0216 22:46:30.414110 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 22:46:30 crc kubenswrapper[4865]: E0216 22:46:30.414252 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 22:46:30 crc kubenswrapper[4865]: E0216 22:46:30.414439 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.417998 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.418024 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.418033 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.418043 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.418052 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:30Z","lastTransitionTime":"2026-02-16T22:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.433636 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67de42bb03576bab461ec24db86f0fb38229d2b303d174c1892468bcfd20a1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:30Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.455085 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af5ee041-5763-4a28-9d12-7ba21bbb9dbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5377362d269a2fbd24e93ddf0ce2803daadff72dd5b6926424e2d818448eb907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c0be88c4425b343893860e323d0e9c1a661e3cd0cadad6714da67ebadf57bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7sl6f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-16T22:46:30Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.472073 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4rgl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0091174cc072ed0bc0bfa1a1501b4fa2b801585525dae159b591213c74ddbf5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240fca5a6169674d04d04924dbb1859d9cfad4ebaf3bcb15b685d1dcb52d6879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://240fca5a6169674d04d04924dbb1859d9cfad4ebaf3bcb15b685d1dcb52d6879\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba2a6ff3b821f9bd20c15af6000c4b04b7382cca4bddba49c3cd3a95bf18661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://5ba2a6ff3b821f9bd20c15af6000c4b04b7382cca4bddba49c3cd3a95bf18661\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e050e591eac20bf9ccc57f3c4eac82e3a420e9e850ad4057ce69c236a51dd5ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e050e591eac20bf9ccc57f3c4eac82e3a420e9e850ad4057ce69c236a51dd5ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df6fbcf30225717389e253f415a98dcb258103cf591c6802daeb657018386ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1df6fbcf30225717389e253f415a98dcb258103cf591c6802daeb657018386ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9e249b0c4127dfa42d0aeb4663cec7ecc73cec31a922fc17a0c772cce72c017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9e249b0c4127dfa42d0aeb4663cec7ecc73cec31a922fc17a0c772cce72c017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de26abfeb26481d43117471fa3512ef69ab782712371782639c24c8232786478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de26abfeb26481d43117471fa3512ef69ab782712371782639c24c8232786478\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":
\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4rgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:30Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.494557 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8eb70ba-80dc-4449-b629-8f2d3462ac25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9420ab4fc2cfb723f98690b12676378560f2ca512ea2f23ea4108561bcf0a83b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d5f3b449d7f497dd16213f7a376d725ed75adaeb27449fa2005893b85cd8f9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c7ddcd1ac2d814341fef78f1d3dcb8d8e04d5e8506ca793c023085f0e019fc44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e74a443b63735422bd0f5ad851213f393074c4bda570e41ff102c84aa9ac2929\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e74a443b63735422bd0f5ad851213f393074c4bda570e41ff102c84aa9ac2929\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 22:46:20.218565 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 22:46:20.218864 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 22:46:20.220590 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3068580495/tls.crt::/tmp/serving-cert-3068580495/tls.key\\\\\\\"\\\\nI0216 22:46:20.677081 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 22:46:20.691270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 22:46:20.691373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 22:46:20.698439 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 22:46:20.698472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 22:46:20.713488 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 22:46:20.713510 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 22:46:20.713535 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 22:46:20.713543 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 22:46:20.713547 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 22:46:20.713551 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 22:46:20.713555 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 22:46:20.713558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 22:46:20.715166 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://083e95882332fbcae3b7e3c07a19c05645e560b7c5e92813464a5ca4e5e52a26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e53d3f26f921be607e616b7c8790e3be0605a2b94a3f7be2b62e1beaefb350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3e53d3f26f921be607e616b7c8790e3be0605a2b94a3f7be2b62e1beaefb350\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:30Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.513758 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:30Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.520824 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.520860 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.520874 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.520893 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.520905 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:30Z","lastTransitionTime":"2026-02-16T22:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.526937 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rhf5k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec3dce5a-cc80-4a4a-afa1-1ce88585fe7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af23c5919f064a067e1f6ffd9bcfc669c95f789fecc58b6de768c838fff8eeb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"
},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42pjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rhf5k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:30Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.556518 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f763f86f21e0ae26d6da8694383bf65bf9d2ee2cc3f6331559455f18c1a5ea12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdf0a2e5286d172d17780b0ba829a7b1365b3fcb130069d7497b207041e54d67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1d5f61857de97cd302040f99ba0cf8f6aa30fa344ee9bb5d486222bb59280f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48168de44ee347280252a85da7abba068922ce7558bd01c8e1d11af13f305dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fbb61043059110013736d3d693e6d602b199a4c62907436a4eb88aae192cd51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://decf362b373d3e8cc82caf2afe0405a61ec72f794680345ee6bd4d7acee32cd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2474653f61e75b7161c87fdc1b44a9a0081e61d87bef578f9915eeec0fccdf2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dee85f35d78a0b30018c179a53dd39cbb1b0edfa9897290cd13fdc9680461239\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fffded71eee2859beb8f01894e183602492f5c41373ef0e23ee150159d01ae4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fffded71eee2859beb8f01894e183602492f5c41373ef0e23ee150159d01ae4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v9gjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:30Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.571512 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://580bc5f5c63d040c31c80d1c34c70711c035b80d8bd34546ba77e2477a938304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:30Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.589483 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:30Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.610258 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:30Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.656725 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.656877 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.656956 4865 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.657047 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.657123 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:30Z","lastTransitionTime":"2026-02-16T22:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.657097 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tqmsq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"518e6107-6873-4bd2-86a6-e422763483ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c555b648518de93a1b9ecb1bb8b232aee9016dd148
db53b2b4aaf96dd23951c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernet
es.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwt79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tqmsq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:30Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.673000 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5ed4f9f-4e35-4b99-a523-d9cd6f90b1fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c3c8e1511bfaab96c1e33d3e1d1dc227344cb92c64170dc40eef40341f9282\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef4fbb630ee354131544dade7633424757a1a1b885fec2e271da25984dbec3ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f74aeafd4eb42c1fa802e03dd3df19fc25c3e5546e453008c167955d336ca2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\
",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca218b91fbbe3033c021358958b45efb93f3c1875a2d18446d682ad80ff55122\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:30Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.690395 4865 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5d9c6929ae494c038ff488c2fee47aad62d31a677069fab576505bc1c56c3ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd10e8581ae8a5ec147d964363cf296c1ea48987e27edfaaf23a4f2cc3b8806c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\
\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:30Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.701015 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p2bwl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fc94743-41ce-4311-b0a7-d24aec69e9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afb2eef5907b2fb1f43b59f3b8afb68253d49f3a83f0cdc5cd10df0e45962156\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mntqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p2bwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:30Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.739414 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v9gjl_2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9/ovnkube-controller/0.log" Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.742587 4865 generic.go:334] "Generic (PLEG): container finished" podID="2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" containerID="2474653f61e75b7161c87fdc1b44a9a0081e61d87bef578f9915eeec0fccdf2d" exitCode=1 Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.742630 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" event={"ID":"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9","Type":"ContainerDied","Data":"2474653f61e75b7161c87fdc1b44a9a0081e61d87bef578f9915eeec0fccdf2d"} Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.743345 4865 scope.go:117] "RemoveContainer" containerID="2474653f61e75b7161c87fdc1b44a9a0081e61d87bef578f9915eeec0fccdf2d" Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.760255 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.760543 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.760707 4865 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.760859 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.760260 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5ed4f9f-4e35-4b99-a523-d9cd6f90b1fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c3c8e1511bfaab96c1e33d3e1d1dc227344cb92c64170dc40eef40341f9282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"na
me\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef4fbb630ee354131544dade7633424757a1a1b885fec2e271da25984dbec3ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f74aeafd4eb42c1fa802e03dd3df19fc25c3e5546e453008c167955d336ca2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca218b91fbbe3033c021358958b45efb93f3c1875a2d18446d682ad80ff55122\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:30Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.760986 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:30Z","lastTransitionTime":"2026-02-16T22:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.774619 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5d9c6929ae494c038ff488c2fee47aad62d31a677069fab576505bc1c56c3ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd10e8581ae8a5ec147d964363cf296c1ea48987e27edfaaf23a4f2cc3b8806c\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:30Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.787038 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p2bwl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fc94743-41ce-4311-b0a7-d24aec69e9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afb2eef5907b2fb1f43b59f3b8afb68253d49f3a83f0cdc5cd10df0e45962156\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mntqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p2bwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:30Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.801877 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67de42bb03576bab461ec24db86f0fb38229d2b303d174c1892468bcfd20a1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T2
2:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:30Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.815624 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af5ee041-5763-4a28-9d12-7ba21bbb9dbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5377362d269a2fbd24e93ddf0ce2803daadff72dd5b6926424e2d818448eb907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c0be88c4425b343893860e323d0e9c1a661e3cd
0cadad6714da67ebadf57bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7sl6f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:30Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.833976 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4rgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0091174cc072ed0bc0bfa1a1501b4fa2b801585525dae159b591213c74ddbf5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240fca5a6169674d04d04924dbb1859d9cfad4ebaf3bcb15b685d1dcb52d6879\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://240fca5a6169674d04d04924dbb1859d9cfad4ebaf3bcb15b685d1dcb52d6879\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba2a6ff3b821f9bd20c15af6000c4b04b7382cca4bddba49c3cd3a95bf18661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ba2a6ff3b821f9bd20c15af6000c4b04b7382cca4bddba49c3cd3a95bf18661\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e050e591eac20bf9ccc57f3c4eac82e3a420e9e850ad4057ce69c236a51dd5ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e050e591eac20bf9ccc57f3c4eac82e3a420e9e850ad4057ce69c236a51dd5ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df6f
bcf30225717389e253f415a98dcb258103cf591c6802daeb657018386ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1df6fbcf30225717389e253f415a98dcb258103cf591c6802daeb657018386ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9e249b0c4127dfa42d0aeb4663cec7ecc73cec31a922fc17a0c772cce72c017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9e249b0c4127dfa42d0aeb4663cec7ecc73cec31a922fc17a0c772cce72c017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de26abfeb26481d43117471fa3512ef69ab782712371782639c24c8232786478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de26abfeb26481d43117471fa3512ef69ab782712371782639c24c8232786478\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4rgl\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:30Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.857096 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8eb70ba-80dc-4449-b629-8f2d3462ac25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9420ab4fc2cfb723f98690b12676378560f2ca512ea2f23ea4108561bcf0a83b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d5f3b449d7f497dd16213f7a376d725ed75adaeb27449fa2005893b85cd8f9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c7ddcd1ac2d814341fef78f1d3dcb8d8e04d5e8506ca793c023085f0e019fc44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e74a443b63735422bd0f5ad851213f393074c4bda570e41ff102c84aa9ac2929\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e74a443b63735422bd0f5ad851213f393074c4bda570e41ff102c84aa9ac2929\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 22:46:20.218565 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 22:46:20.218864 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 22:46:20.220590 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3068580495/tls.crt::/tmp/serving-cert-3068580495/tls.key\\\\\\\"\\\\nI0216 22:46:20.677081 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 22:46:20.691270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 22:46:20.691373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 22:46:20.698439 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 22:46:20.698472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 22:46:20.713488 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 22:46:20.713510 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 22:46:20.713535 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 22:46:20.713543 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 22:46:20.713547 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 22:46:20.713551 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 22:46:20.713555 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 22:46:20.713558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 22:46:20.715166 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://083e95882332fbcae3b7e3c07a19c05645e560b7c5e92813464a5ca4e5e52a26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e53d3f26f921be607e616b7c8790e3be0605a2b94a3f7be2b62e1beaefb350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3e53d3f26f921be607e616b7c8790e3be0605a2b94a3f7be2b62e1beaefb350\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:30Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.866973 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.867021 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.867033 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.867054 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.867069 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:30Z","lastTransitionTime":"2026-02-16T22:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.877464 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:30Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.890900 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rhf5k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec3dce5a-cc80-4a4a-afa1-1ce88585fe7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af23c5919f064a067e1f6ffd9bcfc669c95f789fecc58b6de768c838fff8eeb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42pjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rhf5k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:30Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.922764 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f763f86f21e0ae26d6da8694383bf65bf9d2ee2cc3f6331559455f18c1a5ea12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdf0a2e5286d172d17780b0ba829a7b1365b3fcb130069d7497b207041e54d67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1d5f61857de97cd302040f99ba0cf8f6aa30fa344ee9bb5d486222bb59280f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48168de44ee347280252a85da7abba068922ce7558bd01c8e1d11af13f305dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fbb61043059110013736d3d693e6d602b199a4c62907436a4eb88aae192cd51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://decf362b373d3e8cc82caf2afe0405a61ec72f794680345ee6bd4d7acee32cd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2474653f61e75b7161c87fdc1b44a9a0081e61d87bef578f9915eeec0fccdf2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2474653f61e75b7161c87fdc1b44a9a0081e61d87bef578f9915eeec0fccdf2d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T22:46:30Z\\\",\\\"message\\\":\\\"11] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 22:46:30.602741 6166 
handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 22:46:30.602790 6166 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 22:46:30.602810 6166 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 22:46:30.602879 6166 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 22:46:30.602912 6166 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 22:46:30.602934 6166 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 22:46:30.602982 6166 factory.go:656] Stopping watch factory\\\\nI0216 22:46:30.603005 6166 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 22:46:30.603018 6166 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 22:46:30.603030 6166 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 22:46:30.603045 6166 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 22:46:30.603057 6166 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dee85f35d78a0b30018c179a53dd39cbb1b0edfa9897290cd13fdc9680461239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fffded71eee2859beb8f01894e183602492f5c41373ef0e23ee150159d01ae4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fffded71eee2859beb8f01894e183602492f5c41373ef0e23ee150159d01ae4\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v9gjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:30Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.944559 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://580bc5f5c63d040c31c80d1c34c70711c035b80d8bd34546ba77e2477a938304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:30Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.977492 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:30Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.981929 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.981984 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.982001 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.982026 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:30 crc kubenswrapper[4865]: I0216 22:46:30.982046 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:30Z","lastTransitionTime":"2026-02-16T22:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:46:31 crc kubenswrapper[4865]: I0216 22:46:31.010265 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:31Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:31 crc kubenswrapper[4865]: I0216 22:46:31.033041 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tqmsq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"518e6107-6873-4bd2-86a6-e422763483ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c555b648518de93a1b9ecb1bb8b232aee9016dd148db53b2b4aaf96dd23951c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwt79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tqmsq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:31Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:31 crc kubenswrapper[4865]: I0216 22:46:31.084494 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:31 crc 
kubenswrapper[4865]: I0216 22:46:31.084540 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:31 crc kubenswrapper[4865]: I0216 22:46:31.084549 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:31 crc kubenswrapper[4865]: I0216 22:46:31.084568 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:31 crc kubenswrapper[4865]: I0216 22:46:31.084580 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:31Z","lastTransitionTime":"2026-02-16T22:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:46:31 crc kubenswrapper[4865]: I0216 22:46:31.187123 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:31 crc kubenswrapper[4865]: I0216 22:46:31.187189 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:31 crc kubenswrapper[4865]: I0216 22:46:31.187214 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:31 crc kubenswrapper[4865]: I0216 22:46:31.187247 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:31 crc kubenswrapper[4865]: I0216 22:46:31.187268 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:31Z","lastTransitionTime":"2026-02-16T22:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:46:31 crc kubenswrapper[4865]: I0216 22:46:31.290987 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:31 crc kubenswrapper[4865]: I0216 22:46:31.291068 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:31 crc kubenswrapper[4865]: I0216 22:46:31.291087 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:31 crc kubenswrapper[4865]: I0216 22:46:31.291113 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:31 crc kubenswrapper[4865]: I0216 22:46:31.291131 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:31Z","lastTransitionTime":"2026-02-16T22:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:31 crc kubenswrapper[4865]: I0216 22:46:31.389222 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 23:24:23.726425084 +0000 UTC Feb 16 22:46:31 crc kubenswrapper[4865]: I0216 22:46:31.395160 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:31 crc kubenswrapper[4865]: I0216 22:46:31.395209 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:31 crc kubenswrapper[4865]: I0216 22:46:31.395228 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:31 crc kubenswrapper[4865]: I0216 22:46:31.395254 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:31 crc kubenswrapper[4865]: I0216 22:46:31.395305 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:31Z","lastTransitionTime":"2026-02-16T22:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:31 crc kubenswrapper[4865]: I0216 22:46:31.497622 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:31 crc kubenswrapper[4865]: I0216 22:46:31.497680 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:31 crc kubenswrapper[4865]: I0216 22:46:31.497688 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:31 crc kubenswrapper[4865]: I0216 22:46:31.497703 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:31 crc kubenswrapper[4865]: I0216 22:46:31.497712 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:31Z","lastTransitionTime":"2026-02-16T22:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:31 crc kubenswrapper[4865]: I0216 22:46:31.600000 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:31 crc kubenswrapper[4865]: I0216 22:46:31.600045 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:31 crc kubenswrapper[4865]: I0216 22:46:31.600056 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:31 crc kubenswrapper[4865]: I0216 22:46:31.600074 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:31 crc kubenswrapper[4865]: I0216 22:46:31.600085 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:31Z","lastTransitionTime":"2026-02-16T22:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:31 crc kubenswrapper[4865]: I0216 22:46:31.702462 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:31 crc kubenswrapper[4865]: I0216 22:46:31.702510 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:31 crc kubenswrapper[4865]: I0216 22:46:31.702523 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:31 crc kubenswrapper[4865]: I0216 22:46:31.702544 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:31 crc kubenswrapper[4865]: I0216 22:46:31.702557 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:31Z","lastTransitionTime":"2026-02-16T22:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:31 crc kubenswrapper[4865]: I0216 22:46:31.748695 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v9gjl_2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9/ovnkube-controller/0.log" Feb 16 22:46:31 crc kubenswrapper[4865]: I0216 22:46:31.752482 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" event={"ID":"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9","Type":"ContainerStarted","Data":"0b27cdcf9ef2a0fd1f2d90bebc43dfdeb6c8a74de346c0f9d18731cbb2748e93"} Feb 16 22:46:31 crc kubenswrapper[4865]: I0216 22:46:31.752737 4865 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 22:46:31 crc kubenswrapper[4865]: I0216 22:46:31.771509 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5ed4f9f-4e35-4b99-a523-d9cd6f90b1fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c3c8e1511bfaab96c1e33d3e1d1dc227344cb92c64170dc40eef40341f9282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef4fbb630ee354131544dade7633424757a1a1b885fec2e271da25984dbec3ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f74aeafd4eb42c1fa802e03dd3df19fc25c3e5546e453008c167955d336ca2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resta
rtCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca218b91fbbe3033c021358958b45efb93f3c1875a2d18446d682ad80ff55122\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:31Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:31 crc kubenswrapper[4865]: I0216 22:46:31.792052 4865 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5d9c6929ae494c038ff488c2fee47aad62d31a677069fab576505bc1c56c3ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd10e8581ae8a5ec147d964363cf296c1ea48987e27edfaaf23a4f2cc3b8806c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:31Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:31 crc kubenswrapper[4865]: I0216 22:46:31.805447 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:31 crc kubenswrapper[4865]: I0216 22:46:31.805489 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:31 crc kubenswrapper[4865]: I0216 22:46:31.805500 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:31 crc kubenswrapper[4865]: I0216 22:46:31.805517 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:31 crc kubenswrapper[4865]: I0216 22:46:31.805529 4865 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:31Z","lastTransitionTime":"2026-02-16T22:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:46:31 crc kubenswrapper[4865]: I0216 22:46:31.807148 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p2bwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fc94743-41ce-4311-b0a7-d24aec69e9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afb2eef5907b2fb1f43b59f3b8afb68253d49f3a83f0cdc5cd10df0e45962156\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mntqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p2bwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:31Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:31 crc kubenswrapper[4865]: I0216 22:46:31.823084 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67de42bb03576bab461ec24db86f0fb38229d2b303d174c1892468bcfd20a1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T22:46:31Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:31 crc kubenswrapper[4865]: I0216 22:46:31.841262 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af5ee041-5763-4a28-9d12-7ba21bbb9dbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5377362d269a2fbd24e93ddf0ce2803daadff72dd5b6926424e2d818448eb907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c0be88c4425b343893860e323d0e9c1a661e3cd0cadad6714da67ebadf57bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7sl6f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:31Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:31 crc kubenswrapper[4865]: I0216 22:46:31.862726 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4rgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0091174cc072ed0bc0bfa1a1501b4fa2b801585525dae159b591213c74ddbf5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240fca5a6169674d04d04924dbb1859d9cfad4ebaf3bcb15b685d1dcb52d6879\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://240fca5a6169674d04d04924dbb1859d9cfad4ebaf3bcb15b685d1dcb52d6879\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba2a6ff3b821f9bd20c15af6000c4b04b7382cca4bddba49c3cd3a95bf18661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ba2a6ff3b821f9bd20c15af6000c4b04b7382cca4bddba49c3cd3a95bf18661\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e050e591eac20bf9ccc57f3c4eac82e3a420e9e850ad4057ce69c236a51dd5ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e050e591eac20bf9ccc57f3c4eac82e3a420e9e850ad4057ce69c236a51dd5ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df6f
bcf30225717389e253f415a98dcb258103cf591c6802daeb657018386ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1df6fbcf30225717389e253f415a98dcb258103cf591c6802daeb657018386ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9e249b0c4127dfa42d0aeb4663cec7ecc73cec31a922fc17a0c772cce72c017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9e249b0c4127dfa42d0aeb4663cec7ecc73cec31a922fc17a0c772cce72c017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de26abfeb26481d43117471fa3512ef69ab782712371782639c24c8232786478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de26abfeb26481d43117471fa3512ef69ab782712371782639c24c8232786478\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4rgl\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:31Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:31 crc kubenswrapper[4865]: I0216 22:46:31.879538 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:31Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:31 crc kubenswrapper[4865]: I0216 22:46:31.892753 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rhf5k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec3dce5a-cc80-4a4a-afa1-1ce88585fe7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af23c5919f064a067e1f6ffd9bcfc669c95f789fecc58b6de768c838fff8eeb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42pjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rhf5k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:31Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:31 crc kubenswrapper[4865]: I0216 22:46:31.908301 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:31 crc kubenswrapper[4865]: I0216 22:46:31.908346 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:31 crc kubenswrapper[4865]: I0216 22:46:31.908359 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:31 crc kubenswrapper[4865]: I0216 22:46:31.908376 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:31 crc kubenswrapper[4865]: I0216 22:46:31.908387 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:31Z","lastTransitionTime":"2026-02-16T22:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:31 crc kubenswrapper[4865]: I0216 22:46:31.911939 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f763f86f21e0ae26d6da8694383bf65bf9d2ee2cc3f6331559455f18c1a5ea12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdf0a2e5286d172d17780b0ba829a7b1365b3fcb130069d7497b207041e54d67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1d5f61857de97cd302040f99ba0cf8f6aa30fa344ee9bb5d486222bb59280f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48168de44ee347280252a85da7abba068922ce7558bd01c8e1d11af13f305dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fbb61043059110013736d3d693e6d602b199a4c62907436a4eb88aae192cd51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://decf362b373d3e8cc82caf2afe0405a61ec72f794680345ee6bd4d7acee32cd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b27cdcf9ef2a0fd1f2d90bebc43dfdeb6c8a74de346c0f9d18731cbb2748e93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2474653f61e75b7161c87fdc1b44a9a0081e61d87bef578f9915eeec0fccdf2d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T22:46:30Z\\\",\\\"message\\\":\\\"11] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 22:46:30.602741 6166 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 22:46:30.602790 6166 handler.go:190] 
Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 22:46:30.602810 6166 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 22:46:30.602879 6166 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 22:46:30.602912 6166 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 22:46:30.602934 6166 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 22:46:30.602982 6166 factory.go:656] Stopping watch factory\\\\nI0216 22:46:30.603005 6166 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 22:46:30.603018 6166 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 22:46:30.603030 6166 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 22:46:30.603045 6166 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 22:46:30.603057 6166 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dee85f35d78a0b30018c179a53dd39cbb1b0edfa9897290cd13fdc9680461239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fffded71eee2859beb8f01894e183602492f5c41373ef0e23ee150159d01ae4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fffded71eee2859beb8f01894e183602492f5c41373ef0e23ee150159d01ae4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v9gjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:31Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:31 crc kubenswrapper[4865]: I0216 22:46:31.924091 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8eb70ba-80dc-4449-b629-8f2d3462ac25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9420ab4fc2cfb723f98690b12676378560f2ca512ea2f23ea4108561bcf0a83b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d5f3b449d7f497dd16213f7a376d725ed75adaeb27449fa2005893b85cd8f9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ddcd1ac2d814341fef78f1d3dcb8d8e04d5e8506ca793c023085f0e019fc44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e74a443b63735422bd0f5ad851213f393074c4bda570e41ff102c84aa9ac2929\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e74a443b63735422bd0f5ad851213f393074c4bda570e41ff102c84aa9ac2929\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 22:46:20.218565 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 22:46:20.218864 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 22:46:20.220590 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3068580495/tls.crt::/tmp/serving-cert-3068580495/tls.key\\\\\\\"\\\\nI0216 22:46:20.677081 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 22:46:20.691270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 22:46:20.691373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 22:46:20.698439 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 22:46:20.698472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 22:46:20.713488 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 22:46:20.713510 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 22:46:20.713535 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 22:46:20.713543 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 22:46:20.713547 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 22:46:20.713551 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 22:46:20.713555 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 22:46:20.713558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 22:46:20.715166 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://083e95882332fbcae3b7e3c07a19c05645e560b7c5e92813464a5ca4e5e52a26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e53d3f26f921be607e616b7c8790e3be0605a2b94a3f7be2b62e1beaefb350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3e
53d3f26f921be607e616b7c8790e3be0605a2b94a3f7be2b62e1beaefb350\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:31Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:31 crc kubenswrapper[4865]: I0216 22:46:31.934755 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:31Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:31 crc kubenswrapper[4865]: I0216 22:46:31.945792 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tqmsq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"518e6107-6873-4bd2-86a6-e422763483ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c555b648518de93a1b9ecb1bb8b232aee9016dd148db53b2b4aaf96dd23951c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwt79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tqmsq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:31Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:31 crc kubenswrapper[4865]: I0216 22:46:31.958946 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://580bc5f5c63d040c31c80d1c34c70711c035b80d8bd34546ba77e2477a938304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:31Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:31 crc kubenswrapper[4865]: I0216 22:46:31.968350 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:31Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:31 crc kubenswrapper[4865]: I0216 22:46:31.994461 4865 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 16 22:46:32 crc kubenswrapper[4865]: I0216 22:46:32.010707 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:32 crc kubenswrapper[4865]: I0216 22:46:32.010923 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:32 crc kubenswrapper[4865]: I0216 22:46:32.010979 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:32 crc kubenswrapper[4865]: I0216 22:46:32.011049 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:32 crc kubenswrapper[4865]: I0216 22:46:32.011101 4865 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:32Z","lastTransitionTime":"2026-02-16T22:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:46:32 crc kubenswrapper[4865]: I0216 22:46:32.113742 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:32 crc kubenswrapper[4865]: I0216 22:46:32.113811 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:32 crc kubenswrapper[4865]: I0216 22:46:32.113830 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:32 crc kubenswrapper[4865]: I0216 22:46:32.113858 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:32 crc kubenswrapper[4865]: I0216 22:46:32.113876 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:32Z","lastTransitionTime":"2026-02-16T22:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:32 crc kubenswrapper[4865]: I0216 22:46:32.217101 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:32 crc kubenswrapper[4865]: I0216 22:46:32.217178 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:32 crc kubenswrapper[4865]: I0216 22:46:32.217197 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:32 crc kubenswrapper[4865]: I0216 22:46:32.217226 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:32 crc kubenswrapper[4865]: I0216 22:46:32.217247 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:32Z","lastTransitionTime":"2026-02-16T22:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:32 crc kubenswrapper[4865]: I0216 22:46:32.320889 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:32 crc kubenswrapper[4865]: I0216 22:46:32.321323 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:32 crc kubenswrapper[4865]: I0216 22:46:32.321534 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:32 crc kubenswrapper[4865]: I0216 22:46:32.321739 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:32 crc kubenswrapper[4865]: I0216 22:46:32.321929 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:32Z","lastTransitionTime":"2026-02-16T22:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:46:32 crc kubenswrapper[4865]: I0216 22:46:32.389589 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 16:54:01.617087374 +0000 UTC Feb 16 22:46:32 crc kubenswrapper[4865]: I0216 22:46:32.414010 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 22:46:32 crc kubenswrapper[4865]: I0216 22:46:32.414058 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 22:46:32 crc kubenswrapper[4865]: E0216 22:46:32.414237 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 22:46:32 crc kubenswrapper[4865]: E0216 22:46:32.414419 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 22:46:32 crc kubenswrapper[4865]: I0216 22:46:32.414056 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 22:46:32 crc kubenswrapper[4865]: E0216 22:46:32.414774 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 22:46:32 crc kubenswrapper[4865]: I0216 22:46:32.426401 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:32 crc kubenswrapper[4865]: I0216 22:46:32.426638 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:32 crc kubenswrapper[4865]: I0216 22:46:32.426713 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:32 crc kubenswrapper[4865]: I0216 22:46:32.426787 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:32 crc kubenswrapper[4865]: I0216 22:46:32.426858 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:32Z","lastTransitionTime":"2026-02-16T22:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:32 crc kubenswrapper[4865]: I0216 22:46:32.529984 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:32 crc kubenswrapper[4865]: I0216 22:46:32.530461 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:32 crc kubenswrapper[4865]: I0216 22:46:32.530648 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:32 crc kubenswrapper[4865]: I0216 22:46:32.530861 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:32 crc kubenswrapper[4865]: I0216 22:46:32.531039 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:32Z","lastTransitionTime":"2026-02-16T22:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:32 crc kubenswrapper[4865]: I0216 22:46:32.634484 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:32 crc kubenswrapper[4865]: I0216 22:46:32.634855 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:32 crc kubenswrapper[4865]: I0216 22:46:32.635059 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:32 crc kubenswrapper[4865]: I0216 22:46:32.635212 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:32 crc kubenswrapper[4865]: I0216 22:46:32.635378 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:32Z","lastTransitionTime":"2026-02-16T22:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:32 crc kubenswrapper[4865]: I0216 22:46:32.738729 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:32 crc kubenswrapper[4865]: I0216 22:46:32.738818 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:32 crc kubenswrapper[4865]: I0216 22:46:32.738841 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:32 crc kubenswrapper[4865]: I0216 22:46:32.738876 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:32 crc kubenswrapper[4865]: I0216 22:46:32.738901 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:32Z","lastTransitionTime":"2026-02-16T22:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:32 crc kubenswrapper[4865]: I0216 22:46:32.758414 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v9gjl_2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9/ovnkube-controller/1.log" Feb 16 22:46:32 crc kubenswrapper[4865]: I0216 22:46:32.759681 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v9gjl_2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9/ovnkube-controller/0.log" Feb 16 22:46:32 crc kubenswrapper[4865]: I0216 22:46:32.763158 4865 generic.go:334] "Generic (PLEG): container finished" podID="2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" containerID="0b27cdcf9ef2a0fd1f2d90bebc43dfdeb6c8a74de346c0f9d18731cbb2748e93" exitCode=1 Feb 16 22:46:32 crc kubenswrapper[4865]: I0216 22:46:32.763221 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" event={"ID":"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9","Type":"ContainerDied","Data":"0b27cdcf9ef2a0fd1f2d90bebc43dfdeb6c8a74de346c0f9d18731cbb2748e93"} Feb 16 22:46:32 crc kubenswrapper[4865]: I0216 22:46:32.763291 4865 scope.go:117] "RemoveContainer" containerID="2474653f61e75b7161c87fdc1b44a9a0081e61d87bef578f9915eeec0fccdf2d" Feb 16 22:46:32 crc kubenswrapper[4865]: I0216 22:46:32.764405 4865 scope.go:117] "RemoveContainer" containerID="0b27cdcf9ef2a0fd1f2d90bebc43dfdeb6c8a74de346c0f9d18731cbb2748e93" Feb 16 22:46:32 crc kubenswrapper[4865]: E0216 22:46:32.764647 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-v9gjl_openshift-ovn-kubernetes(2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9)\"" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" podUID="2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" Feb 16 22:46:32 crc kubenswrapper[4865]: I0216 22:46:32.801548 4865 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f763f86f21e0ae26d6da8694383bf65bf9d2ee2cc3f6331559455f18c1a5ea12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdf0a2e5286d172d17780b0ba829a7b1365b3fcb130069d7497b207041e54d67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1d5f61857de97cd302040f99ba0cf8f6aa30fa344ee9bb5d486222bb59280f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48168de44ee347280252a85da7abba068922ce7558bd01c8e1d11af13f305dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fbb61043059110013736d3d693e6d602b199a4c62907436a4eb88aae192cd51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://decf362b373d3e8cc82caf2afe0405a61ec72f794680345ee6bd4d7acee32cd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b27cdcf9ef2a0fd1f2d90bebc43dfdeb6c8a74de346c0f9d18731cbb2748e93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2474653f61e75b7161c87fdc1b44a9a0081e61d87bef578f9915eeec0fccdf2d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T22:46:30Z\\\",\\\"message\\\":\\\"11] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 22:46:30.602741 6166 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 22:46:30.602790 6166 handler.go:190] 
Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 22:46:30.602810 6166 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 22:46:30.602879 6166 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 22:46:30.602912 6166 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 22:46:30.602934 6166 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 22:46:30.602982 6166 factory.go:656] Stopping watch factory\\\\nI0216 22:46:30.603005 6166 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 22:46:30.603018 6166 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 22:46:30.603030 6166 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 22:46:30.603045 6166 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 22:46:30.603057 6166 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b27cdcf9ef2a0fd1f2d90bebc43dfdeb6c8a74de346c0f9d18731cbb2748e93\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T22:46:31Z\\\",\\\"message\\\":\\\"b79af60206a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 22:46:31.878240 6289 services_controller.go:453] Built service openshift-marketplace/redhat-marketplace template LB for network=default: []services.LB{}\\\\nF0216 22:46:31.878297 6289 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has 
stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:31Z is after 2025-08-24T17:21:41Z]\\\\nI0216 22:46:31.878302 6289 services_controller.go:452] Built service openshift-cluster-version/cluster-version-operator per-node LB for network=default: []services.LB{}\\\\nI0216 22:46:31.878304 6289 services_controller.go:454] Service openshift-marketplace/redhat-marketplace for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (te\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"na
me\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dee85f35d78a0b30018c179a53dd39cbb1b0edfa9897290cd13fdc9680461239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/service
account\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fffded71eee2859beb8f01894e183602492f5c41373ef0e23ee150159d01ae4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fffded71eee2859beb8f01894e183602492f5c41373ef0e23ee150159d01ae4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v9gjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:32Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:32 crc kubenswrapper[4865]: I0216 22:46:32.827877 4865 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8eb70ba-80dc-4449-b629-8f2d3462ac25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9420ab4fc2cfb723f98690b12676378560f2ca512ea2f23ea4108561bcf0a83b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"
name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d5f3b449d7f497dd16213f7a376d725ed75adaeb27449fa2005893b85cd8f9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ddcd1ac2d814341fef78f1d3dcb8d8e04d5e8506ca793c023085f0e019fc44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e74a443b63735422bd0f5ad851213f393074c4bda570e41ff102c84aa9ac2929\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"qua
y.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e74a443b63735422bd0f5ad851213f393074c4bda570e41ff102c84aa9ac2929\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 22:46:20.218565 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 22:46:20.218864 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 22:46:20.220590 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3068580495/tls.crt::/tmp/serving-cert-3068580495/tls.key\\\\\\\"\\\\nI0216 22:46:20.677081 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 22:46:20.691270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 22:46:20.691373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 22:46:20.698439 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 22:46:20.698472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 22:46:20.713488 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 22:46:20.713510 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 22:46:20.713535 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 22:46:20.713543 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 22:46:20.713547 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 22:46:20.713551 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 22:46:20.713555 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 22:46:20.713558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 22:46:20.715166 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://083e95882332fbcae3b7e3c07a19c05645e560b7c5e92813464a5ca4e5e52a26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e53d3f26f921be607e616b7c8790e3be0605a2b94a3f
7be2b62e1beaefb350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3e53d3f26f921be607e616b7c8790e3be0605a2b94a3f7be2b62e1beaefb350\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:32Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:32 crc kubenswrapper[4865]: I0216 22:46:32.842100 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:32 crc kubenswrapper[4865]: I0216 22:46:32.842170 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:32 crc kubenswrapper[4865]: I0216 22:46:32.842190 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:32 crc kubenswrapper[4865]: I0216 22:46:32.842217 4865 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeNotReady" Feb 16 22:46:32 crc kubenswrapper[4865]: I0216 22:46:32.842237 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:32Z","lastTransitionTime":"2026-02-16T22:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:46:32 crc kubenswrapper[4865]: I0216 22:46:32.848457 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:32Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:32 crc kubenswrapper[4865]: I0216 22:46:32.864472 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rhf5k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec3dce5a-cc80-4a4a-afa1-1ce88585fe7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af23c5919f064a067e1f6ffd9bcfc669c95f789fecc58b6de768c838fff8eeb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42pjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rhf5k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:32Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:32 crc kubenswrapper[4865]: I0216 22:46:32.884264 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://580bc5f5c63d040c31c80d1c34c70711c035b80d8bd34546ba77e2477a938304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:32Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:32 crc kubenswrapper[4865]: I0216 22:46:32.904234 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:32Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:32 crc kubenswrapper[4865]: I0216 22:46:32.922236 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:32Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:32 crc kubenswrapper[4865]: I0216 22:46:32.942662 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tqmsq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"518e6107-6873-4bd2-86a6-e422763483ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c555b648518de93a1b9ecb1bb8b232aee9016dd148db53b2b4aaf96dd23951c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwt79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tqmsq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:32Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:32 crc kubenswrapper[4865]: I0216 22:46:32.947340 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:32 crc 
kubenswrapper[4865]: I0216 22:46:32.947398 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:32 crc kubenswrapper[4865]: I0216 22:46:32.947433 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:32 crc kubenswrapper[4865]: I0216 22:46:32.947472 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:32 crc kubenswrapper[4865]: I0216 22:46:32.947491 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:32Z","lastTransitionTime":"2026-02-16T22:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:46:32 crc kubenswrapper[4865]: I0216 22:46:32.970133 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5ed4f9f-4e35-4b99-a523-d9cd6f90b1fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c3c8e1511bfaab96c1e33d3e1d1dc227344cb92c64170dc40eef40341f9282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef4fbb630ee354131544dade7633424757a1a1b885fec2e271da25984dbec3ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f74aeafd4eb42c1fa802e03dd3df19fc25c3e5546e453008c167955d336ca2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca218b91fbbe3033c021358958b45efb93f3c1875a2d18446d682ad80ff55122\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:32Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:32 crc kubenswrapper[4865]: I0216 22:46:32.993005 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5d9c6929ae494c038ff488c2fee47aad62d31a677069fab576505bc1c56c3ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd10e8581ae8a5ec147d964363cf296c1ea48987e27edfaaf23a4f2cc3b8806c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:32Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:33 crc kubenswrapper[4865]: I0216 22:46:33.008927 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p2bwl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fc94743-41ce-4311-b0a7-d24aec69e9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afb2eef5907b2fb1f43b59f3b8afb68253d49f3a83f0cdc5cd10df0e45962156\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mntqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p2bwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:33Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:33 crc kubenswrapper[4865]: I0216 22:46:33.027055 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67de42bb03576bab461ec24db86f0fb38229d2b303d174c1892468bcfd20a1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T2
2:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:33Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:33 crc kubenswrapper[4865]: I0216 22:46:33.043123 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af5ee041-5763-4a28-9d12-7ba21bbb9dbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5377362d269a2fbd24e93ddf0ce2803daadff72dd5b6926424e2d818448eb907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c0be88c4425b343893860e323d0e9c1a661e3cd
0cadad6714da67ebadf57bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7sl6f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:33Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:33 crc kubenswrapper[4865]: I0216 22:46:33.050717 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:33 crc kubenswrapper[4865]: I0216 22:46:33.050798 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:33 crc kubenswrapper[4865]: I0216 22:46:33.050824 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:33 crc 
kubenswrapper[4865]: I0216 22:46:33.050856 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:33 crc kubenswrapper[4865]: I0216 22:46:33.050879 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:33Z","lastTransitionTime":"2026-02-16T22:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:46:33 crc kubenswrapper[4865]: I0216 22:46:33.068036 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4rgl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0091174cc072ed0bc0bfa1a1501b4fa2b801585525dae159b591213c74ddbf5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240fca5a6169674d04d04924dbb1859d9cfad4ebaf3bcb15b685d1dcb52d6879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://240fca5a6169674d04d04924dbb1859d9cfad4ebaf3bcb15b685d1dcb52d6879\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba2a6ff3b821f9bd20c15af6000c4b04b7382cca4bddba49c3cd3a95bf18661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ba2a6ff3b821f9bd20c15af6000c4b04b7382cca4bddba49c3cd3a95bf18661\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e050e591eac20bf9ccc57f3c4eac82e3a420e9e850ad4057ce69c236a51dd5ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e050e591eac20bf9ccc57f3c4eac82e3a420e9e850ad4057ce69c236a51dd5ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df6fbcf30225717389e253f415a98dcb258103cf591c6802daeb657018386ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1df6fbcf30225717389e253f415a98dcb258103cf591c6802daeb657018386ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9e249b0c4127dfa42d0aeb4663cec7ecc73cec31a922fc17a0c772cce72c017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9e249b0c4127dfa42d0aeb4663cec7ecc73cec31a922fc17a0c772cce72c017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de26abfeb26481d43117471fa3512ef69ab782712371782639c24c8232786478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de
26abfeb26481d43117471fa3512ef69ab782712371782639c24c8232786478\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4rgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:33Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:33 crc kubenswrapper[4865]: I0216 22:46:33.154721 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:33 crc kubenswrapper[4865]: I0216 22:46:33.154798 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:33 crc kubenswrapper[4865]: I0216 22:46:33.154816 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:33 crc kubenswrapper[4865]: I0216 22:46:33.154844 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:33 crc kubenswrapper[4865]: I0216 22:46:33.154862 4865 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:33Z","lastTransitionTime":"2026-02-16T22:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:46:33 crc kubenswrapper[4865]: I0216 22:46:33.257538 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:33 crc kubenswrapper[4865]: I0216 22:46:33.257687 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:33 crc kubenswrapper[4865]: I0216 22:46:33.257710 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:33 crc kubenswrapper[4865]: I0216 22:46:33.257737 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:33 crc kubenswrapper[4865]: I0216 22:46:33.257755 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:33Z","lastTransitionTime":"2026-02-16T22:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:33 crc kubenswrapper[4865]: I0216 22:46:33.360105 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:33 crc kubenswrapper[4865]: I0216 22:46:33.360167 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:33 crc kubenswrapper[4865]: I0216 22:46:33.360184 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:33 crc kubenswrapper[4865]: I0216 22:46:33.360208 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:33 crc kubenswrapper[4865]: I0216 22:46:33.360226 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:33Z","lastTransitionTime":"2026-02-16T22:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:33 crc kubenswrapper[4865]: I0216 22:46:33.390358 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 14:50:08.820131717 +0000 UTC Feb 16 22:46:33 crc kubenswrapper[4865]: I0216 22:46:33.463835 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:33 crc kubenswrapper[4865]: I0216 22:46:33.463912 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:33 crc kubenswrapper[4865]: I0216 22:46:33.463936 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:33 crc kubenswrapper[4865]: I0216 22:46:33.463966 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:33 crc kubenswrapper[4865]: I0216 22:46:33.463985 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:33Z","lastTransitionTime":"2026-02-16T22:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:33 crc kubenswrapper[4865]: I0216 22:46:33.475108 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" Feb 16 22:46:33 crc kubenswrapper[4865]: I0216 22:46:33.567737 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:33 crc kubenswrapper[4865]: I0216 22:46:33.567815 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:33 crc kubenswrapper[4865]: I0216 22:46:33.567835 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:33 crc kubenswrapper[4865]: I0216 22:46:33.568213 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:33 crc kubenswrapper[4865]: I0216 22:46:33.568518 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:33Z","lastTransitionTime":"2026-02-16T22:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:33 crc kubenswrapper[4865]: I0216 22:46:33.672029 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:33 crc kubenswrapper[4865]: I0216 22:46:33.672112 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:33 crc kubenswrapper[4865]: I0216 22:46:33.672130 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:33 crc kubenswrapper[4865]: I0216 22:46:33.672160 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:33 crc kubenswrapper[4865]: I0216 22:46:33.672181 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:33Z","lastTransitionTime":"2026-02-16T22:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:33 crc kubenswrapper[4865]: I0216 22:46:33.771073 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v9gjl_2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9/ovnkube-controller/1.log" Feb 16 22:46:33 crc kubenswrapper[4865]: I0216 22:46:33.775587 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:33 crc kubenswrapper[4865]: I0216 22:46:33.775648 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:33 crc kubenswrapper[4865]: I0216 22:46:33.775666 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:33 crc kubenswrapper[4865]: I0216 22:46:33.775694 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:33 crc kubenswrapper[4865]: I0216 22:46:33.775714 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:33Z","lastTransitionTime":"2026-02-16T22:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:33 crc kubenswrapper[4865]: I0216 22:46:33.779472 4865 scope.go:117] "RemoveContainer" containerID="0b27cdcf9ef2a0fd1f2d90bebc43dfdeb6c8a74de346c0f9d18731cbb2748e93" Feb 16 22:46:33 crc kubenswrapper[4865]: E0216 22:46:33.779743 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-v9gjl_openshift-ovn-kubernetes(2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9)\"" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" podUID="2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" Feb 16 22:46:33 crc kubenswrapper[4865]: I0216 22:46:33.801940 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8eb70ba-80dc-4449-b629-8f2d3462ac25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9420ab4fc2cfb723f98690b12676378560f2ca512ea2f23ea4108561bcf0a83b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d5f3b449d7f497dd16213f7a376d725ed75adaeb27449fa2005893b85cd8f9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c7ddcd1ac2d814341fef78f1d3dcb8d8e04d5e8506ca793c023085f0e019fc44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e74a443b63735422bd0f5ad851213f393074c4bda570e41ff102c84aa9ac2929\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e74a443b63735422bd0f5ad851213f393074c4bda570e41ff102c84aa9ac2929\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 22:46:20.218565 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 22:46:20.218864 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 22:46:20.220590 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3068580495/tls.crt::/tmp/serving-cert-3068580495/tls.key\\\\\\\"\\\\nI0216 22:46:20.677081 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 22:46:20.691270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 22:46:20.691373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 22:46:20.698439 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 22:46:20.698472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 22:46:20.713488 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 22:46:20.713510 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 22:46:20.713535 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 22:46:20.713543 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 22:46:20.713547 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 22:46:20.713551 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 22:46:20.713555 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 22:46:20.713558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 22:46:20.715166 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://083e95882332fbcae3b7e3c07a19c05645e560b7c5e92813464a5ca4e5e52a26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e53d3f26f921be607e616b7c8790e3be0605a2b94a3f7be2b62e1beaefb350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3e53d3f26f921be607e616b7c8790e3be0605a2b94a3f7be2b62e1beaefb350\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:33Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:33 crc kubenswrapper[4865]: I0216 22:46:33.826428 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:33Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:33 crc kubenswrapper[4865]: I0216 22:46:33.843241 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rhf5k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec3dce5a-cc80-4a4a-afa1-1ce88585fe7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af23c5919f064a067e1f6ffd9bcfc669c95f789fecc58b6de768c838fff8eeb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42pjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rhf5k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:33Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:33 crc kubenswrapper[4865]: I0216 22:46:33.875221 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f763f86f21e0ae26d6da8694383bf65bf9d2ee2cc3f6331559455f18c1a5ea12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdf0a2e5286d172d17780b0ba829a7b1365b3fcb130069d7497b207041e54d67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1d5f61857de97cd302040f99ba0cf8f6aa30fa344ee9bb5d486222bb59280f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48168de44ee347280252a85da7abba068922ce7558bd01c8e1d11af13f305dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fbb61043059110013736d3d693e6d602b199a4c62907436a4eb88aae192cd51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://decf362b373d3e8cc82caf2afe0405a61ec72f794680345ee6bd4d7acee32cd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b27cdcf9ef2a0fd1f2d90bebc43dfdeb6c8a74de346c0f9d18731cbb2748e93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b27cdcf9ef2a0fd1f2d90bebc43dfdeb6c8a74de346c0f9d18731cbb2748e93\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T22:46:31Z\\\",\\\"message\\\":\\\"b79af60206a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 22:46:31.878240 6289 services_controller.go:453] Built service openshift-marketplace/redhat-marketplace template LB for network=default: []services.LB{}\\\\nF0216 22:46:31.878297 6289 ovnkube.go:137] failed to run 
ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:31Z is after 2025-08-24T17:21:41Z]\\\\nI0216 22:46:31.878302 6289 services_controller.go:452] Built service openshift-cluster-version/cluster-version-operator per-node LB for network=default: []services.LB{}\\\\nI0216 22:46:31.878304 6289 services_controller.go:454] Service openshift-marketplace/redhat-marketplace for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (te\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-v9gjl_openshift-ovn-kubernetes(2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dee85f35d78a0b30018c179a53dd39cbb1b0edfa9897290cd13fdc9680461239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fffded71eee2859beb8f01894e183602492f5c41373ef0e23ee150159d01ae4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fffded71eee2859be
b8f01894e183602492f5c41373ef0e23ee150159d01ae4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v9gjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:33Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:33 crc kubenswrapper[4865]: I0216 22:46:33.878870 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:33 crc kubenswrapper[4865]: I0216 22:46:33.878931 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:33 crc kubenswrapper[4865]: I0216 22:46:33.878951 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:33 crc kubenswrapper[4865]: I0216 22:46:33.878982 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:33 crc kubenswrapper[4865]: I0216 22:46:33.879001 4865 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:33Z","lastTransitionTime":"2026-02-16T22:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:46:33 crc kubenswrapper[4865]: I0216 22:46:33.896011 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://580bc5f5c63d040c31c80d1c34c70711c035b80d8bd34546ba77e2477a938304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:33Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:33 crc kubenswrapper[4865]: I0216 22:46:33.915353 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:33Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:33 crc kubenswrapper[4865]: I0216 22:46:33.937176 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:33Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:33 crc kubenswrapper[4865]: I0216 22:46:33.958340 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tqmsq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"518e6107-6873-4bd2-86a6-e422763483ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c555b648518de93a1b9ecb1bb8b232aee9016dd148db53b2b4aaf96dd23951c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwt79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tqmsq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:33Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:33 crc kubenswrapper[4865]: I0216 22:46:33.973617 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5ed4f9f-4e35-4b99-a523-d9cd6f90b1fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c3c8e1511bfaab96c1e33d3e1d1dc227344cb92c64170dc40eef40341f9282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef4fbb630ee354131544dade7633424757a1a1b885fec2e271da25984dbec3ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f74aeafd4eb42c1fa802e03dd3df19fc25c3e5546e453008c167955d336ca2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca218b91fbbe3033c021358958b45efb93f3c1875a2d18446d682ad80ff55122\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:33Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:33 crc kubenswrapper[4865]: I0216 22:46:33.981774 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:33 crc kubenswrapper[4865]: I0216 22:46:33.981812 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:33 crc kubenswrapper[4865]: I0216 22:46:33.981823 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:33 crc kubenswrapper[4865]: I0216 22:46:33.981840 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:33 crc kubenswrapper[4865]: I0216 22:46:33.981854 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:33Z","lastTransitionTime":"2026-02-16T22:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:46:33 crc kubenswrapper[4865]: I0216 22:46:33.995496 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5d9c6929ae494c038ff488c2fee47aad62d31a677069fab576505bc1c56c3ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://cd10e8581ae8a5ec147d964363cf296c1ea48987e27edfaaf23a4f2cc3b8806c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:33Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.013402 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p2bwl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fc94743-41ce-4311-b0a7-d24aec69e9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afb2eef5907b2fb1f43b59f3b8afb68253d49f3a83f0cdc5cd10df0e45962156\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mntqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p2bwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:34Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.027679 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67de42bb03576bab461ec24db86f0fb38229d2b303d174c1892468bcfd20a1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T2
2:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:34Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.044457 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af5ee041-5763-4a28-9d12-7ba21bbb9dbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5377362d269a2fbd24e93ddf0ce2803daadff72dd5b6926424e2d818448eb907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c0be88c4425b343893860e323d0e9c1a661e3cd
0cadad6714da67ebadf57bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7sl6f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:34Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.071326 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4rgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0091174cc072ed0bc0bfa1a1501b4fa2b801585525dae159b591213c74ddbf5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240fca5a6169674d04d04924dbb1859d9cfad4ebaf3bcb15b685d1dcb52d6879\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://240fca5a6169674d04d04924dbb1859d9cfad4ebaf3bcb15b685d1dcb52d6879\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba2a6ff3b821f9bd20c15af6000c4b04b7382cca4bddba49c3cd3a95bf18661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ba2a6ff3b821f9bd20c15af6000c4b04b7382cca4bddba49c3cd3a95bf18661\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e050e591eac20bf9ccc57f3c4eac82e3a420e9e850ad4057ce69c236a51dd5ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e050e591eac20bf9ccc57f3c4eac82e3a420e9e850ad4057ce69c236a51dd5ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df6f
bcf30225717389e253f415a98dcb258103cf591c6802daeb657018386ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1df6fbcf30225717389e253f415a98dcb258103cf591c6802daeb657018386ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9e249b0c4127dfa42d0aeb4663cec7ecc73cec31a922fc17a0c772cce72c017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9e249b0c4127dfa42d0aeb4663cec7ecc73cec31a922fc17a0c772cce72c017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de26abfeb26481d43117471fa3512ef69ab782712371782639c24c8232786478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de26abfeb26481d43117471fa3512ef69ab782712371782639c24c8232786478\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4rgl\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:34Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.089729 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.089777 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.089788 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.089807 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.089819 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:34Z","lastTransitionTime":"2026-02-16T22:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.093512 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-shf2n"] Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.093981 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-shf2n" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.098191 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.098470 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.114897 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:34Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.128225 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rhf5k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec3dce5a-cc80-4a4a-afa1-1ce88585fe7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af23c5919f064a067e1f6ffd9bcfc669c95f789fecc58b6de768c838fff8eeb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42pjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rhf5k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:34Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.147098 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f763f86f21e0ae26d6da8694383bf65bf9d2ee2cc3f6331559455f18c1a5ea12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdf0a2e5286d172d17780b0ba829a7b1365b3fcb130069d7497b207041e54d67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1d5f61857de97cd302040f99ba0cf8f6aa30fa344ee9bb5d486222bb59280f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48168de44ee347280252a85da7abba068922ce7558bd01c8e1d11af13f305dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fbb61043059110013736d3d693e6d602b199a4c62907436a4eb88aae192cd51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://decf362b373d3e8cc82caf2afe0405a61ec72f794680345ee6bd4d7acee32cd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b27cdcf9ef2a0fd1f2d90bebc43dfdeb6c8a74de346c0f9d18731cbb2748e93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b27cdcf9ef2a0fd1f2d90bebc43dfdeb6c8a74de346c0f9d18731cbb2748e93\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T22:46:31Z\\\",\\\"message\\\":\\\"b79af60206a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 22:46:31.878240 6289 services_controller.go:453] Built service openshift-marketplace/redhat-marketplace template LB for network=default: []services.LB{}\\\\nF0216 22:46:31.878297 6289 ovnkube.go:137] failed to run 
ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:31Z is after 2025-08-24T17:21:41Z]\\\\nI0216 22:46:31.878302 6289 services_controller.go:452] Built service openshift-cluster-version/cluster-version-operator per-node LB for network=default: []services.LB{}\\\\nI0216 22:46:31.878304 6289 services_controller.go:454] Service openshift-marketplace/redhat-marketplace for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (te\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-v9gjl_openshift-ovn-kubernetes(2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dee85f35d78a0b30018c179a53dd39cbb1b0edfa9897290cd13fdc9680461239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fffded71eee2859beb8f01894e183602492f5c41373ef0e23ee150159d01ae4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fffded71eee2859be
b8f01894e183602492f5c41373ef0e23ee150159d01ae4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v9gjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:34Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.157921 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvzm5\" (UniqueName: \"kubernetes.io/projected/d979a250-e586-4f45-b78e-ce99dbdbe9a4-kube-api-access-zvzm5\") pod \"ovnkube-control-plane-749d76644c-shf2n\" (UID: \"d979a250-e586-4f45-b78e-ce99dbdbe9a4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-shf2n" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.158037 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d979a250-e586-4f45-b78e-ce99dbdbe9a4-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-shf2n\" (UID: \"d979a250-e586-4f45-b78e-ce99dbdbe9a4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-shf2n" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.158101 4865 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d979a250-e586-4f45-b78e-ce99dbdbe9a4-env-overrides\") pod \"ovnkube-control-plane-749d76644c-shf2n\" (UID: \"d979a250-e586-4f45-b78e-ce99dbdbe9a4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-shf2n" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.158175 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d979a250-e586-4f45-b78e-ce99dbdbe9a4-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-shf2n\" (UID: \"d979a250-e586-4f45-b78e-ce99dbdbe9a4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-shf2n" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.161036 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8eb70ba-80dc-4449-b629-8f2d3462ac25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9420ab4fc2cfb723f98690b12676378560f2ca512ea2f23ea4108561bcf0a83b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d5f3b449d7f497dd16213f7a376d725ed75adaeb27449fa2005893b85cd8f9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c7ddcd1ac2d814341fef78f1d3dcb8d8e04d5e8506ca793c023085f0e019fc44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e74a443b63735422bd0f5ad851213f393074c4bda570e41ff102c84aa9ac2929\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e74a443b63735422bd0f5ad851213f393074c4bda570e41ff102c84aa9ac2929\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 22:46:20.218565 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 22:46:20.218864 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 22:46:20.220590 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3068580495/tls.crt::/tmp/serving-cert-3068580495/tls.key\\\\\\\"\\\\nI0216 22:46:20.677081 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 22:46:20.691270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 22:46:20.691373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 22:46:20.698439 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 22:46:20.698472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 22:46:20.713488 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 22:46:20.713510 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 22:46:20.713535 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 22:46:20.713543 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 22:46:20.713547 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 22:46:20.713551 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 22:46:20.713555 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 22:46:20.713558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 22:46:20.715166 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://083e95882332fbcae3b7e3c07a19c05645e560b7c5e92813464a5ca4e5e52a26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e53d3f26f921be607e616b7c8790e3be0605a2b94a3f7be2b62e1beaefb350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3e53d3f26f921be607e616b7c8790e3be0605a2b94a3f7be2b62e1beaefb350\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:34Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.179658 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:34Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.192551 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:34Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.193253 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.193349 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.193371 4865 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.193401 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.193418 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:34Z","lastTransitionTime":"2026-02-16T22:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.212023 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tqmsq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"518e6107-6873-4bd2-86a6-e422763483ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c555b648518de93a1b9ecb1bb8b232aee9016dd148
db53b2b4aaf96dd23951c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernet
es.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwt79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tqmsq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:34Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.229668 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-shf2n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d979a250-e586-4f45-b78e-ce99dbdbe9a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-shf2n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:34Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.246436 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://580bc5f5c63d040c31c80d1c34c70711c035b80d8bd34546ba77e2477a938304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"st
artedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:34Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.259200 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvzm5\" (UniqueName: \"kubernetes.io/projected/d979a250-e586-4f45-b78e-ce99dbdbe9a4-kube-api-access-zvzm5\") pod \"ovnkube-control-plane-749d76644c-shf2n\" (UID: \"d979a250-e586-4f45-b78e-ce99dbdbe9a4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-shf2n" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.259313 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d979a250-e586-4f45-b78e-ce99dbdbe9a4-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-shf2n\" (UID: \"d979a250-e586-4f45-b78e-ce99dbdbe9a4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-shf2n" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.259370 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d979a250-e586-4f45-b78e-ce99dbdbe9a4-env-overrides\") 
pod \"ovnkube-control-plane-749d76644c-shf2n\" (UID: \"d979a250-e586-4f45-b78e-ce99dbdbe9a4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-shf2n" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.259404 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d979a250-e586-4f45-b78e-ce99dbdbe9a4-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-shf2n\" (UID: \"d979a250-e586-4f45-b78e-ce99dbdbe9a4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-shf2n" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.265914 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p2bwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fc94743-41ce-4311-b0a7-d24aec69e9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afb2eef5907b2fb1f43b59f3b8afb68253d49f3a83f0cdc5cd10df0e45962156\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a
45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mntqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p2bwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:34Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.266633 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d979a250-e586-4f45-b78e-ce99dbdbe9a4-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-shf2n\" (UID: \"d979a250-e586-4f45-b78e-ce99dbdbe9a4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-shf2n" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.268072 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/d979a250-e586-4f45-b78e-ce99dbdbe9a4-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-shf2n\" (UID: \"d979a250-e586-4f45-b78e-ce99dbdbe9a4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-shf2n" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.268411 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d979a250-e586-4f45-b78e-ce99dbdbe9a4-env-overrides\") pod \"ovnkube-control-plane-749d76644c-shf2n\" (UID: \"d979a250-e586-4f45-b78e-ce99dbdbe9a4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-shf2n" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.287821 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5ed4f9f-4e35-4b99-a523-d9cd6f90b1fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c3c8e1511bfaab96c1e33d3e1d1dc227344cb92c64170dc40eef40341f9282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8805
1c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef4fbb630ee354131544dade7633424757a1a1b885fec2e271da25984dbec3ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f74aeafd4eb42c1fa802e03dd3df19fc25c3e5546e453008c167955d336ca2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca218b91fbbe3033c021358958b45efb93f3c1875a2d18446d682ad80ff55122\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:34Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.295844 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvzm5\" (UniqueName: 
\"kubernetes.io/projected/d979a250-e586-4f45-b78e-ce99dbdbe9a4-kube-api-access-zvzm5\") pod \"ovnkube-control-plane-749d76644c-shf2n\" (UID: \"d979a250-e586-4f45-b78e-ce99dbdbe9a4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-shf2n" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.297022 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.297060 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.297072 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.297090 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.297103 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:34Z","lastTransitionTime":"2026-02-16T22:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.309055 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5d9c6929ae494c038ff488c2fee47aad62d31a677069fab576505bc1c56c3ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd10e8581ae8a5ec147d964363cf296c1ea48987e27edfaaf23a4f2cc3b8806c\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:34Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.325062 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4rgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0091174cc072ed0bc0bfa1a1501b4fa2b801585525dae159b591213c74ddbf5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240fca5a6169674d04d04924dbb1859d9cfad4ebaf3bcb15b685d1dcb52d6879\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://240fca5a6169674d04d04924dbb1859d9cfad4ebaf3bcb15b685d1dcb52d6879\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba2a6ff3b821f9bd20c15af6000c4b04b7382cca4bddba49c3cd3a95bf18661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ba2a6ff3b821f9bd20c15af6000c4b04b7382cca4bddba49c3cd3a95bf18661\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e050e591eac20bf9ccc57f3c4eac82e3a420e9e850ad4057ce69c236a51dd5ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e050e591eac20bf9ccc57f3c4eac82e3a420e9e850ad4057ce69c236a51dd5ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df6f
bcf30225717389e253f415a98dcb258103cf591c6802daeb657018386ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1df6fbcf30225717389e253f415a98dcb258103cf591c6802daeb657018386ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9e249b0c4127dfa42d0aeb4663cec7ecc73cec31a922fc17a0c772cce72c017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9e249b0c4127dfa42d0aeb4663cec7ecc73cec31a922fc17a0c772cce72c017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de26abfeb26481d43117471fa3512ef69ab782712371782639c24c8232786478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de26abfeb26481d43117471fa3512ef69ab782712371782639c24c8232786478\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4rgl\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:34Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.340162 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67de42bb03576bab461ec24db86f0fb38229d2b303d174c1892468bcfd20a1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:34Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.357036 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af5ee041-5763-4a28-9d12-7ba21bbb9dbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5377362d269a2fbd24e93ddf0ce2803daadff72dd5b6926424e2d818448eb907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08a
af09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c0be88c4425b343893860e323d0e9c1a661e3cd0cadad6714da67ebadf57bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7sl6f\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:34Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.391435 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 10:00:02.693521348 +0000 UTC Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.400191 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.400268 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.400318 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.400345 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.400363 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:34Z","lastTransitionTime":"2026-02-16T22:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.412391 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-shf2n" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.413806 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.413863 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.413863 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 22:46:34 crc kubenswrapper[4865]: E0216 22:46:34.413995 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 22:46:34 crc kubenswrapper[4865]: E0216 22:46:34.414769 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 22:46:34 crc kubenswrapper[4865]: E0216 22:46:34.414930 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 22:46:34 crc kubenswrapper[4865]: W0216 22:46:34.444480 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd979a250_e586_4f45_b78e_ce99dbdbe9a4.slice/crio-06f818d13189d312af225efe5c0f3eccd9e714e76981747da2ada3004229940e WatchSource:0}: Error finding container 06f818d13189d312af225efe5c0f3eccd9e714e76981747da2ada3004229940e: Status 404 returned error can't find the container with id 06f818d13189d312af225efe5c0f3eccd9e714e76981747da2ada3004229940e Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.504161 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.504224 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.504242 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.504268 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.504318 4865 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:34Z","lastTransitionTime":"2026-02-16T22:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.607835 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.607900 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.607915 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.607939 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.607959 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:34Z","lastTransitionTime":"2026-02-16T22:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.710946 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.711094 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.711121 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.711189 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.711214 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:34Z","lastTransitionTime":"2026-02-16T22:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.783745 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-shf2n" event={"ID":"d979a250-e586-4f45-b78e-ce99dbdbe9a4","Type":"ContainerStarted","Data":"7c13351b5aadf20e7142d197d9ae8333bff29ca6c3f7668c93f0dd510b52eb51"} Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.783876 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-shf2n" event={"ID":"d979a250-e586-4f45-b78e-ce99dbdbe9a4","Type":"ContainerStarted","Data":"06f818d13189d312af225efe5c0f3eccd9e714e76981747da2ada3004229940e"} Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.815305 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.815366 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.815384 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.815452 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.815474 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:34Z","lastTransitionTime":"2026-02-16T22:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.827902 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-ggbcr"] Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.828492 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ggbcr" Feb 16 22:46:34 crc kubenswrapper[4865]: E0216 22:46:34.828565 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ggbcr" podUID="0e0ca52e-7cb6-4d90-8d0b-4124cce13447" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.853766 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:34Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.866480 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0e0ca52e-7cb6-4d90-8d0b-4124cce13447-metrics-certs\") pod \"network-metrics-daemon-ggbcr\" (UID: \"0e0ca52e-7cb6-4d90-8d0b-4124cce13447\") " pod="openshift-multus/network-metrics-daemon-ggbcr" Feb 16 22:46:34 crc 
kubenswrapper[4865]: I0216 22:46:34.866667 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkt54\" (UniqueName: \"kubernetes.io/projected/0e0ca52e-7cb6-4d90-8d0b-4124cce13447-kube-api-access-fkt54\") pod \"network-metrics-daemon-ggbcr\" (UID: \"0e0ca52e-7cb6-4d90-8d0b-4124cce13447\") " pod="openshift-multus/network-metrics-daemon-ggbcr" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.873873 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tqmsq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"518e6107-6873-4bd2-86a6-e422763483ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c555b648518de93a1b9ecb1bb8b232aee9016dd148db53b2b4aaf96dd23951c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\
\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwt79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTim
e\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tqmsq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:34Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.888581 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-shf2n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d979a250-e586-4f45-b78e-ce99dbdbe9a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-shf2n\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:34Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.902993 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ggbcr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e0ca52e-7cb6-4d90-8d0b-4124cce13447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt54\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt54\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ggbcr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:34Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:34 crc 
kubenswrapper[4865]: I0216 22:46:34.924904 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://580bc5f5c63d040c31c80d1c34c70711c035b80d8bd34546ba77e2477a938304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:34Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.927995 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.928605 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.928657 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.928680 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.928693 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:34Z","lastTransitionTime":"2026-02-16T22:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.944884 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:34Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.968427 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0e0ca52e-7cb6-4d90-8d0b-4124cce13447-metrics-certs\") pod \"network-metrics-daemon-ggbcr\" (UID: \"0e0ca52e-7cb6-4d90-8d0b-4124cce13447\") " pod="openshift-multus/network-metrics-daemon-ggbcr" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.968557 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkt54\" (UniqueName: \"kubernetes.io/projected/0e0ca52e-7cb6-4d90-8d0b-4124cce13447-kube-api-access-fkt54\") pod \"network-metrics-daemon-ggbcr\" (UID: \"0e0ca52e-7cb6-4d90-8d0b-4124cce13447\") " pod="openshift-multus/network-metrics-daemon-ggbcr" Feb 16 22:46:34 crc kubenswrapper[4865]: E0216 22:46:34.968601 4865 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 22:46:34 crc kubenswrapper[4865]: E0216 
22:46:34.968676 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0e0ca52e-7cb6-4d90-8d0b-4124cce13447-metrics-certs podName:0e0ca52e-7cb6-4d90-8d0b-4124cce13447 nodeName:}" failed. No retries permitted until 2026-02-16 22:46:35.46865572 +0000 UTC m=+35.792362691 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0e0ca52e-7cb6-4d90-8d0b-4124cce13447-metrics-certs") pod "network-metrics-daemon-ggbcr" (UID: "0e0ca52e-7cb6-4d90-8d0b-4124cce13447") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.970438 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5ed4f9f-4e35-4b99-a523-d9cd6f90b1fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c3c8e1511bfaab96c1e33d3e1d1dc227344cb92c64170dc40eef40341f9282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef4fbb630ee354131544dade7633424757a1a1b885fec2e271da25984dbec3ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f74aeafd4eb42c1fa802e03dd3df19fc25c3e5546e453008c167955d336ca2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca218b91fbbe3033c021358958b45efb93f3c1875a2d18446d682ad80ff55122\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:34Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.989799 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5d9c6929ae494c038ff488c2fee47aad62d31a677069fab576505bc1c56c3ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd10e8581ae8a5ec147d964363cf296c1ea48987e27edfaaf23a4f2cc3b8806c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:34Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:34 crc kubenswrapper[4865]: I0216 22:46:34.995485 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkt54\" (UniqueName: \"kubernetes.io/projected/0e0ca52e-7cb6-4d90-8d0b-4124cce13447-kube-api-access-fkt54\") pod \"network-metrics-daemon-ggbcr\" (UID: \"0e0ca52e-7cb6-4d90-8d0b-4124cce13447\") " pod="openshift-multus/network-metrics-daemon-ggbcr" Feb 16 22:46:35 crc kubenswrapper[4865]: I0216 22:46:35.003953 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p2bwl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fc94743-41ce-4311-b0a7-d24aec69e9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afb2eef5907b2fb1f43b59f3b8afb68253d49f3a83f0cdc5cd10df0e45962156\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mntqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p2bwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:35Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:35 crc kubenswrapper[4865]: I0216 22:46:35.019600 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67de42bb03576bab461ec24db86f0fb38229d2b303d174c1892468bcfd20a1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T2
2:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:35Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:35 crc kubenswrapper[4865]: I0216 22:46:35.031720 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:35 crc kubenswrapper[4865]: I0216 22:46:35.031890 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:35 crc kubenswrapper[4865]: I0216 22:46:35.032020 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:35 crc kubenswrapper[4865]: I0216 22:46:35.032221 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:35 crc kubenswrapper[4865]: I0216 22:46:35.032342 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:35Z","lastTransitionTime":"2026-02-16T22:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:35 crc kubenswrapper[4865]: I0216 22:46:35.036255 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af5ee041-5763-4a28-9d12-7ba21bbb9dbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5377362d269a2fbd24e93ddf0ce2803daadff72dd5b6926424e2d818448eb907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c0be88c4425b343893860e323d0e9c1a661e3cd0cadad6714da67ebadf57bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7sl6f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:35Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:35 crc kubenswrapper[4865]: I0216 22:46:35.061744 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4rgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0091174cc072ed0bc0bfa1a1501b4fa2b801585525dae159b591213c74ddbf5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240fca5a6169674d04d04924dbb1859d9cfad4ebaf3bcb15b685d1dcb52d6879\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://240fca5a6169674d04d04924dbb1859d9cfad4ebaf3bcb15b685d1dcb52d6879\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba2a6ff3b821f9bd20c15af6000c4b04b7382cca4bddba49c3cd3a95bf18661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ba2a6ff3b821f9bd20c15af6000c4b04b7382cca4bddba49c3cd3a95bf18661\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e050e591eac20bf9ccc57f3c4eac82e3a420e9e850ad4057ce69c236a51dd5ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e050e591eac20bf9ccc57f3c4eac82e3a420e9e850ad4057ce69c236a51dd5ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df6f
bcf30225717389e253f415a98dcb258103cf591c6802daeb657018386ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1df6fbcf30225717389e253f415a98dcb258103cf591c6802daeb657018386ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9e249b0c4127dfa42d0aeb4663cec7ecc73cec31a922fc17a0c772cce72c017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9e249b0c4127dfa42d0aeb4663cec7ecc73cec31a922fc17a0c772cce72c017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de26abfeb26481d43117471fa3512ef69ab782712371782639c24c8232786478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de26abfeb26481d43117471fa3512ef69ab782712371782639c24c8232786478\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4rgl\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:35Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:35 crc kubenswrapper[4865]: I0216 22:46:35.082882 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:35Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:35 crc kubenswrapper[4865]: I0216 22:46:35.098993 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rhf5k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec3dce5a-cc80-4a4a-afa1-1ce88585fe7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af23c5919f064a067e1f6ffd9bcfc669c95f789fecc58b6de768c838fff8eeb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42pjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rhf5k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:35Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:35 crc kubenswrapper[4865]: I0216 22:46:35.131688 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f763f86f21e0ae26d6da8694383bf65bf9d2ee2cc3f6331559455f18c1a5ea12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdf0a2e5286d172d17780b0ba829a7b1365b3fcb130069d7497b207041e54d67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1d5f61857de97cd302040f99ba0cf8f6aa30fa344ee9bb5d486222bb59280f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48168de44ee347280252a85da7abba068922ce7558bd01c8e1d11af13f305dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fbb61043059110013736d3d693e6d602b199a4c62907436a4eb88aae192cd51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://decf362b373d3e8cc82caf2afe0405a61ec72f794680345ee6bd4d7acee32cd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b27cdcf9ef2a0fd1f2d90bebc43dfdeb6c8a74de346c0f9d18731cbb2748e93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b27cdcf9ef2a0fd1f2d90bebc43dfdeb6c8a74de346c0f9d18731cbb2748e93\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T22:46:31Z\\\",\\\"message\\\":\\\"b79af60206a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 22:46:31.878240 6289 services_controller.go:453] Built service openshift-marketplace/redhat-marketplace template LB for network=default: []services.LB{}\\\\nF0216 22:46:31.878297 6289 ovnkube.go:137] failed to run 
ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:31Z is after 2025-08-24T17:21:41Z]\\\\nI0216 22:46:31.878302 6289 services_controller.go:452] Built service openshift-cluster-version/cluster-version-operator per-node LB for network=default: []services.LB{}\\\\nI0216 22:46:31.878304 6289 services_controller.go:454] Service openshift-marketplace/redhat-marketplace for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (te\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-v9gjl_openshift-ovn-kubernetes(2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dee85f35d78a0b30018c179a53dd39cbb1b0edfa9897290cd13fdc9680461239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fffded71eee2859beb8f01894e183602492f5c41373ef0e23ee150159d01ae4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fffded71eee2859be
b8f01894e183602492f5c41373ef0e23ee150159d01ae4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v9gjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:35Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:35 crc kubenswrapper[4865]: I0216 22:46:35.135676 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:35 crc kubenswrapper[4865]: I0216 22:46:35.135748 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:35 crc kubenswrapper[4865]: I0216 22:46:35.135772 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:35 crc kubenswrapper[4865]: I0216 22:46:35.135802 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:35 crc kubenswrapper[4865]: I0216 22:46:35.135826 4865 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:35Z","lastTransitionTime":"2026-02-16T22:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:46:35 crc kubenswrapper[4865]: I0216 22:46:35.153572 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8eb70ba-80dc-4449-b629-8f2d3462ac25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9420ab4fc2cfb723f98690b12676378560f2ca512ea2f23ea4108561bcf0a83b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d5f3b449d7f497dd16213f7a376d725ed75adaeb27449fa2005893b85cd8f9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c7ddcd1ac2d814341fef78f1d3dcb8d8e04d5e8506ca793c023085f0e019fc44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e74a443b63735422bd0f5ad851213f393074c4bda570e41ff102c84aa9ac2929\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e74a443b63735422bd0f5ad851213f393074c4bda570e41ff102c84aa9ac2929\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 22:46:20.218565 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 22:46:20.218864 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 22:46:20.220590 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3068580495/tls.crt::/tmp/serving-cert-3068580495/tls.key\\\\\\\"\\\\nI0216 22:46:20.677081 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 22:46:20.691270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 22:46:20.691373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 22:46:20.698439 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 22:46:20.698472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 22:46:20.713488 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 22:46:20.713510 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 22:46:20.713535 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 22:46:20.713543 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 22:46:20.713547 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 22:46:20.713551 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 22:46:20.713555 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 22:46:20.713558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 22:46:20.715166 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://083e95882332fbcae3b7e3c07a19c05645e560b7c5e92813464a5ca4e5e52a26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e53d3f26f921be607e616b7c8790e3be0605a2b94a3f7be2b62e1beaefb350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3e53d3f26f921be607e616b7c8790e3be0605a2b94a3f7be2b62e1beaefb350\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:35Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:35 crc kubenswrapper[4865]: I0216 22:46:35.238659 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:35 crc kubenswrapper[4865]: I0216 22:46:35.238710 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:35 crc kubenswrapper[4865]: I0216 22:46:35.238725 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:35 crc kubenswrapper[4865]: I0216 22:46:35.238746 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:35 crc kubenswrapper[4865]: I0216 22:46:35.238759 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:35Z","lastTransitionTime":"2026-02-16T22:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:35 crc kubenswrapper[4865]: I0216 22:46:35.341374 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:35 crc kubenswrapper[4865]: I0216 22:46:35.341423 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:35 crc kubenswrapper[4865]: I0216 22:46:35.341437 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:35 crc kubenswrapper[4865]: I0216 22:46:35.341457 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:35 crc kubenswrapper[4865]: I0216 22:46:35.341474 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:35Z","lastTransitionTime":"2026-02-16T22:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:35 crc kubenswrapper[4865]: I0216 22:46:35.392696 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 06:42:49.741780294 +0000 UTC Feb 16 22:46:35 crc kubenswrapper[4865]: I0216 22:46:35.444039 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:35 crc kubenswrapper[4865]: I0216 22:46:35.444086 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:35 crc kubenswrapper[4865]: I0216 22:46:35.444100 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:35 crc kubenswrapper[4865]: I0216 22:46:35.444120 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:35 crc kubenswrapper[4865]: I0216 22:46:35.444134 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:35Z","lastTransitionTime":"2026-02-16T22:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:35 crc kubenswrapper[4865]: I0216 22:46:35.473905 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0e0ca52e-7cb6-4d90-8d0b-4124cce13447-metrics-certs\") pod \"network-metrics-daemon-ggbcr\" (UID: \"0e0ca52e-7cb6-4d90-8d0b-4124cce13447\") " pod="openshift-multus/network-metrics-daemon-ggbcr" Feb 16 22:46:35 crc kubenswrapper[4865]: E0216 22:46:35.474151 4865 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 22:46:35 crc kubenswrapper[4865]: E0216 22:46:35.474328 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0e0ca52e-7cb6-4d90-8d0b-4124cce13447-metrics-certs podName:0e0ca52e-7cb6-4d90-8d0b-4124cce13447 nodeName:}" failed. No retries permitted until 2026-02-16 22:46:36.4743065 +0000 UTC m=+36.798013471 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0e0ca52e-7cb6-4d90-8d0b-4124cce13447-metrics-certs") pod "network-metrics-daemon-ggbcr" (UID: "0e0ca52e-7cb6-4d90-8d0b-4124cce13447") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 22:46:35 crc kubenswrapper[4865]: I0216 22:46:35.546669 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:35 crc kubenswrapper[4865]: I0216 22:46:35.546989 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:35 crc kubenswrapper[4865]: I0216 22:46:35.547221 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:35 crc kubenswrapper[4865]: I0216 22:46:35.547499 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:35 crc kubenswrapper[4865]: I0216 22:46:35.547754 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:35Z","lastTransitionTime":"2026-02-16T22:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:35 crc kubenswrapper[4865]: I0216 22:46:35.651132 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:35 crc kubenswrapper[4865]: I0216 22:46:35.651435 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:35 crc kubenswrapper[4865]: I0216 22:46:35.651594 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:35 crc kubenswrapper[4865]: I0216 22:46:35.651841 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:35 crc kubenswrapper[4865]: I0216 22:46:35.651999 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:35Z","lastTransitionTime":"2026-02-16T22:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:35 crc kubenswrapper[4865]: I0216 22:46:35.756502 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:35 crc kubenswrapper[4865]: I0216 22:46:35.756565 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:35 crc kubenswrapper[4865]: I0216 22:46:35.756583 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:35 crc kubenswrapper[4865]: I0216 22:46:35.756614 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:35 crc kubenswrapper[4865]: I0216 22:46:35.756633 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:35Z","lastTransitionTime":"2026-02-16T22:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:35 crc kubenswrapper[4865]: I0216 22:46:35.790735 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-shf2n" event={"ID":"d979a250-e586-4f45-b78e-ce99dbdbe9a4","Type":"ContainerStarted","Data":"df29cc453707021c661a9f53e979fcab4c05a00e4dcef9f85a727168f1e78ed1"} Feb 16 22:46:35 crc kubenswrapper[4865]: I0216 22:46:35.814570 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8eb70ba-80dc-4449-b629-8f2d3462ac25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9420ab4fc2cfb723f98690b12676378560f2ca512ea2f23ea4108561bcf0a83b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d5f3b449d7f497dd16213f7a376d725ed75adaeb27449fa2005893b85cd8f9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c7ddcd1ac2d814341fef78f1d3dcb8d8e04d5e8506ca793c023085f0e019fc44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e74a443b63735422bd0f5ad851213f393074c4bda570e41ff102c84aa9ac2929\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e74a443b63735422bd0f5ad851213f393074c4bda570e41ff102c84aa9ac2929\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 22:46:20.218565 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 22:46:20.218864 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 22:46:20.220590 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3068580495/tls.crt::/tmp/serving-cert-3068580495/tls.key\\\\\\\"\\\\nI0216 22:46:20.677081 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 22:46:20.691270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 22:46:20.691373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 22:46:20.698439 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 22:46:20.698472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 22:46:20.713488 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 22:46:20.713510 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 22:46:20.713535 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 22:46:20.713543 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 22:46:20.713547 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 22:46:20.713551 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 22:46:20.713555 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 22:46:20.713558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 22:46:20.715166 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://083e95882332fbcae3b7e3c07a19c05645e560b7c5e92813464a5ca4e5e52a26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e53d3f26f921be607e616b7c8790e3be0605a2b94a3f7be2b62e1beaefb350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3e53d3f26f921be607e616b7c8790e3be0605a2b94a3f7be2b62e1beaefb350\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:35Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:35 crc kubenswrapper[4865]: I0216 22:46:35.835984 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:35Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:35 crc kubenswrapper[4865]: I0216 22:46:35.852650 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rhf5k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec3dce5a-cc80-4a4a-afa1-1ce88585fe7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af23c5919f064a067e1f6ffd9bcfc669c95f789fecc58b6de768c838fff8eeb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42pjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rhf5k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:35Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:35 crc kubenswrapper[4865]: I0216 22:46:35.860091 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:35 crc kubenswrapper[4865]: I0216 22:46:35.860147 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:35 crc kubenswrapper[4865]: I0216 22:46:35.860167 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:35 crc kubenswrapper[4865]: I0216 22:46:35.860193 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:35 crc kubenswrapper[4865]: I0216 22:46:35.860210 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:35Z","lastTransitionTime":"2026-02-16T22:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:35 crc kubenswrapper[4865]: I0216 22:46:35.888273 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f763f86f21e0ae26d6da8694383bf65bf9d2ee2cc3f6331559455f18c1a5ea12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdf0a2e5286d172d17780b0ba829a7b1365b3fcb130069d7497b207041e54d67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1d5f61857de97cd302040f99ba0cf8f6aa30fa344ee9bb5d486222bb59280f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48168de44ee347280252a85da7abba068922ce7558bd01c8e1d11af13f305dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fbb61043059110013736d3d693e6d602b199a4c62907436a4eb88aae192cd51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://decf362b373d3e8cc82caf2afe0405a61ec72f794680345ee6bd4d7acee32cd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b27cdcf9ef2a0fd1f2d90bebc43dfdeb6c8a74de346c0f9d18731cbb2748e93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b27cdcf9ef2a0fd1f2d90bebc43dfdeb6c8a74de346c0f9d18731cbb2748e93\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T22:46:31Z\\\",\\\"message\\\":\\\"b79af60206a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 22:46:31.878240 6289 services_controller.go:453] Built service openshift-marketplace/redhat-marketplace template LB for network=default: []services.LB{}\\\\nF0216 22:46:31.878297 6289 ovnkube.go:137] failed to run 
ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:31Z is after 2025-08-24T17:21:41Z]\\\\nI0216 22:46:31.878302 6289 services_controller.go:452] Built service openshift-cluster-version/cluster-version-operator per-node LB for network=default: []services.LB{}\\\\nI0216 22:46:31.878304 6289 services_controller.go:454] Service openshift-marketplace/redhat-marketplace for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (te\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-v9gjl_openshift-ovn-kubernetes(2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dee85f35d78a0b30018c179a53dd39cbb1b0edfa9897290cd13fdc9680461239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fffded71eee2859beb8f01894e183602492f5c41373ef0e23ee150159d01ae4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fffded71eee2859be
b8f01894e183602492f5c41373ef0e23ee150159d01ae4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v9gjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:35Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:35 crc kubenswrapper[4865]: I0216 22:46:35.906211 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ggbcr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e0ca52e-7cb6-4d90-8d0b-4124cce13447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt54\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt54\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ggbcr\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:35Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:35 crc kubenswrapper[4865]: I0216 22:46:35.926864 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://580bc5f5c63d040c31c80d1c34c70711c035b80d8bd34546ba77e2477a938304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/se
rving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:35Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:35 crc kubenswrapper[4865]: I0216 22:46:35.945775 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:35Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:35 crc kubenswrapper[4865]: I0216 22:46:35.963005 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:35 crc kubenswrapper[4865]: I0216 22:46:35.963048 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:35 crc kubenswrapper[4865]: I0216 22:46:35.963066 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:35 crc 
kubenswrapper[4865]: I0216 22:46:35.963090 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:35 crc kubenswrapper[4865]: I0216 22:46:35.963107 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:35Z","lastTransitionTime":"2026-02-16T22:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:46:35 crc kubenswrapper[4865]: I0216 22:46:35.964721 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:35Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:35 crc kubenswrapper[4865]: I0216 22:46:35.986484 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tqmsq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"518e6107-6873-4bd2-86a6-e422763483ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c555b648518de93a1b9ecb1bb8b232aee9016dd148db53b2b4aaf96dd23951c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwt79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tqmsq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:35Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:36 crc kubenswrapper[4865]: I0216 22:46:36.004153 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-shf2n" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d979a250-e586-4f45-b78e-ce99dbdbe9a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c13351b5aadf20e7142d197d9ae8333bff29ca6c3f7668c93f0dd510b52eb51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df29cc453
707021c661a9f53e979fcab4c05a00e4dcef9f85a727168f1e78ed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-shf2n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:36Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:36 crc kubenswrapper[4865]: I0216 22:46:36.023484 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5ed4f9f-4e35-4b99-a523-d9cd6f90b1fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c3c8e1511bfaab96c1e33d3e1d1dc227344cb92c64170dc40eef40341f9282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef4fbb630ee354131544dade7633424757a1a1b885fec2e271da25984dbec3ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f74aeafd4eb42c1fa802e03dd3df19fc25c3e5546e453008c167955d336ca2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca218b91fbbe3033c021358958b45efb93f3c1875a2d18446d682ad80ff55122\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:36Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:36 crc kubenswrapper[4865]: I0216 22:46:36.042579 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5d9c6929ae494c038ff488c2fee47aad62d31a677069fab576505bc1c56c3ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd10e8581ae8a5ec147d964363cf296c1ea48987e27edfaaf23a4f2cc3b8806c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:36Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:36 crc kubenswrapper[4865]: I0216 22:46:36.058539 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p2bwl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fc94743-41ce-4311-b0a7-d24aec69e9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afb2eef5907b2fb1f43b59f3b8afb68253d49f3a83f0cdc5cd10df0e45962156\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mntqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p2bwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:36Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:36 crc kubenswrapper[4865]: I0216 22:46:36.065723 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:36 crc kubenswrapper[4865]: I0216 22:46:36.065789 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:36 crc kubenswrapper[4865]: I0216 22:46:36.065812 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:36 crc kubenswrapper[4865]: I0216 22:46:36.065844 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:36 crc kubenswrapper[4865]: I0216 22:46:36.065866 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:36Z","lastTransitionTime":"2026-02-16T22:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:36 crc kubenswrapper[4865]: I0216 22:46:36.077705 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67de42bb03576bab461ec24db86f0fb38229d2b303d174c1892468bcfd20a1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:36Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:36 crc kubenswrapper[4865]: I0216 22:46:36.095088 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af5ee041-5763-4a28-9d12-7ba21bbb9dbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5377362d269a2fbd24e93ddf0ce2803daadff72dd5b6926424e2d818448eb907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c0be88c4425b343893860e323d0e9c1a661e3cd0cadad6714da67ebadf57bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7sl6f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-16T22:46:36Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:36 crc kubenswrapper[4865]: I0216 22:46:36.116722 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4rgl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0091174cc072ed0bc0bfa1a1501b4fa2b801585525dae159b591213c74ddbf5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240fca5a6169674d04d04924dbb1859d9cfad4ebaf3bcb15b685d1dcb52d6879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://240fca5a6169674d04d04924dbb1859d9cfad4ebaf3bcb15b685d1dcb52d6879\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba2a6ff3b821f9bd20c15af6000c4b04b7382cca4bddba49c3cd3a95bf18661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://5ba2a6ff3b821f9bd20c15af6000c4b04b7382cca4bddba49c3cd3a95bf18661\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e050e591eac20bf9ccc57f3c4eac82e3a420e9e850ad4057ce69c236a51dd5ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e050e591eac20bf9ccc57f3c4eac82e3a420e9e850ad4057ce69c236a51dd5ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df6fbcf30225717389e253f415a98dcb258103cf591c6802daeb657018386ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1df6fbcf30225717389e253f415a98dcb258103cf591c6802daeb657018386ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9e249b0c4127dfa42d0aeb4663cec7ecc73cec31a922fc17a0c772cce72c017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9e249b0c4127dfa42d0aeb4663cec7ecc73cec31a922fc17a0c772cce72c017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de26abfeb26481d43117471fa3512ef69ab782712371782639c24c8232786478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de26abfeb26481d43117471fa3512ef69ab782712371782639c24c8232786478\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":
\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4rgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:36Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:36 crc kubenswrapper[4865]: I0216 22:46:36.169992 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:36 crc kubenswrapper[4865]: I0216 22:46:36.170068 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:36 crc kubenswrapper[4865]: I0216 22:46:36.170092 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:36 crc kubenswrapper[4865]: I0216 22:46:36.170124 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:36 crc kubenswrapper[4865]: I0216 22:46:36.170148 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:36Z","lastTransitionTime":"2026-02-16T22:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:36 crc kubenswrapper[4865]: I0216 22:46:36.182581 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 22:46:36 crc kubenswrapper[4865]: I0216 22:46:36.182757 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 22:46:36 crc kubenswrapper[4865]: E0216 22:46:36.182787 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 22:46:52.182749833 +0000 UTC m=+52.506456824 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:46:36 crc kubenswrapper[4865]: I0216 22:46:36.182850 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 22:46:36 crc kubenswrapper[4865]: E0216 22:46:36.182922 4865 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 22:46:36 crc kubenswrapper[4865]: E0216 22:46:36.183054 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 22:46:52.183030181 +0000 UTC m=+52.506737172 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 22:46:36 crc kubenswrapper[4865]: E0216 22:46:36.183053 4865 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 22:46:36 crc kubenswrapper[4865]: E0216 22:46:36.183718 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 22:46:52.18370032 +0000 UTC m=+52.507407321 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 22:46:36 crc kubenswrapper[4865]: I0216 22:46:36.274021 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:36 crc kubenswrapper[4865]: I0216 22:46:36.274127 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:36 crc kubenswrapper[4865]: I0216 22:46:36.274144 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:36 crc kubenswrapper[4865]: I0216 22:46:36.274170 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:36 crc 
kubenswrapper[4865]: I0216 22:46:36.274187 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:36Z","lastTransitionTime":"2026-02-16T22:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:46:36 crc kubenswrapper[4865]: I0216 22:46:36.283943 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 22:46:36 crc kubenswrapper[4865]: I0216 22:46:36.284051 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 22:46:36 crc kubenswrapper[4865]: E0216 22:46:36.284178 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 22:46:36 crc kubenswrapper[4865]: E0216 22:46:36.284216 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 22:46:36 crc kubenswrapper[4865]: E0216 22:46:36.284238 4865 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod 
openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 22:46:36 crc kubenswrapper[4865]: E0216 22:46:36.284353 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 22:46:52.28432241 +0000 UTC m=+52.608029411 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 22:46:36 crc kubenswrapper[4865]: E0216 22:46:36.284402 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 22:46:36 crc kubenswrapper[4865]: E0216 22:46:36.284440 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 22:46:36 crc kubenswrapper[4865]: E0216 22:46:36.284465 4865 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 22:46:36 crc kubenswrapper[4865]: E0216 22:46:36.284571 4865 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 22:46:52.284542066 +0000 UTC m=+52.608249067 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 22:46:36 crc kubenswrapper[4865]: I0216 22:46:36.367417 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:36 crc kubenswrapper[4865]: I0216 22:46:36.367488 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:36 crc kubenswrapper[4865]: I0216 22:46:36.367505 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:36 crc kubenswrapper[4865]: I0216 22:46:36.367532 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:36 crc kubenswrapper[4865]: I0216 22:46:36.367553 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:36Z","lastTransitionTime":"2026-02-16T22:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:36 crc kubenswrapper[4865]: E0216 22:46:36.389554 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:46:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:46:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:46:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:46:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8882e2e1-623c-454f-a6ef-195b25a9cb95\\\",\\\"systemUUID\\\":\\\"e2a2196d-095f-444c-b467-b5377cee59c0\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:36Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:36 crc kubenswrapper[4865]: I0216 22:46:36.393345 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 11:57:41.390849713 +0000 UTC Feb 16 22:46:36 crc kubenswrapper[4865]: I0216 22:46:36.395220 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:36 crc kubenswrapper[4865]: I0216 22:46:36.395348 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:36 crc kubenswrapper[4865]: I0216 22:46:36.395397 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:36 crc kubenswrapper[4865]: I0216 22:46:36.395463 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:36 crc kubenswrapper[4865]: I0216 22:46:36.395485 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:36Z","lastTransitionTime":"2026-02-16T22:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:46:36 crc kubenswrapper[4865]: I0216 22:46:36.414387 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 22:46:36 crc kubenswrapper[4865]: I0216 22:46:36.414383 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ggbcr" Feb 16 22:46:36 crc kubenswrapper[4865]: I0216 22:46:36.414620 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 22:46:36 crc kubenswrapper[4865]: E0216 22:46:36.414721 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ggbcr" podUID="0e0ca52e-7cb6-4d90-8d0b-4124cce13447" Feb 16 22:46:36 crc kubenswrapper[4865]: E0216 22:46:36.414572 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 22:46:36 crc kubenswrapper[4865]: E0216 22:46:36.414862 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 22:46:36 crc kubenswrapper[4865]: I0216 22:46:36.415052 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 22:46:36 crc kubenswrapper[4865]: E0216 22:46:36.415335 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 22:46:36 crc kubenswrapper[4865]: E0216 22:46:36.418994 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:46:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:46:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:36Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:46:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:46:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8882e2e1-623c-454f-a6ef-195b25a9cb95\\\",\\\"systemUUID\\\":\\\"e2a2196d-095f-444c-b467-b5377cee59c0\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:36Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:36 crc kubenswrapper[4865]: I0216 22:46:36.428209 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:36 crc kubenswrapper[4865]: I0216 22:46:36.428429 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:36 crc kubenswrapper[4865]: I0216 22:46:36.428571 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:36 crc kubenswrapper[4865]: I0216 22:46:36.428711 4865 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:36 crc kubenswrapper[4865]: I0216 22:46:36.428866 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:36Z","lastTransitionTime":"2026-02-16T22:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:46:36 crc kubenswrapper[4865]: E0216 22:46:36.449438 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:46:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:46:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:46:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:46:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8882e2e1-623c-454f-a6ef-195b25a9cb95\\\",\\\"systemUUID\\\":\\\"e2a2196d-095f-444c-b467-b5377cee59c0\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:36Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:36 crc kubenswrapper[4865]: I0216 22:46:36.454430 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:36 crc kubenswrapper[4865]: I0216 22:46:36.454497 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:36 crc kubenswrapper[4865]: I0216 22:46:36.454516 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:36 crc kubenswrapper[4865]: I0216 22:46:36.454571 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:36 crc kubenswrapper[4865]: I0216 22:46:36.454590 4865 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:36Z","lastTransitionTime":"2026-02-16T22:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:46:36 crc kubenswrapper[4865]: E0216 22:46:36.475632 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:46:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:46:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:46:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:46:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8882e2e1-623c-454f-a6ef-195b25a9cb95\\\",\\\"systemUUID\\\":\\\"e2a2196d-095f-444c-b467-b5377cee59c0\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:36Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:36 crc kubenswrapper[4865]: I0216 22:46:36.481403 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:36 crc kubenswrapper[4865]: I0216 22:46:36.481477 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:36 crc kubenswrapper[4865]: I0216 22:46:36.481501 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:36 crc kubenswrapper[4865]: I0216 22:46:36.481523 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:36 crc kubenswrapper[4865]: I0216 22:46:36.481541 4865 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:36Z","lastTransitionTime":"2026-02-16T22:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:46:36 crc kubenswrapper[4865]: I0216 22:46:36.486695 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0e0ca52e-7cb6-4d90-8d0b-4124cce13447-metrics-certs\") pod \"network-metrics-daemon-ggbcr\" (UID: \"0e0ca52e-7cb6-4d90-8d0b-4124cce13447\") " pod="openshift-multus/network-metrics-daemon-ggbcr" Feb 16 22:46:36 crc kubenswrapper[4865]: E0216 22:46:36.487031 4865 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 22:46:36 crc kubenswrapper[4865]: E0216 22:46:36.487145 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0e0ca52e-7cb6-4d90-8d0b-4124cce13447-metrics-certs podName:0e0ca52e-7cb6-4d90-8d0b-4124cce13447 nodeName:}" failed. No retries permitted until 2026-02-16 22:46:38.487113463 +0000 UTC m=+38.810820474 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0e0ca52e-7cb6-4d90-8d0b-4124cce13447-metrics-certs") pod "network-metrics-daemon-ggbcr" (UID: "0e0ca52e-7cb6-4d90-8d0b-4124cce13447") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 22:46:36 crc kubenswrapper[4865]: E0216 22:46:36.502557 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:46:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:46:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:46:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:46:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb4
9c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\"
:[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d4
6c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\
\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8882e2e1-623c-454f-a6ef-1
95b25a9cb95\\\",\\\"systemUUID\\\":\\\"e2a2196d-095f-444c-b467-b5377cee59c0\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:36Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:36 crc kubenswrapper[4865]: E0216 22:46:36.502788 4865 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 22:46:36 crc kubenswrapper[4865]: I0216 22:46:36.504934 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:36 crc kubenswrapper[4865]: I0216 22:46:36.504989 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:36 crc kubenswrapper[4865]: I0216 22:46:36.505009 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:36 crc kubenswrapper[4865]: I0216 22:46:36.505033 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:36 crc kubenswrapper[4865]: I0216 22:46:36.505052 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:36Z","lastTransitionTime":"2026-02-16T22:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:46:36 crc kubenswrapper[4865]: I0216 22:46:36.608773 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:36 crc kubenswrapper[4865]: I0216 22:46:36.608841 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:36 crc kubenswrapper[4865]: I0216 22:46:36.608859 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:36 crc kubenswrapper[4865]: I0216 22:46:36.608886 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:36 crc kubenswrapper[4865]: I0216 22:46:36.608904 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:36Z","lastTransitionTime":"2026-02-16T22:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:36 crc kubenswrapper[4865]: I0216 22:46:36.712135 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:36 crc kubenswrapper[4865]: I0216 22:46:36.712200 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:36 crc kubenswrapper[4865]: I0216 22:46:36.712221 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:36 crc kubenswrapper[4865]: I0216 22:46:36.712248 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:36 crc kubenswrapper[4865]: I0216 22:46:36.712267 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:36Z","lastTransitionTime":"2026-02-16T22:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:36 crc kubenswrapper[4865]: I0216 22:46:36.815417 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:36 crc kubenswrapper[4865]: I0216 22:46:36.815507 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:36 crc kubenswrapper[4865]: I0216 22:46:36.815531 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:36 crc kubenswrapper[4865]: I0216 22:46:36.815558 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:36 crc kubenswrapper[4865]: I0216 22:46:36.815576 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:36Z","lastTransitionTime":"2026-02-16T22:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:36 crc kubenswrapper[4865]: I0216 22:46:36.918557 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:36 crc kubenswrapper[4865]: I0216 22:46:36.918622 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:36 crc kubenswrapper[4865]: I0216 22:46:36.918639 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:36 crc kubenswrapper[4865]: I0216 22:46:36.918666 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:36 crc kubenswrapper[4865]: I0216 22:46:36.918685 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:36Z","lastTransitionTime":"2026-02-16T22:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:37 crc kubenswrapper[4865]: I0216 22:46:37.021951 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:37 crc kubenswrapper[4865]: I0216 22:46:37.022323 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:37 crc kubenswrapper[4865]: I0216 22:46:37.022492 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:37 crc kubenswrapper[4865]: I0216 22:46:37.022650 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:37 crc kubenswrapper[4865]: I0216 22:46:37.022809 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:37Z","lastTransitionTime":"2026-02-16T22:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:37 crc kubenswrapper[4865]: I0216 22:46:37.125945 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:37 crc kubenswrapper[4865]: I0216 22:46:37.126002 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:37 crc kubenswrapper[4865]: I0216 22:46:37.126019 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:37 crc kubenswrapper[4865]: I0216 22:46:37.126043 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:37 crc kubenswrapper[4865]: I0216 22:46:37.126062 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:37Z","lastTransitionTime":"2026-02-16T22:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:37 crc kubenswrapper[4865]: I0216 22:46:37.228882 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:37 crc kubenswrapper[4865]: I0216 22:46:37.228947 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:37 crc kubenswrapper[4865]: I0216 22:46:37.228967 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:37 crc kubenswrapper[4865]: I0216 22:46:37.228993 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:37 crc kubenswrapper[4865]: I0216 22:46:37.229011 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:37Z","lastTransitionTime":"2026-02-16T22:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:37 crc kubenswrapper[4865]: I0216 22:46:37.332243 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:37 crc kubenswrapper[4865]: I0216 22:46:37.332332 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:37 crc kubenswrapper[4865]: I0216 22:46:37.332350 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:37 crc kubenswrapper[4865]: I0216 22:46:37.332401 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:37 crc kubenswrapper[4865]: I0216 22:46:37.332422 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:37Z","lastTransitionTime":"2026-02-16T22:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:37 crc kubenswrapper[4865]: I0216 22:46:37.393877 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 03:27:26.852896255 +0000 UTC Feb 16 22:46:37 crc kubenswrapper[4865]: I0216 22:46:37.435608 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:37 crc kubenswrapper[4865]: I0216 22:46:37.435649 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:37 crc kubenswrapper[4865]: I0216 22:46:37.435670 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:37 crc kubenswrapper[4865]: I0216 22:46:37.435695 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:37 crc kubenswrapper[4865]: I0216 22:46:37.435733 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:37Z","lastTransitionTime":"2026-02-16T22:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:37 crc kubenswrapper[4865]: I0216 22:46:37.538583 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:37 crc kubenswrapper[4865]: I0216 22:46:37.538914 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:37 crc kubenswrapper[4865]: I0216 22:46:37.539084 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:37 crc kubenswrapper[4865]: I0216 22:46:37.539306 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:37 crc kubenswrapper[4865]: I0216 22:46:37.539438 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:37Z","lastTransitionTime":"2026-02-16T22:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:37 crc kubenswrapper[4865]: I0216 22:46:37.643105 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:37 crc kubenswrapper[4865]: I0216 22:46:37.643204 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:37 crc kubenswrapper[4865]: I0216 22:46:37.643223 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:37 crc kubenswrapper[4865]: I0216 22:46:37.643250 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:37 crc kubenswrapper[4865]: I0216 22:46:37.643269 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:37Z","lastTransitionTime":"2026-02-16T22:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:37 crc kubenswrapper[4865]: I0216 22:46:37.746144 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:37 crc kubenswrapper[4865]: I0216 22:46:37.746192 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:37 crc kubenswrapper[4865]: I0216 22:46:37.746210 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:37 crc kubenswrapper[4865]: I0216 22:46:37.746234 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:37 crc kubenswrapper[4865]: I0216 22:46:37.746249 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:37Z","lastTransitionTime":"2026-02-16T22:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:37 crc kubenswrapper[4865]: I0216 22:46:37.850033 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:37 crc kubenswrapper[4865]: I0216 22:46:37.850083 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:37 crc kubenswrapper[4865]: I0216 22:46:37.850096 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:37 crc kubenswrapper[4865]: I0216 22:46:37.850117 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:37 crc kubenswrapper[4865]: I0216 22:46:37.850132 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:37Z","lastTransitionTime":"2026-02-16T22:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:37 crc kubenswrapper[4865]: I0216 22:46:37.953902 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:37 crc kubenswrapper[4865]: I0216 22:46:37.953971 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:37 crc kubenswrapper[4865]: I0216 22:46:37.953989 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:37 crc kubenswrapper[4865]: I0216 22:46:37.954016 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:37 crc kubenswrapper[4865]: I0216 22:46:37.954033 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:37Z","lastTransitionTime":"2026-02-16T22:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:38 crc kubenswrapper[4865]: I0216 22:46:38.056751 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:38 crc kubenswrapper[4865]: I0216 22:46:38.056868 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:38 crc kubenswrapper[4865]: I0216 22:46:38.056923 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:38 crc kubenswrapper[4865]: I0216 22:46:38.056953 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:38 crc kubenswrapper[4865]: I0216 22:46:38.056974 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:38Z","lastTransitionTime":"2026-02-16T22:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:38 crc kubenswrapper[4865]: I0216 22:46:38.159957 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:38 crc kubenswrapper[4865]: I0216 22:46:38.160016 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:38 crc kubenswrapper[4865]: I0216 22:46:38.160028 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:38 crc kubenswrapper[4865]: I0216 22:46:38.160046 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:38 crc kubenswrapper[4865]: I0216 22:46:38.160063 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:38Z","lastTransitionTime":"2026-02-16T22:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:38 crc kubenswrapper[4865]: I0216 22:46:38.263061 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:38 crc kubenswrapper[4865]: I0216 22:46:38.263129 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:38 crc kubenswrapper[4865]: I0216 22:46:38.263147 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:38 crc kubenswrapper[4865]: I0216 22:46:38.263175 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:38 crc kubenswrapper[4865]: I0216 22:46:38.263198 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:38Z","lastTransitionTime":"2026-02-16T22:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:38 crc kubenswrapper[4865]: I0216 22:46:38.367026 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:38 crc kubenswrapper[4865]: I0216 22:46:38.367122 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:38 crc kubenswrapper[4865]: I0216 22:46:38.367146 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:38 crc kubenswrapper[4865]: I0216 22:46:38.367184 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:38 crc kubenswrapper[4865]: I0216 22:46:38.367206 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:38Z","lastTransitionTime":"2026-02-16T22:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:46:38 crc kubenswrapper[4865]: I0216 22:46:38.394838 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 21:34:57.49950896 +0000 UTC Feb 16 22:46:38 crc kubenswrapper[4865]: I0216 22:46:38.413558 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 22:46:38 crc kubenswrapper[4865]: I0216 22:46:38.413628 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 22:46:38 crc kubenswrapper[4865]: I0216 22:46:38.413833 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 22:46:38 crc kubenswrapper[4865]: E0216 22:46:38.413827 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 22:46:38 crc kubenswrapper[4865]: I0216 22:46:38.414092 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ggbcr" Feb 16 22:46:38 crc kubenswrapper[4865]: E0216 22:46:38.414258 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 22:46:38 crc kubenswrapper[4865]: E0216 22:46:38.414576 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ggbcr" podUID="0e0ca52e-7cb6-4d90-8d0b-4124cce13447" Feb 16 22:46:38 crc kubenswrapper[4865]: E0216 22:46:38.414767 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 22:46:38 crc kubenswrapper[4865]: I0216 22:46:38.415129 4865 scope.go:117] "RemoveContainer" containerID="e74a443b63735422bd0f5ad851213f393074c4bda570e41ff102c84aa9ac2929" Feb 16 22:46:38 crc kubenswrapper[4865]: I0216 22:46:38.471116 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:38 crc kubenswrapper[4865]: I0216 22:46:38.471173 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:38 crc kubenswrapper[4865]: I0216 22:46:38.471191 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:38 crc kubenswrapper[4865]: I0216 22:46:38.471217 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:38 crc kubenswrapper[4865]: I0216 22:46:38.471235 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:38Z","lastTransitionTime":"2026-02-16T22:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:38 crc kubenswrapper[4865]: I0216 22:46:38.511811 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0e0ca52e-7cb6-4d90-8d0b-4124cce13447-metrics-certs\") pod \"network-metrics-daemon-ggbcr\" (UID: \"0e0ca52e-7cb6-4d90-8d0b-4124cce13447\") " pod="openshift-multus/network-metrics-daemon-ggbcr" Feb 16 22:46:38 crc kubenswrapper[4865]: E0216 22:46:38.512061 4865 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 22:46:38 crc kubenswrapper[4865]: E0216 22:46:38.512142 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0e0ca52e-7cb6-4d90-8d0b-4124cce13447-metrics-certs podName:0e0ca52e-7cb6-4d90-8d0b-4124cce13447 nodeName:}" failed. No retries permitted until 2026-02-16 22:46:42.512117041 +0000 UTC m=+42.835824032 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0e0ca52e-7cb6-4d90-8d0b-4124cce13447-metrics-certs") pod "network-metrics-daemon-ggbcr" (UID: "0e0ca52e-7cb6-4d90-8d0b-4124cce13447") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 22:46:38 crc kubenswrapper[4865]: I0216 22:46:38.574568 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:38 crc kubenswrapper[4865]: I0216 22:46:38.574648 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:38 crc kubenswrapper[4865]: I0216 22:46:38.574668 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:38 crc kubenswrapper[4865]: I0216 22:46:38.574700 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:38 crc kubenswrapper[4865]: I0216 22:46:38.574724 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:38Z","lastTransitionTime":"2026-02-16T22:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:38 crc kubenswrapper[4865]: I0216 22:46:38.678193 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:38 crc kubenswrapper[4865]: I0216 22:46:38.678244 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:38 crc kubenswrapper[4865]: I0216 22:46:38.678261 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:38 crc kubenswrapper[4865]: I0216 22:46:38.678311 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:38 crc kubenswrapper[4865]: I0216 22:46:38.678335 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:38Z","lastTransitionTime":"2026-02-16T22:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:38 crc kubenswrapper[4865]: I0216 22:46:38.781933 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:38 crc kubenswrapper[4865]: I0216 22:46:38.782021 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:38 crc kubenswrapper[4865]: I0216 22:46:38.782044 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:38 crc kubenswrapper[4865]: I0216 22:46:38.782078 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:38 crc kubenswrapper[4865]: I0216 22:46:38.782100 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:38Z","lastTransitionTime":"2026-02-16T22:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:38 crc kubenswrapper[4865]: I0216 22:46:38.812363 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 16 22:46:38 crc kubenswrapper[4865]: I0216 22:46:38.815148 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d1932a518fb9c7c514939c9efdb9f5ef9b52fa277bf27931a2fe29f04c486141"} Feb 16 22:46:38 crc kubenswrapper[4865]: I0216 22:46:38.815684 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 22:46:38 crc kubenswrapper[4865]: I0216 22:46:38.833631 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4rgl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0091174cc072e
d0bc0bfa1a1501b4fa2b801585525dae159b591213c74ddbf5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240fca5a6169674d04d04924dbb1859d9cfad4ebaf3bcb15b685d1dcb52d6879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://240fca5a6169674d04d04924dbb1859d9cfad4ebaf3bcb15b685d1dcb52d6879\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba2a6ff3b821f9bd20c15af6000c4b04b7382cca4bddba49c3cd3a95bf18661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ba2a6ff3b821f9bd20c15af6000c4b04b7382cca4bddba49c3cd3a95bf18661\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e050e591eac20bf9ccc57f3c4eac82e3a420e9e850ad4057ce69c236a51dd5ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd36
7c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e050e591eac20bf9ccc57f3c4eac82e3a420e9e850ad4057ce69c236a51dd5ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df6fbcf30225717389e253f415a98dcb258103cf591c6802daeb657018386ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1df6fbcf30225717389e253f415a98dcb258103cf591c6802daeb657018386ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9e249b0c4127dfa42d0aeb4663cec7ecc73cec31a922fc17a0c772cce72c017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9e249b0c4127dfa42d0aeb4663cec7ecc73cec31a922fc17a0c772cce72c017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de26abfeb26481d43117471fa3512ef69ab782712371782639c24c8232786478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\
"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de26abfeb26481d43117471fa3512ef69ab782712371782639c24c8232786478\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4rgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:38Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:38 crc kubenswrapper[4865]: I0216 22:46:38.850529 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67de42bb03576bab461ec24db86f0fb38229d2b303d174c1892468bcfd20a1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T22:46:38Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:38 crc kubenswrapper[4865]: I0216 22:46:38.869201 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af5ee041-5763-4a28-9d12-7ba21bbb9dbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5377362d269a2fbd24e93ddf0ce2803daadff72dd5b6926424e2d818448eb907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c0be88c4425b343893860e323d0e9c1a661e3cd0cadad6714da67ebadf57bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7sl6f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:38Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:38 crc kubenswrapper[4865]: I0216 22:46:38.885893 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:38 crc kubenswrapper[4865]: I0216 
22:46:38.886197 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:38 crc kubenswrapper[4865]: I0216 22:46:38.886335 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:38 crc kubenswrapper[4865]: I0216 22:46:38.886458 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:38 crc kubenswrapper[4865]: I0216 22:46:38.886569 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:38Z","lastTransitionTime":"2026-02-16T22:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:46:38 crc kubenswrapper[4865]: I0216 22:46:38.897522 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:38Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:38 crc kubenswrapper[4865]: I0216 22:46:38.915627 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rhf5k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec3dce5a-cc80-4a4a-afa1-1ce88585fe7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af23c5919f064a067e1f6ffd9bcfc669c95f789fecc58b6de768c838fff8eeb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42pjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rhf5k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:38Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:38 crc kubenswrapper[4865]: I0216 22:46:38.948151 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f763f86f21e0ae26d6da8694383bf65bf9d2ee2cc3f6331559455f18c1a5ea12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdf0a2e5286d172d17780b0ba829a7b1365b3fcb130069d7497b207041e54d67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1d5f61857de97cd302040f99ba0cf8f6aa30fa344ee9bb5d486222bb59280f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48168de44ee347280252a85da7abba068922ce7558bd01c8e1d11af13f305dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fbb61043059110013736d3d693e6d602b199a4c62907436a4eb88aae192cd51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://decf362b373d3e8cc82caf2afe0405a61ec72f794680345ee6bd4d7acee32cd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b27cdcf9ef2a0fd1f2d90bebc43dfdeb6c8a74de346c0f9d18731cbb2748e93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b27cdcf9ef2a0fd1f2d90bebc43dfdeb6c8a74de346c0f9d18731cbb2748e93\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T22:46:31Z\\\",\\\"message\\\":\\\"b79af60206a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 22:46:31.878240 6289 services_controller.go:453] Built service openshift-marketplace/redhat-marketplace template LB for network=default: []services.LB{}\\\\nF0216 22:46:31.878297 6289 ovnkube.go:137] failed to run 
ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:31Z is after 2025-08-24T17:21:41Z]\\\\nI0216 22:46:31.878302 6289 services_controller.go:452] Built service openshift-cluster-version/cluster-version-operator per-node LB for network=default: []services.LB{}\\\\nI0216 22:46:31.878304 6289 services_controller.go:454] Service openshift-marketplace/redhat-marketplace for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (te\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-v9gjl_openshift-ovn-kubernetes(2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dee85f35d78a0b30018c179a53dd39cbb1b0edfa9897290cd13fdc9680461239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fffded71eee2859beb8f01894e183602492f5c41373ef0e23ee150159d01ae4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fffded71eee2859be
b8f01894e183602492f5c41373ef0e23ee150159d01ae4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v9gjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:38Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:38 crc kubenswrapper[4865]: I0216 22:46:38.964241 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8eb70ba-80dc-4449-b629-8f2d3462ac25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9420ab4fc2cfb723f98690b12676378560f2ca512ea2f23ea4108561bcf0a83b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d5f3b449d7f497dd16213f7a376d725ed75adaeb27449fa2005893b85cd8f9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ddcd1ac2d814341fef78f1d3dcb8d8e04d5e8506ca793c023085f0e019fc44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1932a518fb9c7c514939c9efdb9f5ef9b52fa277bf27931a2fe29f04c486141\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e74a443b63735422bd0f5ad851213f393074c4bda570e41ff102c84aa9ac2929\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 22:46:20.218565 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 22:46:20.218864 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 22:46:20.220590 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3068580495/tls.crt::/tmp/serving-cert-3068580495/tls.key\\\\\\\"\\\\nI0216 22:46:20.677081 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 22:46:20.691270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 22:46:20.691373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 22:46:20.698439 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 22:46:20.698472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 22:46:20.713488 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 22:46:20.713510 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 22:46:20.713535 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 22:46:20.713543 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 22:46:20.713547 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 22:46:20.713551 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 22:46:20.713555 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 22:46:20.713558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 22:46:20.715166 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://083e95882332fbcae3b7e3c07a19c05645e560b7c5e92813464a5ca4e5e52a26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e53d3f26f921be607e616b7c8790e3be0605a2b94a3f7be2b62e1beaefb350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3e53d3f26f921be607e616b7c8790e3be0605a2b94a3f7be2b62e1beaefb350\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:38Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:38 crc kubenswrapper[4865]: I0216 22:46:38.977238 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:38Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:38 crc kubenswrapper[4865]: I0216 22:46:38.988498 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:38Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:38 crc kubenswrapper[4865]: I0216 22:46:38.988972 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:38 crc kubenswrapper[4865]: I0216 22:46:38.988997 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:38 crc kubenswrapper[4865]: I0216 22:46:38.989005 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:38 crc kubenswrapper[4865]: I0216 22:46:38.989019 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:38 crc kubenswrapper[4865]: I0216 22:46:38.989028 4865 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:38Z","lastTransitionTime":"2026-02-16T22:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:46:39 crc kubenswrapper[4865]: I0216 22:46:39.000495 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tqmsq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"518e6107-6873-4bd2-86a6-e422763483ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c555b648518de93a1b9ecb1bb8b232aee9016dd148db53b2b4aaf96dd23951c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwt79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:
46:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tqmsq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:38Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:39 crc kubenswrapper[4865]: I0216 22:46:39.011704 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-shf2n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d979a250-e586-4f45-b78e-ce99dbdbe9a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c13351b5aadf20e7142d197d9ae8333bff29ca6c3f7668c93f0dd510b52eb51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-
proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df29cc453707021c661a9f53e979fcab4c05a00e4dcef9f85a727168f1e78ed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-shf2n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:39Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:39 crc kubenswrapper[4865]: I0216 22:46:39.023705 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ggbcr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e0ca52e-7cb6-4d90-8d0b-4124cce13447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt54\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt54\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ggbcr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:39Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:39 crc 
kubenswrapper[4865]: I0216 22:46:39.037104 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://580bc5f5c63d040c31c80d1c34c70711c035b80d8bd34546ba77e2477a938304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:39Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:39 crc kubenswrapper[4865]: I0216 22:46:39.047823 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p2bwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fc94743-41ce-4311-b0a7-d24aec69e9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afb2eef5907b2fb1f43b59f3b8afb68253d49f3a83f0cdc5cd10df0e45962156\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"run
ning\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mntqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p2bwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:39Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:39 crc kubenswrapper[4865]: I0216 22:46:39.060386 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5ed4f9f-4e35-4b99-a523-d9cd6f90b1fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c3c8e1511bfaab96c1e33d3e1d1dc227344cb92c64170dc40eef40341f9282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef4fbb630ee354131544dade7633424757a1a1b885fec2e271da25984dbec3ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f74aeafd4eb42c1fa802e03dd3df19fc25c3e5546e453008c167955d336ca2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca218b91fbbe3033c021358958b45efb93f3c1875a2d18446d682ad80ff55122\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:39Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:39 crc kubenswrapper[4865]: I0216 22:46:39.077208 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5d9c6929ae494c038ff488c2fee47aad62d31a677069fab576505bc1c56c3ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd10e8581ae8a5ec147d964363cf296c1ea48987e27edfaaf23a4f2cc3b8806c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:39Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:39 crc kubenswrapper[4865]: I0216 22:46:39.091249 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:39 crc kubenswrapper[4865]: I0216 22:46:39.091414 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:39 crc kubenswrapper[4865]: I0216 22:46:39.091474 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:39 crc kubenswrapper[4865]: I0216 22:46:39.091540 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:39 crc kubenswrapper[4865]: I0216 22:46:39.091596 4865 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:39Z","lastTransitionTime":"2026-02-16T22:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:46:39 crc kubenswrapper[4865]: I0216 22:46:39.194564 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:39 crc kubenswrapper[4865]: I0216 22:46:39.194936 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:39 crc kubenswrapper[4865]: I0216 22:46:39.195086 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:39 crc kubenswrapper[4865]: I0216 22:46:39.195231 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:39 crc kubenswrapper[4865]: I0216 22:46:39.195432 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:39Z","lastTransitionTime":"2026-02-16T22:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:39 crc kubenswrapper[4865]: I0216 22:46:39.299119 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:39 crc kubenswrapper[4865]: I0216 22:46:39.299666 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:39 crc kubenswrapper[4865]: I0216 22:46:39.299850 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:39 crc kubenswrapper[4865]: I0216 22:46:39.299992 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:39 crc kubenswrapper[4865]: I0216 22:46:39.300122 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:39Z","lastTransitionTime":"2026-02-16T22:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:39 crc kubenswrapper[4865]: I0216 22:46:39.395267 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 03:05:43.272695896 +0000 UTC Feb 16 22:46:39 crc kubenswrapper[4865]: I0216 22:46:39.404100 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:39 crc kubenswrapper[4865]: I0216 22:46:39.404176 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:39 crc kubenswrapper[4865]: I0216 22:46:39.404198 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:39 crc kubenswrapper[4865]: I0216 22:46:39.404228 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:39 crc kubenswrapper[4865]: I0216 22:46:39.404249 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:39Z","lastTransitionTime":"2026-02-16T22:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:39 crc kubenswrapper[4865]: I0216 22:46:39.507939 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:39 crc kubenswrapper[4865]: I0216 22:46:39.508368 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:39 crc kubenswrapper[4865]: I0216 22:46:39.508512 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:39 crc kubenswrapper[4865]: I0216 22:46:39.508709 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:39 crc kubenswrapper[4865]: I0216 22:46:39.508834 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:39Z","lastTransitionTime":"2026-02-16T22:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:39 crc kubenswrapper[4865]: I0216 22:46:39.613055 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:39 crc kubenswrapper[4865]: I0216 22:46:39.613122 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:39 crc kubenswrapper[4865]: I0216 22:46:39.613144 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:39 crc kubenswrapper[4865]: I0216 22:46:39.613174 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:39 crc kubenswrapper[4865]: I0216 22:46:39.613193 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:39Z","lastTransitionTime":"2026-02-16T22:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:39 crc kubenswrapper[4865]: I0216 22:46:39.716456 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:39 crc kubenswrapper[4865]: I0216 22:46:39.716523 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:39 crc kubenswrapper[4865]: I0216 22:46:39.716541 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:39 crc kubenswrapper[4865]: I0216 22:46:39.716569 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:39 crc kubenswrapper[4865]: I0216 22:46:39.716587 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:39Z","lastTransitionTime":"2026-02-16T22:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:39 crc kubenswrapper[4865]: I0216 22:46:39.819629 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:39 crc kubenswrapper[4865]: I0216 22:46:39.819690 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:39 crc kubenswrapper[4865]: I0216 22:46:39.819708 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:39 crc kubenswrapper[4865]: I0216 22:46:39.819734 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:39 crc kubenswrapper[4865]: I0216 22:46:39.819752 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:39Z","lastTransitionTime":"2026-02-16T22:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:39 crc kubenswrapper[4865]: I0216 22:46:39.923464 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:39 crc kubenswrapper[4865]: I0216 22:46:39.923515 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:39 crc kubenswrapper[4865]: I0216 22:46:39.923531 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:39 crc kubenswrapper[4865]: I0216 22:46:39.923558 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:39 crc kubenswrapper[4865]: I0216 22:46:39.923577 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:39Z","lastTransitionTime":"2026-02-16T22:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:40 crc kubenswrapper[4865]: I0216 22:46:40.026953 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:40 crc kubenswrapper[4865]: I0216 22:46:40.027018 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:40 crc kubenswrapper[4865]: I0216 22:46:40.027042 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:40 crc kubenswrapper[4865]: I0216 22:46:40.027072 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:40 crc kubenswrapper[4865]: I0216 22:46:40.027092 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:40Z","lastTransitionTime":"2026-02-16T22:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:40 crc kubenswrapper[4865]: I0216 22:46:40.130715 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:40 crc kubenswrapper[4865]: I0216 22:46:40.130780 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:40 crc kubenswrapper[4865]: I0216 22:46:40.130803 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:40 crc kubenswrapper[4865]: I0216 22:46:40.130835 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:40 crc kubenswrapper[4865]: I0216 22:46:40.130857 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:40Z","lastTransitionTime":"2026-02-16T22:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:40 crc kubenswrapper[4865]: I0216 22:46:40.234617 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:40 crc kubenswrapper[4865]: I0216 22:46:40.234672 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:40 crc kubenswrapper[4865]: I0216 22:46:40.234688 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:40 crc kubenswrapper[4865]: I0216 22:46:40.234711 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:40 crc kubenswrapper[4865]: I0216 22:46:40.234728 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:40Z","lastTransitionTime":"2026-02-16T22:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:40 crc kubenswrapper[4865]: I0216 22:46:40.343932 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:40 crc kubenswrapper[4865]: I0216 22:46:40.343995 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:40 crc kubenswrapper[4865]: I0216 22:46:40.344007 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:40 crc kubenswrapper[4865]: I0216 22:46:40.344029 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:40 crc kubenswrapper[4865]: I0216 22:46:40.344039 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:40Z","lastTransitionTime":"2026-02-16T22:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:46:40 crc kubenswrapper[4865]: I0216 22:46:40.396799 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 11:00:46.247951833 +0000 UTC Feb 16 22:46:40 crc kubenswrapper[4865]: I0216 22:46:40.414184 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 22:46:40 crc kubenswrapper[4865]: I0216 22:46:40.414205 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 22:46:40 crc kubenswrapper[4865]: E0216 22:46:40.414383 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 22:46:40 crc kubenswrapper[4865]: I0216 22:46:40.414440 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 22:46:40 crc kubenswrapper[4865]: I0216 22:46:40.414516 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ggbcr" Feb 16 22:46:40 crc kubenswrapper[4865]: E0216 22:46:40.414599 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 22:46:40 crc kubenswrapper[4865]: E0216 22:46:40.414719 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ggbcr" podUID="0e0ca52e-7cb6-4d90-8d0b-4124cce13447" Feb 16 22:46:40 crc kubenswrapper[4865]: E0216 22:46:40.414875 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 22:46:40 crc kubenswrapper[4865]: I0216 22:46:40.446780 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5ed4f9f-4e35-4b99-a523-d9cd6f90b1fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c3c8e1511bfaab96c1e33d3e1d1dc227344cb92c64170dc40eef40341f9282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da4
10c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef4fbb630ee354131544dade7633424757a1a1b885fec2e271da25984dbec3ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f74aeafd4eb42c1fa802e03dd3df19fc25c3e5546e453008c167955d336ca2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/et
c/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca218b91fbbe3033c021358958b45efb93f3c1875a2d18446d682ad80ff55122\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:40Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:40 crc kubenswrapper[4865]: I0216 22:46:40.447429 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:40 crc kubenswrapper[4865]: I0216 22:46:40.447496 4865 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:40 crc kubenswrapper[4865]: I0216 22:46:40.447514 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:40 crc kubenswrapper[4865]: I0216 22:46:40.447545 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:40 crc kubenswrapper[4865]: I0216 22:46:40.447563 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:40Z","lastTransitionTime":"2026-02-16T22:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:46:40 crc kubenswrapper[4865]: I0216 22:46:40.467195 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5d9c6929ae494c038ff488c2fee47aad62d31a677069fab576505bc1c56c3ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd10e8581ae8a5ec147d964363cf296c1ea48987e27edfaaf23a4f2cc3b8806c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:40Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:40 crc kubenswrapper[4865]: I0216 22:46:40.483072 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p2bwl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fc94743-41ce-4311-b0a7-d24aec69e9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afb2eef5907b2fb1f43b59f3b8afb68253d49f3a83f0cdc5cd10df0e45962156\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mntqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p2bwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:40Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:40 crc kubenswrapper[4865]: I0216 22:46:40.503108 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67de42bb03576bab461ec24db86f0fb38229d2b303d174c1892468bcfd20a1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T2
2:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:40Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:40 crc kubenswrapper[4865]: I0216 22:46:40.520088 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af5ee041-5763-4a28-9d12-7ba21bbb9dbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5377362d269a2fbd24e93ddf0ce2803daadff72dd5b6926424e2d818448eb907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c0be88c4425b343893860e323d0e9c1a661e3cd
0cadad6714da67ebadf57bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7sl6f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:40Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:40 crc kubenswrapper[4865]: I0216 22:46:40.545408 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4rgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0091174cc072ed0bc0bfa1a1501b4fa2b801585525dae159b591213c74ddbf5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240fca5a6169674d04d04924dbb1859d9cfad4ebaf3bcb15b685d1dcb52d6879\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://240fca5a6169674d04d04924dbb1859d9cfad4ebaf3bcb15b685d1dcb52d6879\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba2a6ff3b821f9bd20c15af6000c4b04b7382cca4bddba49c3cd3a95bf18661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ba2a6ff3b821f9bd20c15af6000c4b04b7382cca4bddba49c3cd3a95bf18661\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e050e591eac20bf9ccc57f3c4eac82e3a420e9e850ad4057ce69c236a51dd5ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e050e591eac20bf9ccc57f3c4eac82e3a420e9e850ad4057ce69c236a51dd5ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df6f
bcf30225717389e253f415a98dcb258103cf591c6802daeb657018386ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1df6fbcf30225717389e253f415a98dcb258103cf591c6802daeb657018386ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9e249b0c4127dfa42d0aeb4663cec7ecc73cec31a922fc17a0c772cce72c017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9e249b0c4127dfa42d0aeb4663cec7ecc73cec31a922fc17a0c772cce72c017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de26abfeb26481d43117471fa3512ef69ab782712371782639c24c8232786478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de26abfeb26481d43117471fa3512ef69ab782712371782639c24c8232786478\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4rgl\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:40Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:40 crc kubenswrapper[4865]: I0216 22:46:40.549682 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:40 crc kubenswrapper[4865]: I0216 22:46:40.549746 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:40 crc kubenswrapper[4865]: I0216 22:46:40.549766 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:40 crc kubenswrapper[4865]: I0216 22:46:40.549791 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:40 crc kubenswrapper[4865]: I0216 22:46:40.549809 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:40Z","lastTransitionTime":"2026-02-16T22:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:40 crc kubenswrapper[4865]: I0216 22:46:40.567005 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:40Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:40 crc kubenswrapper[4865]: I0216 22:46:40.583881 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rhf5k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec3dce5a-cc80-4a4a-afa1-1ce88585fe7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af23c5919f064a067e1f6ffd9bcfc669c95f789fecc58b6de768c838fff8eeb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42pjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rhf5k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:40Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:40 crc kubenswrapper[4865]: I0216 22:46:40.616265 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f763f86f21e0ae26d6da8694383bf65bf9d2ee2cc3f6331559455f18c1a5ea12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdf0a2e5286d172d17780b0ba829a7b1365b3fcb130069d7497b207041e54d67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1d5f61857de97cd302040f99ba0cf8f6aa30fa344ee9bb5d486222bb59280f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48168de44ee347280252a85da7abba068922ce7558bd01c8e1d11af13f305dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fbb61043059110013736d3d693e6d602b199a4c62907436a4eb88aae192cd51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://decf362b373d3e8cc82caf2afe0405a61ec72f794680345ee6bd4d7acee32cd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b27cdcf9ef2a0fd1f2d90bebc43dfdeb6c8a74de346c0f9d18731cbb2748e93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b27cdcf9ef2a0fd1f2d90bebc43dfdeb6c8a74de346c0f9d18731cbb2748e93\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T22:46:31Z\\\",\\\"message\\\":\\\"b79af60206a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 22:46:31.878240 6289 services_controller.go:453] Built service openshift-marketplace/redhat-marketplace template LB for network=default: []services.LB{}\\\\nF0216 22:46:31.878297 6289 ovnkube.go:137] failed to run 
ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:31Z is after 2025-08-24T17:21:41Z]\\\\nI0216 22:46:31.878302 6289 services_controller.go:452] Built service openshift-cluster-version/cluster-version-operator per-node LB for network=default: []services.LB{}\\\\nI0216 22:46:31.878304 6289 services_controller.go:454] Service openshift-marketplace/redhat-marketplace for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (te\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-v9gjl_openshift-ovn-kubernetes(2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dee85f35d78a0b30018c179a53dd39cbb1b0edfa9897290cd13fdc9680461239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fffded71eee2859beb8f01894e183602492f5c41373ef0e23ee150159d01ae4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fffded71eee2859be
b8f01894e183602492f5c41373ef0e23ee150159d01ae4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v9gjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:40Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:40 crc kubenswrapper[4865]: I0216 22:46:40.640909 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8eb70ba-80dc-4449-b629-8f2d3462ac25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9420ab4fc2cfb723f98690b12676378560f2ca512ea2f23ea4108561bcf0a83b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d5f3b449d7f497dd16213f7a376d725ed75adaeb27449fa2005893b85cd8f9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ddcd1ac2d814341fef78f1d3dcb8d8e04d5e8506ca793c023085f0e019fc44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1932a518fb9c7c514939c9efdb9f5ef9b52fa277bf27931a2fe29f04c486141\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e74a443b63735422bd0f5ad851213f393074c4bda570e41ff102c84aa9ac2929\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 22:46:20.218565 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 22:46:20.218864 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 22:46:20.220590 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3068580495/tls.crt::/tmp/serving-cert-3068580495/tls.key\\\\\\\"\\\\nI0216 22:46:20.677081 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 22:46:20.691270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 22:46:20.691373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 22:46:20.698439 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 22:46:20.698472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 22:46:20.713488 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 22:46:20.713510 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 22:46:20.713535 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 22:46:20.713543 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 22:46:20.713547 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 22:46:20.713551 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 22:46:20.713555 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 22:46:20.713558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 22:46:20.715166 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://083e95882332fbcae3b7e3c07a19c05645e560b7c5e92813464a5ca4e5e52a26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e53d3f26f921be607e616b7c8790e3be0605a2b94a3f7be2b62e1beaefb350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3e53d3f26f921be607e616b7c8790e3be0605a2b94a3f7be2b62e1beaefb350\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:40Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:40 crc kubenswrapper[4865]: I0216 22:46:40.652525 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:40 crc kubenswrapper[4865]: I0216 22:46:40.652592 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:40 crc kubenswrapper[4865]: I0216 22:46:40.652611 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:40 crc kubenswrapper[4865]: I0216 22:46:40.652639 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:40 crc kubenswrapper[4865]: I0216 22:46:40.652658 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:40Z","lastTransitionTime":"2026-02-16T22:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:40 crc kubenswrapper[4865]: I0216 22:46:40.662504 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:40Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:40 crc kubenswrapper[4865]: I0216 22:46:40.683469 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tqmsq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"518e6107-6873-4bd2-86a6-e422763483ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c555b648518de93a1b9ecb1bb8b232aee9016dd148db53b2b4aaf96dd23951c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwt79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tqmsq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:40Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:40 crc kubenswrapper[4865]: I0216 22:46:40.700869 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-shf2n" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d979a250-e586-4f45-b78e-ce99dbdbe9a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c13351b5aadf20e7142d197d9ae8333bff29ca6c3f7668c93f0dd510b52eb51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df29cc453
707021c661a9f53e979fcab4c05a00e4dcef9f85a727168f1e78ed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-shf2n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:40Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:40 crc kubenswrapper[4865]: I0216 22:46:40.717599 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ggbcr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e0ca52e-7cb6-4d90-8d0b-4124cce13447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt54\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt54\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ggbcr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:40Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:40 crc 
kubenswrapper[4865]: I0216 22:46:40.736992 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://580bc5f5c63d040c31c80d1c34c70711c035b80d8bd34546ba77e2477a938304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:40Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:40 crc kubenswrapper[4865]: I0216 22:46:40.755469 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:40Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:40 crc kubenswrapper[4865]: I0216 22:46:40.757325 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:40 crc kubenswrapper[4865]: I0216 22:46:40.757373 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:40 crc kubenswrapper[4865]: I0216 22:46:40.757390 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:40 crc kubenswrapper[4865]: I0216 22:46:40.757416 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:40 crc kubenswrapper[4865]: I0216 22:46:40.757436 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:40Z","lastTransitionTime":"2026-02-16T22:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:46:40 crc kubenswrapper[4865]: I0216 22:46:40.867231 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:40 crc kubenswrapper[4865]: I0216 22:46:40.867368 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:40 crc kubenswrapper[4865]: I0216 22:46:40.867397 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:40 crc kubenswrapper[4865]: I0216 22:46:40.867443 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:40 crc kubenswrapper[4865]: I0216 22:46:40.867469 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:40Z","lastTransitionTime":"2026-02-16T22:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:40 crc kubenswrapper[4865]: I0216 22:46:40.971308 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:40 crc kubenswrapper[4865]: I0216 22:46:40.971365 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:40 crc kubenswrapper[4865]: I0216 22:46:40.971388 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:40 crc kubenswrapper[4865]: I0216 22:46:40.971418 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:40 crc kubenswrapper[4865]: I0216 22:46:40.971439 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:40Z","lastTransitionTime":"2026-02-16T22:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:41 crc kubenswrapper[4865]: I0216 22:46:41.074903 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:41 crc kubenswrapper[4865]: I0216 22:46:41.074956 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:41 crc kubenswrapper[4865]: I0216 22:46:41.074967 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:41 crc kubenswrapper[4865]: I0216 22:46:41.074987 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:41 crc kubenswrapper[4865]: I0216 22:46:41.074999 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:41Z","lastTransitionTime":"2026-02-16T22:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:41 crc kubenswrapper[4865]: I0216 22:46:41.178713 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:41 crc kubenswrapper[4865]: I0216 22:46:41.178771 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:41 crc kubenswrapper[4865]: I0216 22:46:41.178789 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:41 crc kubenswrapper[4865]: I0216 22:46:41.178814 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:41 crc kubenswrapper[4865]: I0216 22:46:41.178830 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:41Z","lastTransitionTime":"2026-02-16T22:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:41 crc kubenswrapper[4865]: I0216 22:46:41.282197 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:41 crc kubenswrapper[4865]: I0216 22:46:41.282264 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:41 crc kubenswrapper[4865]: I0216 22:46:41.282317 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:41 crc kubenswrapper[4865]: I0216 22:46:41.282351 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:41 crc kubenswrapper[4865]: I0216 22:46:41.282374 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:41Z","lastTransitionTime":"2026-02-16T22:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:41 crc kubenswrapper[4865]: I0216 22:46:41.385773 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:41 crc kubenswrapper[4865]: I0216 22:46:41.385857 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:41 crc kubenswrapper[4865]: I0216 22:46:41.385874 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:41 crc kubenswrapper[4865]: I0216 22:46:41.385902 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:41 crc kubenswrapper[4865]: I0216 22:46:41.385924 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:41Z","lastTransitionTime":"2026-02-16T22:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:41 crc kubenswrapper[4865]: I0216 22:46:41.397755 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 07:18:50.688506972 +0000 UTC Feb 16 22:46:41 crc kubenswrapper[4865]: I0216 22:46:41.489547 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:41 crc kubenswrapper[4865]: I0216 22:46:41.489615 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:41 crc kubenswrapper[4865]: I0216 22:46:41.489634 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:41 crc kubenswrapper[4865]: I0216 22:46:41.489665 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:41 crc kubenswrapper[4865]: I0216 22:46:41.489683 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:41Z","lastTransitionTime":"2026-02-16T22:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:41 crc kubenswrapper[4865]: I0216 22:46:41.592728 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:41 crc kubenswrapper[4865]: I0216 22:46:41.592796 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:41 crc kubenswrapper[4865]: I0216 22:46:41.592820 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:41 crc kubenswrapper[4865]: I0216 22:46:41.592854 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:41 crc kubenswrapper[4865]: I0216 22:46:41.592876 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:41Z","lastTransitionTime":"2026-02-16T22:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:41 crc kubenswrapper[4865]: I0216 22:46:41.695971 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:41 crc kubenswrapper[4865]: I0216 22:46:41.696032 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:41 crc kubenswrapper[4865]: I0216 22:46:41.696050 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:41 crc kubenswrapper[4865]: I0216 22:46:41.696077 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:41 crc kubenswrapper[4865]: I0216 22:46:41.696096 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:41Z","lastTransitionTime":"2026-02-16T22:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:41 crc kubenswrapper[4865]: I0216 22:46:41.798960 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:41 crc kubenswrapper[4865]: I0216 22:46:41.799002 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:41 crc kubenswrapper[4865]: I0216 22:46:41.799018 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:41 crc kubenswrapper[4865]: I0216 22:46:41.799039 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:41 crc kubenswrapper[4865]: I0216 22:46:41.799055 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:41Z","lastTransitionTime":"2026-02-16T22:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:41 crc kubenswrapper[4865]: I0216 22:46:41.902024 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:41 crc kubenswrapper[4865]: I0216 22:46:41.902088 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:41 crc kubenswrapper[4865]: I0216 22:46:41.902111 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:41 crc kubenswrapper[4865]: I0216 22:46:41.902142 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:41 crc kubenswrapper[4865]: I0216 22:46:41.902165 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:41Z","lastTransitionTime":"2026-02-16T22:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:42 crc kubenswrapper[4865]: I0216 22:46:42.005048 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:42 crc kubenswrapper[4865]: I0216 22:46:42.005096 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:42 crc kubenswrapper[4865]: I0216 22:46:42.005113 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:42 crc kubenswrapper[4865]: I0216 22:46:42.005136 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:42 crc kubenswrapper[4865]: I0216 22:46:42.005153 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:42Z","lastTransitionTime":"2026-02-16T22:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:42 crc kubenswrapper[4865]: I0216 22:46:42.108907 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:42 crc kubenswrapper[4865]: I0216 22:46:42.108973 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:42 crc kubenswrapper[4865]: I0216 22:46:42.108989 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:42 crc kubenswrapper[4865]: I0216 22:46:42.109018 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:42 crc kubenswrapper[4865]: I0216 22:46:42.109040 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:42Z","lastTransitionTime":"2026-02-16T22:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:42 crc kubenswrapper[4865]: I0216 22:46:42.211965 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:42 crc kubenswrapper[4865]: I0216 22:46:42.212003 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:42 crc kubenswrapper[4865]: I0216 22:46:42.212014 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:42 crc kubenswrapper[4865]: I0216 22:46:42.212029 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:42 crc kubenswrapper[4865]: I0216 22:46:42.212039 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:42Z","lastTransitionTime":"2026-02-16T22:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:42 crc kubenswrapper[4865]: I0216 22:46:42.315516 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:42 crc kubenswrapper[4865]: I0216 22:46:42.315641 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:42 crc kubenswrapper[4865]: I0216 22:46:42.315661 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:42 crc kubenswrapper[4865]: I0216 22:46:42.315689 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:42 crc kubenswrapper[4865]: I0216 22:46:42.315749 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:42Z","lastTransitionTime":"2026-02-16T22:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:46:42 crc kubenswrapper[4865]: I0216 22:46:42.398626 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 23:09:31.047948088 +0000 UTC Feb 16 22:46:42 crc kubenswrapper[4865]: I0216 22:46:42.413610 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 22:46:42 crc kubenswrapper[4865]: I0216 22:46:42.413717 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-ggbcr" Feb 16 22:46:42 crc kubenswrapper[4865]: E0216 22:46:42.413808 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 22:46:42 crc kubenswrapper[4865]: I0216 22:46:42.413610 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 22:46:42 crc kubenswrapper[4865]: I0216 22:46:42.413861 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 22:46:42 crc kubenswrapper[4865]: E0216 22:46:42.414035 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ggbcr" podUID="0e0ca52e-7cb6-4d90-8d0b-4124cce13447" Feb 16 22:46:42 crc kubenswrapper[4865]: E0216 22:46:42.414185 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 22:46:42 crc kubenswrapper[4865]: E0216 22:46:42.414365 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 22:46:42 crc kubenswrapper[4865]: I0216 22:46:42.419574 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:42 crc kubenswrapper[4865]: I0216 22:46:42.419628 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:42 crc kubenswrapper[4865]: I0216 22:46:42.419654 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:42 crc kubenswrapper[4865]: I0216 22:46:42.419684 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:42 crc kubenswrapper[4865]: I0216 22:46:42.419705 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:42Z","lastTransitionTime":"2026-02-16T22:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:42 crc kubenswrapper[4865]: I0216 22:46:42.523234 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:42 crc kubenswrapper[4865]: I0216 22:46:42.523349 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:42 crc kubenswrapper[4865]: I0216 22:46:42.523372 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:42 crc kubenswrapper[4865]: I0216 22:46:42.523405 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:42 crc kubenswrapper[4865]: I0216 22:46:42.523429 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:42Z","lastTransitionTime":"2026-02-16T22:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:42 crc kubenswrapper[4865]: I0216 22:46:42.567850 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0e0ca52e-7cb6-4d90-8d0b-4124cce13447-metrics-certs\") pod \"network-metrics-daemon-ggbcr\" (UID: \"0e0ca52e-7cb6-4d90-8d0b-4124cce13447\") " pod="openshift-multus/network-metrics-daemon-ggbcr" Feb 16 22:46:42 crc kubenswrapper[4865]: E0216 22:46:42.568112 4865 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 22:46:42 crc kubenswrapper[4865]: E0216 22:46:42.568218 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0e0ca52e-7cb6-4d90-8d0b-4124cce13447-metrics-certs podName:0e0ca52e-7cb6-4d90-8d0b-4124cce13447 nodeName:}" failed. No retries permitted until 2026-02-16 22:46:50.568190052 +0000 UTC m=+50.891897053 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0e0ca52e-7cb6-4d90-8d0b-4124cce13447-metrics-certs") pod "network-metrics-daemon-ggbcr" (UID: "0e0ca52e-7cb6-4d90-8d0b-4124cce13447") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 22:46:42 crc kubenswrapper[4865]: I0216 22:46:42.627530 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:42 crc kubenswrapper[4865]: I0216 22:46:42.627611 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:42 crc kubenswrapper[4865]: I0216 22:46:42.627640 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:42 crc kubenswrapper[4865]: I0216 22:46:42.627674 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:42 crc kubenswrapper[4865]: I0216 22:46:42.627698 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:42Z","lastTransitionTime":"2026-02-16T22:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:42 crc kubenswrapper[4865]: I0216 22:46:42.731595 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:42 crc kubenswrapper[4865]: I0216 22:46:42.731664 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:42 crc kubenswrapper[4865]: I0216 22:46:42.731687 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:42 crc kubenswrapper[4865]: I0216 22:46:42.731727 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:42 crc kubenswrapper[4865]: I0216 22:46:42.731756 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:42Z","lastTransitionTime":"2026-02-16T22:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:42 crc kubenswrapper[4865]: I0216 22:46:42.834225 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:42 crc kubenswrapper[4865]: I0216 22:46:42.834322 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:42 crc kubenswrapper[4865]: I0216 22:46:42.834342 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:42 crc kubenswrapper[4865]: I0216 22:46:42.834369 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:42 crc kubenswrapper[4865]: I0216 22:46:42.834386 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:42Z","lastTransitionTime":"2026-02-16T22:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:42 crc kubenswrapper[4865]: I0216 22:46:42.938205 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:42 crc kubenswrapper[4865]: I0216 22:46:42.938261 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:42 crc kubenswrapper[4865]: I0216 22:46:42.938304 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:42 crc kubenswrapper[4865]: I0216 22:46:42.938327 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:42 crc kubenswrapper[4865]: I0216 22:46:42.938345 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:42Z","lastTransitionTime":"2026-02-16T22:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:43 crc kubenswrapper[4865]: I0216 22:46:43.042075 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:43 crc kubenswrapper[4865]: I0216 22:46:43.042145 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:43 crc kubenswrapper[4865]: I0216 22:46:43.042162 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:43 crc kubenswrapper[4865]: I0216 22:46:43.042191 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:43 crc kubenswrapper[4865]: I0216 22:46:43.042209 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:43Z","lastTransitionTime":"2026-02-16T22:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:43 crc kubenswrapper[4865]: I0216 22:46:43.145737 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:43 crc kubenswrapper[4865]: I0216 22:46:43.145796 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:43 crc kubenswrapper[4865]: I0216 22:46:43.145817 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:43 crc kubenswrapper[4865]: I0216 22:46:43.145846 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:43 crc kubenswrapper[4865]: I0216 22:46:43.145863 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:43Z","lastTransitionTime":"2026-02-16T22:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:43 crc kubenswrapper[4865]: I0216 22:46:43.249547 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:43 crc kubenswrapper[4865]: I0216 22:46:43.249608 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:43 crc kubenswrapper[4865]: I0216 22:46:43.249631 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:43 crc kubenswrapper[4865]: I0216 22:46:43.249658 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:43 crc kubenswrapper[4865]: I0216 22:46:43.249676 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:43Z","lastTransitionTime":"2026-02-16T22:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:43 crc kubenswrapper[4865]: I0216 22:46:43.358984 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:43 crc kubenswrapper[4865]: I0216 22:46:43.359076 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:43 crc kubenswrapper[4865]: I0216 22:46:43.359103 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:43 crc kubenswrapper[4865]: I0216 22:46:43.359140 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:43 crc kubenswrapper[4865]: I0216 22:46:43.359165 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:43Z","lastTransitionTime":"2026-02-16T22:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:43 crc kubenswrapper[4865]: I0216 22:46:43.399691 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 14:03:49.007437497 +0000 UTC Feb 16 22:46:43 crc kubenswrapper[4865]: I0216 22:46:43.462114 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:43 crc kubenswrapper[4865]: I0216 22:46:43.462170 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:43 crc kubenswrapper[4865]: I0216 22:46:43.462188 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:43 crc kubenswrapper[4865]: I0216 22:46:43.462215 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:43 crc kubenswrapper[4865]: I0216 22:46:43.462232 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:43Z","lastTransitionTime":"2026-02-16T22:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:43 crc kubenswrapper[4865]: I0216 22:46:43.565092 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:43 crc kubenswrapper[4865]: I0216 22:46:43.565155 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:43 crc kubenswrapper[4865]: I0216 22:46:43.565174 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:43 crc kubenswrapper[4865]: I0216 22:46:43.565202 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:43 crc kubenswrapper[4865]: I0216 22:46:43.565222 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:43Z","lastTransitionTime":"2026-02-16T22:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:43 crc kubenswrapper[4865]: I0216 22:46:43.668390 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:43 crc kubenswrapper[4865]: I0216 22:46:43.668459 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:43 crc kubenswrapper[4865]: I0216 22:46:43.668477 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:43 crc kubenswrapper[4865]: I0216 22:46:43.668506 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:43 crc kubenswrapper[4865]: I0216 22:46:43.668526 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:43Z","lastTransitionTime":"2026-02-16T22:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:43 crc kubenswrapper[4865]: I0216 22:46:43.771932 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:43 crc kubenswrapper[4865]: I0216 22:46:43.772008 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:43 crc kubenswrapper[4865]: I0216 22:46:43.772033 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:43 crc kubenswrapper[4865]: I0216 22:46:43.772065 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:43 crc kubenswrapper[4865]: I0216 22:46:43.772093 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:43Z","lastTransitionTime":"2026-02-16T22:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:43 crc kubenswrapper[4865]: I0216 22:46:43.876053 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:43 crc kubenswrapper[4865]: I0216 22:46:43.876132 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:43 crc kubenswrapper[4865]: I0216 22:46:43.876155 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:43 crc kubenswrapper[4865]: I0216 22:46:43.876212 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:43 crc kubenswrapper[4865]: I0216 22:46:43.876239 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:43Z","lastTransitionTime":"2026-02-16T22:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:43 crc kubenswrapper[4865]: I0216 22:46:43.980259 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:43 crc kubenswrapper[4865]: I0216 22:46:43.980373 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:43 crc kubenswrapper[4865]: I0216 22:46:43.980393 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:43 crc kubenswrapper[4865]: I0216 22:46:43.980424 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:43 crc kubenswrapper[4865]: I0216 22:46:43.980445 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:43Z","lastTransitionTime":"2026-02-16T22:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:44 crc kubenswrapper[4865]: I0216 22:46:44.083877 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:44 crc kubenswrapper[4865]: I0216 22:46:44.083958 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:44 crc kubenswrapper[4865]: I0216 22:46:44.083982 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:44 crc kubenswrapper[4865]: I0216 22:46:44.084016 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:44 crc kubenswrapper[4865]: I0216 22:46:44.084039 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:44Z","lastTransitionTime":"2026-02-16T22:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:44 crc kubenswrapper[4865]: I0216 22:46:44.187516 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:44 crc kubenswrapper[4865]: I0216 22:46:44.187574 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:44 crc kubenswrapper[4865]: I0216 22:46:44.187596 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:44 crc kubenswrapper[4865]: I0216 22:46:44.187628 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:44 crc kubenswrapper[4865]: I0216 22:46:44.187649 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:44Z","lastTransitionTime":"2026-02-16T22:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:44 crc kubenswrapper[4865]: I0216 22:46:44.290591 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:44 crc kubenswrapper[4865]: I0216 22:46:44.290657 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:44 crc kubenswrapper[4865]: I0216 22:46:44.290676 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:44 crc kubenswrapper[4865]: I0216 22:46:44.290702 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:44 crc kubenswrapper[4865]: I0216 22:46:44.290720 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:44Z","lastTransitionTime":"2026-02-16T22:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:44 crc kubenswrapper[4865]: I0216 22:46:44.393780 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:44 crc kubenswrapper[4865]: I0216 22:46:44.393854 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:44 crc kubenswrapper[4865]: I0216 22:46:44.393875 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:44 crc kubenswrapper[4865]: I0216 22:46:44.393905 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:44 crc kubenswrapper[4865]: I0216 22:46:44.393923 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:44Z","lastTransitionTime":"2026-02-16T22:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:46:44 crc kubenswrapper[4865]: I0216 22:46:44.400422 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 17:12:56.474168292 +0000 UTC Feb 16 22:46:44 crc kubenswrapper[4865]: I0216 22:46:44.414045 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 22:46:44 crc kubenswrapper[4865]: E0216 22:46:44.414217 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 22:46:44 crc kubenswrapper[4865]: I0216 22:46:44.414582 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 22:46:44 crc kubenswrapper[4865]: I0216 22:46:44.414617 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 22:46:44 crc kubenswrapper[4865]: I0216 22:46:44.414657 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ggbcr" Feb 16 22:46:44 crc kubenswrapper[4865]: E0216 22:46:44.414780 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 22:46:44 crc kubenswrapper[4865]: E0216 22:46:44.415095 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 22:46:44 crc kubenswrapper[4865]: E0216 22:46:44.414991 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ggbcr" podUID="0e0ca52e-7cb6-4d90-8d0b-4124cce13447" Feb 16 22:46:44 crc kubenswrapper[4865]: I0216 22:46:44.497206 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:44 crc kubenswrapper[4865]: I0216 22:46:44.497262 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:44 crc kubenswrapper[4865]: I0216 22:46:44.497306 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:44 crc kubenswrapper[4865]: I0216 22:46:44.497338 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:44 crc kubenswrapper[4865]: I0216 22:46:44.497357 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:44Z","lastTransitionTime":"2026-02-16T22:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:44 crc kubenswrapper[4865]: I0216 22:46:44.600419 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:44 crc kubenswrapper[4865]: I0216 22:46:44.600516 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:44 crc kubenswrapper[4865]: I0216 22:46:44.600535 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:44 crc kubenswrapper[4865]: I0216 22:46:44.600560 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:44 crc kubenswrapper[4865]: I0216 22:46:44.600577 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:44Z","lastTransitionTime":"2026-02-16T22:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:44 crc kubenswrapper[4865]: I0216 22:46:44.704145 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:44 crc kubenswrapper[4865]: I0216 22:46:44.704312 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:44 crc kubenswrapper[4865]: I0216 22:46:44.704331 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:44 crc kubenswrapper[4865]: I0216 22:46:44.704360 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:44 crc kubenswrapper[4865]: I0216 22:46:44.704383 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:44Z","lastTransitionTime":"2026-02-16T22:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:44 crc kubenswrapper[4865]: I0216 22:46:44.807954 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:44 crc kubenswrapper[4865]: I0216 22:46:44.808030 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:44 crc kubenswrapper[4865]: I0216 22:46:44.808052 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:44 crc kubenswrapper[4865]: I0216 22:46:44.808083 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:44 crc kubenswrapper[4865]: I0216 22:46:44.808102 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:44Z","lastTransitionTime":"2026-02-16T22:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:44 crc kubenswrapper[4865]: I0216 22:46:44.911887 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:44 crc kubenswrapper[4865]: I0216 22:46:44.911966 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:44 crc kubenswrapper[4865]: I0216 22:46:44.911984 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:44 crc kubenswrapper[4865]: I0216 22:46:44.912019 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:44 crc kubenswrapper[4865]: I0216 22:46:44.912039 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:44Z","lastTransitionTime":"2026-02-16T22:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:45 crc kubenswrapper[4865]: I0216 22:46:45.015640 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:45 crc kubenswrapper[4865]: I0216 22:46:45.015713 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:45 crc kubenswrapper[4865]: I0216 22:46:45.015730 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:45 crc kubenswrapper[4865]: I0216 22:46:45.015759 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:45 crc kubenswrapper[4865]: I0216 22:46:45.015777 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:45Z","lastTransitionTime":"2026-02-16T22:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:45 crc kubenswrapper[4865]: I0216 22:46:45.120161 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:45 crc kubenswrapper[4865]: I0216 22:46:45.120225 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:45 crc kubenswrapper[4865]: I0216 22:46:45.120244 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:45 crc kubenswrapper[4865]: I0216 22:46:45.120303 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:45 crc kubenswrapper[4865]: I0216 22:46:45.120322 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:45Z","lastTransitionTime":"2026-02-16T22:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:45 crc kubenswrapper[4865]: I0216 22:46:45.224456 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:45 crc kubenswrapper[4865]: I0216 22:46:45.224526 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:45 crc kubenswrapper[4865]: I0216 22:46:45.224545 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:45 crc kubenswrapper[4865]: I0216 22:46:45.224579 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:45 crc kubenswrapper[4865]: I0216 22:46:45.224597 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:45Z","lastTransitionTime":"2026-02-16T22:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:45 crc kubenswrapper[4865]: I0216 22:46:45.327813 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:45 crc kubenswrapper[4865]: I0216 22:46:45.327900 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:45 crc kubenswrapper[4865]: I0216 22:46:45.327926 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:45 crc kubenswrapper[4865]: I0216 22:46:45.327957 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:45 crc kubenswrapper[4865]: I0216 22:46:45.327980 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:45Z","lastTransitionTime":"2026-02-16T22:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:45 crc kubenswrapper[4865]: I0216 22:46:45.400611 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 08:53:36.550357677 +0000 UTC Feb 16 22:46:45 crc kubenswrapper[4865]: I0216 22:46:45.431078 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:45 crc kubenswrapper[4865]: I0216 22:46:45.431130 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:45 crc kubenswrapper[4865]: I0216 22:46:45.431146 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:45 crc kubenswrapper[4865]: I0216 22:46:45.431170 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:45 crc kubenswrapper[4865]: I0216 22:46:45.431190 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:45Z","lastTransitionTime":"2026-02-16T22:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:45 crc kubenswrapper[4865]: I0216 22:46:45.533958 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:45 crc kubenswrapper[4865]: I0216 22:46:45.534080 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:45 crc kubenswrapper[4865]: I0216 22:46:45.534099 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:45 crc kubenswrapper[4865]: I0216 22:46:45.534126 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:45 crc kubenswrapper[4865]: I0216 22:46:45.534144 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:45Z","lastTransitionTime":"2026-02-16T22:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:45 crc kubenswrapper[4865]: I0216 22:46:45.636886 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:45 crc kubenswrapper[4865]: I0216 22:46:45.636948 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:45 crc kubenswrapper[4865]: I0216 22:46:45.636967 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:45 crc kubenswrapper[4865]: I0216 22:46:45.636993 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:45 crc kubenswrapper[4865]: I0216 22:46:45.637013 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:45Z","lastTransitionTime":"2026-02-16T22:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:45 crc kubenswrapper[4865]: I0216 22:46:45.740334 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:45 crc kubenswrapper[4865]: I0216 22:46:45.740401 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:45 crc kubenswrapper[4865]: I0216 22:46:45.740420 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:45 crc kubenswrapper[4865]: I0216 22:46:45.740447 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:45 crc kubenswrapper[4865]: I0216 22:46:45.740465 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:45Z","lastTransitionTime":"2026-02-16T22:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:45 crc kubenswrapper[4865]: I0216 22:46:45.843522 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:45 crc kubenswrapper[4865]: I0216 22:46:45.843603 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:45 crc kubenswrapper[4865]: I0216 22:46:45.843627 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:45 crc kubenswrapper[4865]: I0216 22:46:45.843662 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:45 crc kubenswrapper[4865]: I0216 22:46:45.843681 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:45Z","lastTransitionTime":"2026-02-16T22:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:45 crc kubenswrapper[4865]: I0216 22:46:45.946904 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:45 crc kubenswrapper[4865]: I0216 22:46:45.946970 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:45 crc kubenswrapper[4865]: I0216 22:46:45.946987 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:45 crc kubenswrapper[4865]: I0216 22:46:45.947016 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:45 crc kubenswrapper[4865]: I0216 22:46:45.947033 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:45Z","lastTransitionTime":"2026-02-16T22:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:46 crc kubenswrapper[4865]: I0216 22:46:46.049522 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:46 crc kubenswrapper[4865]: I0216 22:46:46.049573 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:46 crc kubenswrapper[4865]: I0216 22:46:46.049590 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:46 crc kubenswrapper[4865]: I0216 22:46:46.049614 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:46 crc kubenswrapper[4865]: I0216 22:46:46.049632 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:46Z","lastTransitionTime":"2026-02-16T22:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:46 crc kubenswrapper[4865]: I0216 22:46:46.152878 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:46 crc kubenswrapper[4865]: I0216 22:46:46.152972 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:46 crc kubenswrapper[4865]: I0216 22:46:46.152996 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:46 crc kubenswrapper[4865]: I0216 22:46:46.153032 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:46 crc kubenswrapper[4865]: I0216 22:46:46.153055 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:46Z","lastTransitionTime":"2026-02-16T22:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:46 crc kubenswrapper[4865]: I0216 22:46:46.256560 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:46 crc kubenswrapper[4865]: I0216 22:46:46.256632 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:46 crc kubenswrapper[4865]: I0216 22:46:46.256649 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:46 crc kubenswrapper[4865]: I0216 22:46:46.256681 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:46 crc kubenswrapper[4865]: I0216 22:46:46.256700 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:46Z","lastTransitionTime":"2026-02-16T22:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:46 crc kubenswrapper[4865]: I0216 22:46:46.359770 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:46 crc kubenswrapper[4865]: I0216 22:46:46.359839 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:46 crc kubenswrapper[4865]: I0216 22:46:46.359858 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:46 crc kubenswrapper[4865]: I0216 22:46:46.359886 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:46 crc kubenswrapper[4865]: I0216 22:46:46.359907 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:46Z","lastTransitionTime":"2026-02-16T22:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:46:46 crc kubenswrapper[4865]: I0216 22:46:46.401699 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 23:45:12.957012228 +0000 UTC Feb 16 22:46:46 crc kubenswrapper[4865]: I0216 22:46:46.414349 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 22:46:46 crc kubenswrapper[4865]: I0216 22:46:46.414351 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 22:46:46 crc kubenswrapper[4865]: I0216 22:46:46.414454 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 22:46:46 crc kubenswrapper[4865]: I0216 22:46:46.414584 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ggbcr" Feb 16 22:46:46 crc kubenswrapper[4865]: E0216 22:46:46.414840 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 22:46:46 crc kubenswrapper[4865]: E0216 22:46:46.415033 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 22:46:46 crc kubenswrapper[4865]: E0216 22:46:46.415571 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 22:46:46 crc kubenswrapper[4865]: E0216 22:46:46.415670 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ggbcr" podUID="0e0ca52e-7cb6-4d90-8d0b-4124cce13447" Feb 16 22:46:46 crc kubenswrapper[4865]: I0216 22:46:46.463437 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:46 crc kubenswrapper[4865]: I0216 22:46:46.463534 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:46 crc kubenswrapper[4865]: I0216 22:46:46.463552 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:46 crc kubenswrapper[4865]: I0216 22:46:46.463609 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:46 crc kubenswrapper[4865]: I0216 22:46:46.463626 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:46Z","lastTransitionTime":"2026-02-16T22:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:46 crc kubenswrapper[4865]: I0216 22:46:46.567423 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:46 crc kubenswrapper[4865]: I0216 22:46:46.567502 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:46 crc kubenswrapper[4865]: I0216 22:46:46.567544 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:46 crc kubenswrapper[4865]: I0216 22:46:46.567596 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:46 crc kubenswrapper[4865]: I0216 22:46:46.567613 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:46Z","lastTransitionTime":"2026-02-16T22:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:46 crc kubenswrapper[4865]: I0216 22:46:46.671403 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:46 crc kubenswrapper[4865]: I0216 22:46:46.671463 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:46 crc kubenswrapper[4865]: I0216 22:46:46.671483 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:46 crc kubenswrapper[4865]: I0216 22:46:46.671508 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:46 crc kubenswrapper[4865]: I0216 22:46:46.671527 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:46Z","lastTransitionTime":"2026-02-16T22:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:46 crc kubenswrapper[4865]: I0216 22:46:46.762937 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:46 crc kubenswrapper[4865]: I0216 22:46:46.762987 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:46 crc kubenswrapper[4865]: I0216 22:46:46.763008 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:46 crc kubenswrapper[4865]: I0216 22:46:46.763034 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:46 crc kubenswrapper[4865]: I0216 22:46:46.763056 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:46Z","lastTransitionTime":"2026-02-16T22:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:46 crc kubenswrapper[4865]: E0216 22:46:46.782891 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:46:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:46:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:46:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:46:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8882e2e1-623c-454f-a6ef-195b25a9cb95\\\",\\\"systemUUID\\\":\\\"e2a2196d-095f-444c-b467-b5377cee59c0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:46Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:46 crc kubenswrapper[4865]: I0216 22:46:46.789004 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:46 crc kubenswrapper[4865]: I0216 22:46:46.789106 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:46 crc kubenswrapper[4865]: I0216 22:46:46.789154 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:46 crc kubenswrapper[4865]: I0216 22:46:46.789186 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:46 crc kubenswrapper[4865]: I0216 22:46:46.789207 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:46Z","lastTransitionTime":"2026-02-16T22:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:46 crc kubenswrapper[4865]: E0216 22:46:46.808475 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:46:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:46:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:46:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:46:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8882e2e1-623c-454f-a6ef-195b25a9cb95\\\",\\\"systemUUID\\\":\\\"e2a2196d-095f-444c-b467-b5377cee59c0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:46Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:46 crc kubenswrapper[4865]: I0216 22:46:46.840211 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:46 crc kubenswrapper[4865]: I0216 22:46:46.840313 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:46 crc kubenswrapper[4865]: I0216 22:46:46.840344 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:46 crc kubenswrapper[4865]: I0216 22:46:46.840378 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:46 crc kubenswrapper[4865]: I0216 22:46:46.840402 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:46Z","lastTransitionTime":"2026-02-16T22:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:46 crc kubenswrapper[4865]: E0216 22:46:46.864900 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:46:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:46:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:46:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:46:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8882e2e1-623c-454f-a6ef-195b25a9cb95\\\",\\\"systemUUID\\\":\\\"e2a2196d-095f-444c-b467-b5377cee59c0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:46Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:46 crc kubenswrapper[4865]: I0216 22:46:46.870713 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:46 crc kubenswrapper[4865]: I0216 22:46:46.870775 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:46 crc kubenswrapper[4865]: I0216 22:46:46.870796 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:46 crc kubenswrapper[4865]: I0216 22:46:46.870855 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:46 crc kubenswrapper[4865]: I0216 22:46:46.870876 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:46Z","lastTransitionTime":"2026-02-16T22:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:46 crc kubenswrapper[4865]: E0216 22:46:46.890917 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:46:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:46:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:46:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:46:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8882e2e1-623c-454f-a6ef-195b25a9cb95\\\",\\\"systemUUID\\\":\\\"e2a2196d-095f-444c-b467-b5377cee59c0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:46Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:46 crc kubenswrapper[4865]: I0216 22:46:46.900633 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:46 crc kubenswrapper[4865]: I0216 22:46:46.900756 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:46 crc kubenswrapper[4865]: I0216 22:46:46.901580 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:46 crc kubenswrapper[4865]: I0216 22:46:46.901624 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:46 crc kubenswrapper[4865]: I0216 22:46:46.901650 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:46Z","lastTransitionTime":"2026-02-16T22:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:46 crc kubenswrapper[4865]: E0216 22:46:46.923954 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:46:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:46:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:46:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:46:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8882e2e1-623c-454f-a6ef-195b25a9cb95\\\",\\\"systemUUID\\\":\\\"e2a2196d-095f-444c-b467-b5377cee59c0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:46Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:46 crc kubenswrapper[4865]: E0216 22:46:46.924139 4865 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 22:46:46 crc kubenswrapper[4865]: I0216 22:46:46.926042 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:46 crc kubenswrapper[4865]: I0216 22:46:46.926075 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:46 crc kubenswrapper[4865]: I0216 22:46:46.926084 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:46 crc kubenswrapper[4865]: I0216 22:46:46.926101 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:46 crc kubenswrapper[4865]: I0216 22:46:46.926112 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:46Z","lastTransitionTime":"2026-02-16T22:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:47 crc kubenswrapper[4865]: I0216 22:46:47.029139 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:47 crc kubenswrapper[4865]: I0216 22:46:47.029191 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:47 crc kubenswrapper[4865]: I0216 22:46:47.029202 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:47 crc kubenswrapper[4865]: I0216 22:46:47.029221 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:47 crc kubenswrapper[4865]: I0216 22:46:47.029233 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:47Z","lastTransitionTime":"2026-02-16T22:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:47 crc kubenswrapper[4865]: I0216 22:46:47.131613 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:47 crc kubenswrapper[4865]: I0216 22:46:47.131657 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:47 crc kubenswrapper[4865]: I0216 22:46:47.131665 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:47 crc kubenswrapper[4865]: I0216 22:46:47.131682 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:47 crc kubenswrapper[4865]: I0216 22:46:47.131694 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:47Z","lastTransitionTime":"2026-02-16T22:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:47 crc kubenswrapper[4865]: I0216 22:46:47.234411 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:47 crc kubenswrapper[4865]: I0216 22:46:47.234481 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:47 crc kubenswrapper[4865]: I0216 22:46:47.234496 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:47 crc kubenswrapper[4865]: I0216 22:46:47.234517 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:47 crc kubenswrapper[4865]: I0216 22:46:47.234531 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:47Z","lastTransitionTime":"2026-02-16T22:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:47 crc kubenswrapper[4865]: I0216 22:46:47.337256 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:47 crc kubenswrapper[4865]: I0216 22:46:47.337355 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:47 crc kubenswrapper[4865]: I0216 22:46:47.337373 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:47 crc kubenswrapper[4865]: I0216 22:46:47.337400 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:47 crc kubenswrapper[4865]: I0216 22:46:47.337418 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:47Z","lastTransitionTime":"2026-02-16T22:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:47 crc kubenswrapper[4865]: I0216 22:46:47.401917 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 22:18:51.288668512 +0000 UTC Feb 16 22:46:47 crc kubenswrapper[4865]: I0216 22:46:47.415406 4865 scope.go:117] "RemoveContainer" containerID="0b27cdcf9ef2a0fd1f2d90bebc43dfdeb6c8a74de346c0f9d18731cbb2748e93" Feb 16 22:46:47 crc kubenswrapper[4865]: I0216 22:46:47.440127 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:47 crc kubenswrapper[4865]: I0216 22:46:47.440188 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:47 crc kubenswrapper[4865]: I0216 22:46:47.440211 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:47 crc kubenswrapper[4865]: I0216 22:46:47.440238 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:47 crc kubenswrapper[4865]: I0216 22:46:47.440257 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:47Z","lastTransitionTime":"2026-02-16T22:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:47 crc kubenswrapper[4865]: I0216 22:46:47.543804 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:47 crc kubenswrapper[4865]: I0216 22:46:47.544229 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:47 crc kubenswrapper[4865]: I0216 22:46:47.544243 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:47 crc kubenswrapper[4865]: I0216 22:46:47.544260 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:47 crc kubenswrapper[4865]: I0216 22:46:47.544271 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:47Z","lastTransitionTime":"2026-02-16T22:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:47 crc kubenswrapper[4865]: I0216 22:46:47.647594 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:47 crc kubenswrapper[4865]: I0216 22:46:47.647642 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:47 crc kubenswrapper[4865]: I0216 22:46:47.647660 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:47 crc kubenswrapper[4865]: I0216 22:46:47.647686 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:47 crc kubenswrapper[4865]: I0216 22:46:47.647704 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:47Z","lastTransitionTime":"2026-02-16T22:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:47 crc kubenswrapper[4865]: I0216 22:46:47.750839 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:47 crc kubenswrapper[4865]: I0216 22:46:47.750875 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:47 crc kubenswrapper[4865]: I0216 22:46:47.750885 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:47 crc kubenswrapper[4865]: I0216 22:46:47.750902 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:47 crc kubenswrapper[4865]: I0216 22:46:47.750915 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:47Z","lastTransitionTime":"2026-02-16T22:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:47 crc kubenswrapper[4865]: I0216 22:46:47.855652 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:47 crc kubenswrapper[4865]: I0216 22:46:47.855711 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:47 crc kubenswrapper[4865]: I0216 22:46:47.855728 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:47 crc kubenswrapper[4865]: I0216 22:46:47.855751 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:47 crc kubenswrapper[4865]: I0216 22:46:47.855768 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:47Z","lastTransitionTime":"2026-02-16T22:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:47 crc kubenswrapper[4865]: I0216 22:46:47.859877 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v9gjl_2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9/ovnkube-controller/1.log" Feb 16 22:46:47 crc kubenswrapper[4865]: I0216 22:46:47.865596 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" event={"ID":"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9","Type":"ContainerStarted","Data":"91a79105df66a02db4d12994894da106dc07acf1fd46d19a8bd6506533d047c1"} Feb 16 22:46:47 crc kubenswrapper[4865]: I0216 22:46:47.866628 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" Feb 16 22:46:47 crc kubenswrapper[4865]: I0216 22:46:47.894571 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tqmsq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"518e6107-6873-4bd2-86a6-e422763483ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c555b648518de93a1b9ecb1bb8b232aee9016dd
148db53b2b4aaf96dd23951c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwt79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tqmsq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:47Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:47 crc kubenswrapper[4865]: I0216 22:46:47.913503 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-shf2n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d979a250-e586-4f45-b78e-ce99dbdbe9a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c13351b5aadf
20e7142d197d9ae8333bff29ca6c3f7668c93f0dd510b52eb51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df29cc453707021c661a9f53e979fcab4c05a00e4dcef9f85a727168f1e78ed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\
"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-shf2n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:47Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:47 crc kubenswrapper[4865]: I0216 22:46:47.932161 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ggbcr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e0ca52e-7cb6-4d90-8d0b-4124cce13447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt54\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt54\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ggbcr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:47Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:47 crc 
kubenswrapper[4865]: I0216 22:46:47.974770 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:47 crc kubenswrapper[4865]: I0216 22:46:47.974833 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:47 crc kubenswrapper[4865]: I0216 22:46:47.974854 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:47 crc kubenswrapper[4865]: I0216 22:46:47.974881 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:47 crc kubenswrapper[4865]: I0216 22:46:47.974899 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:47Z","lastTransitionTime":"2026-02-16T22:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:47 crc kubenswrapper[4865]: I0216 22:46:47.986658 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://580bc5f5c63d040c31c80d1c34c70711c035b80d8bd34546ba77e2477a938304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:47Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:48 crc kubenswrapper[4865]: I0216 22:46:48.010030 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:48Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:48 crc kubenswrapper[4865]: I0216 22:46:48.026441 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:48Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:48 crc kubenswrapper[4865]: I0216 22:46:48.039022 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5ed4f9f-4e35-4b99-a523-d9cd6f90b1fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c3c8e1511bfaab96c1e33d3e1d1dc227344cb92c64170dc40eef40341f9282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef4fbb630ee354131544dade7633424757a1a1b885fec2e271da25984dbec3ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f74aeafd4eb42c1fa802e03dd3df19fc25c3e5546e453008c167955d336ca2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca218b91fbbe3033c021358958b45efb93f3c1875a2d18446d682ad80ff55122\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:48Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:48 crc kubenswrapper[4865]: I0216 22:46:48.052222 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5d9c6929ae494c038ff488c2fee47aad62d31a677069fab576505bc1c56c3ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd10e8581ae8a5ec147d964363cf296c1ea48987e27edfaaf23a4f2cc3b8806c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:48Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:48 crc kubenswrapper[4865]: I0216 22:46:48.061806 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p2bwl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fc94743-41ce-4311-b0a7-d24aec69e9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afb2eef5907b2fb1f43b59f3b8afb68253d49f3a83f0cdc5cd10df0e45962156\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mntqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p2bwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:48Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:48 crc kubenswrapper[4865]: I0216 22:46:48.073675 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67de42bb03576bab461ec24db86f0fb38229d2b303d174c1892468bcfd20a1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T2
2:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:48Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:48 crc kubenswrapper[4865]: I0216 22:46:48.078339 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:48 crc kubenswrapper[4865]: I0216 22:46:48.078373 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:48 crc kubenswrapper[4865]: I0216 22:46:48.078389 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:48 crc kubenswrapper[4865]: I0216 22:46:48.078410 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:48 crc kubenswrapper[4865]: I0216 22:46:48.078424 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:48Z","lastTransitionTime":"2026-02-16T22:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:48 crc kubenswrapper[4865]: I0216 22:46:48.089397 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af5ee041-5763-4a28-9d12-7ba21bbb9dbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5377362d269a2fbd24e93ddf0ce2803daadff72dd5b6926424e2d818448eb907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c0be88c4425b343893860e323d0e9c1a661e3cd0cadad6714da67ebadf57bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7sl6f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:48Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:48 crc kubenswrapper[4865]: I0216 22:46:48.108730 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4rgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0091174cc072ed0bc0bfa1a1501b4fa2b801585525dae159b591213c74ddbf5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240fca5a6169674d04d04924dbb1859d9cfad4ebaf3bcb15b685d1dcb52d6879\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://240fca5a6169674d04d04924dbb1859d9cfad4ebaf3bcb15b685d1dcb52d6879\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba2a6ff3b821f9bd20c15af6000c4b04b7382cca4bddba49c3cd3a95bf18661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ba2a6ff3b821f9bd20c15af6000c4b04b7382cca4bddba49c3cd3a95bf18661\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e050e591eac20bf9ccc57f3c4eac82e3a420e9e850ad4057ce69c236a51dd5ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e050e591eac20bf9ccc57f3c4eac82e3a420e9e850ad4057ce69c236a51dd5ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df6f
bcf30225717389e253f415a98dcb258103cf591c6802daeb657018386ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1df6fbcf30225717389e253f415a98dcb258103cf591c6802daeb657018386ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9e249b0c4127dfa42d0aeb4663cec7ecc73cec31a922fc17a0c772cce72c017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9e249b0c4127dfa42d0aeb4663cec7ecc73cec31a922fc17a0c772cce72c017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de26abfeb26481d43117471fa3512ef69ab782712371782639c24c8232786478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de26abfeb26481d43117471fa3512ef69ab782712371782639c24c8232786478\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4rgl\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:48Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:48 crc kubenswrapper[4865]: I0216 22:46:48.123823 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rhf5k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec3dce5a-cc80-4a4a-afa1-1ce88585fe7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af23c5919f064a067e1f6ffd9bcfc669c95f789fecc58b6de768c838fff8eeb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42pjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rhf5k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:48Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:48 crc kubenswrapper[4865]: I0216 22:46:48.151181 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f763f86f21e0ae26d6da8694383bf65bf9d2ee2cc3f6331559455f18c1a5ea12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdf0a2e5286d172d17780b0ba829a7b1365b3fcb130069d7497b207041e54d67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1d5f61857de97cd302040f99ba0cf8f6aa30fa344ee9bb5d486222bb59280f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48168de44ee347280252a85da7abba068922ce7558bd01c8e1d11af13f305dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fbb61043059110013736d3d693e6d602b199a4c62907436a4eb88aae192cd51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://decf362b373d3e8cc82caf2afe0405a61ec72f794680345ee6bd4d7acee32cd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91a79105df66a02db4d12994894da106dc07acf1fd46d19a8bd6506533d047c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b27cdcf9ef2a0fd1f2d90bebc43dfdeb6c8a74de346c0f9d18731cbb2748e93\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T22:46:31Z\\\",\\\"message\\\":\\\"b79af60206a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e 
Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 22:46:31.878240 6289 services_controller.go:453] Built service openshift-marketplace/redhat-marketplace template LB for network=default: []services.LB{}\\\\nF0216 22:46:31.878297 6289 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:31Z is after 2025-08-24T17:21:41Z]\\\\nI0216 22:46:31.878302 6289 services_controller.go:452] Built service openshift-cluster-version/cluster-version-operator per-node LB for network=default: []services.LB{}\\\\nI0216 22:46:31.878304 6289 services_controller.go:454] Service openshift-marketplace/redhat-marketplace for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 
(te\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\
\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dee85f35d78a0b30018c179a53dd39cbb1b0edfa9897290cd13fdc9680461239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fffded71eee2859beb8f01894e183602492f5c41373ef0e23ee150159d01ae4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fffded71eee2859beb8f01894e183602492f5c41373ef0e23ee150159d01ae4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v9gjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:48Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:48 crc kubenswrapper[4865]: I0216 22:46:48.166196 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8eb70ba-80dc-4449-b629-8f2d3462ac25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9420ab4fc2cfb723f98690b12676378560f2ca512ea2f23ea4108561bcf0a83b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d5f3b449d7f497dd16213f7a376d725ed75adaeb27449fa2005893b85cd8f9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ddcd1ac2d814341fef78f1d3dcb8d8e04d5e8506ca793c023085f0e019fc44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1932a518fb9c7c514939c9efdb9f5ef9b52fa277bf27931a2fe29f04c486141\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e74a443b63735422bd0f5ad851213f393074c4bda570e41ff102c84aa9ac2929\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 22:46:20.218565 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 22:46:20.218864 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 22:46:20.220590 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3068580495/tls.crt::/tmp/serving-cert-3068580495/tls.key\\\\\\\"\\\\nI0216 22:46:20.677081 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 22:46:20.691270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 22:46:20.691373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 22:46:20.698439 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 22:46:20.698472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 22:46:20.713488 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 22:46:20.713510 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 22:46:20.713535 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 22:46:20.713543 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 22:46:20.713547 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 22:46:20.713551 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 22:46:20.713555 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 22:46:20.713558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 22:46:20.715166 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://083e95882332fbcae3b7e3c07a19c05645e560b7c5e92813464a5ca4e5e52a26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e53d3f26f921be607e616b7c8790e3be0605a2b94a3f7be2b62e1beaefb350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3e53d3f26f921be607e616b7c8790e3be0605a2b94a3f7be2b62e1beaefb350\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:48Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:48 crc kubenswrapper[4865]: I0216 22:46:48.180655 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:48 crc kubenswrapper[4865]: I0216 22:46:48.180689 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:48 crc kubenswrapper[4865]: I0216 22:46:48.180701 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:48 crc kubenswrapper[4865]: I0216 22:46:48.180720 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:48 crc kubenswrapper[4865]: I0216 22:46:48.180734 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:48Z","lastTransitionTime":"2026-02-16T22:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:48 crc kubenswrapper[4865]: I0216 22:46:48.183576 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:48Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:48 crc kubenswrapper[4865]: I0216 22:46:48.283856 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:48 crc kubenswrapper[4865]: I0216 22:46:48.283950 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:48 crc kubenswrapper[4865]: I0216 22:46:48.284005 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:48 crc kubenswrapper[4865]: I0216 22:46:48.284048 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:48 crc kubenswrapper[4865]: I0216 22:46:48.284075 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:48Z","lastTransitionTime":"2026-02-16T22:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:46:48 crc kubenswrapper[4865]: I0216 22:46:48.387095 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:48 crc kubenswrapper[4865]: I0216 22:46:48.387153 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:48 crc kubenswrapper[4865]: I0216 22:46:48.387172 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:48 crc kubenswrapper[4865]: I0216 22:46:48.387201 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:48 crc kubenswrapper[4865]: I0216 22:46:48.387218 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:48Z","lastTransitionTime":"2026-02-16T22:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:46:48 crc kubenswrapper[4865]: I0216 22:46:48.403132 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 05:08:33.079276957 +0000 UTC Feb 16 22:46:48 crc kubenswrapper[4865]: I0216 22:46:48.413543 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 22:46:48 crc kubenswrapper[4865]: I0216 22:46:48.413647 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 22:46:48 crc kubenswrapper[4865]: I0216 22:46:48.413548 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 22:46:48 crc kubenswrapper[4865]: E0216 22:46:48.413755 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 22:46:48 crc kubenswrapper[4865]: I0216 22:46:48.413656 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ggbcr" Feb 16 22:46:48 crc kubenswrapper[4865]: E0216 22:46:48.413888 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 22:46:48 crc kubenswrapper[4865]: E0216 22:46:48.414007 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 22:46:48 crc kubenswrapper[4865]: E0216 22:46:48.414146 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ggbcr" podUID="0e0ca52e-7cb6-4d90-8d0b-4124cce13447" Feb 16 22:46:48 crc kubenswrapper[4865]: I0216 22:46:48.490389 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:48 crc kubenswrapper[4865]: I0216 22:46:48.490459 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:48 crc kubenswrapper[4865]: I0216 22:46:48.490478 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:48 crc kubenswrapper[4865]: I0216 22:46:48.490506 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:48 crc kubenswrapper[4865]: I0216 22:46:48.490525 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:48Z","lastTransitionTime":"2026-02-16T22:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:48 crc kubenswrapper[4865]: I0216 22:46:48.593760 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:48 crc kubenswrapper[4865]: I0216 22:46:48.593813 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:48 crc kubenswrapper[4865]: I0216 22:46:48.593825 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:48 crc kubenswrapper[4865]: I0216 22:46:48.593845 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:48 crc kubenswrapper[4865]: I0216 22:46:48.593858 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:48Z","lastTransitionTime":"2026-02-16T22:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:48 crc kubenswrapper[4865]: I0216 22:46:48.697496 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:48 crc kubenswrapper[4865]: I0216 22:46:48.697568 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:48 crc kubenswrapper[4865]: I0216 22:46:48.697588 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:48 crc kubenswrapper[4865]: I0216 22:46:48.697621 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:48 crc kubenswrapper[4865]: I0216 22:46:48.697647 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:48Z","lastTransitionTime":"2026-02-16T22:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:48 crc kubenswrapper[4865]: I0216 22:46:48.800817 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:48 crc kubenswrapper[4865]: I0216 22:46:48.800899 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:48 crc kubenswrapper[4865]: I0216 22:46:48.800921 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:48 crc kubenswrapper[4865]: I0216 22:46:48.800953 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:48 crc kubenswrapper[4865]: I0216 22:46:48.800974 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:48Z","lastTransitionTime":"2026-02-16T22:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:48 crc kubenswrapper[4865]: I0216 22:46:48.872851 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v9gjl_2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9/ovnkube-controller/2.log" Feb 16 22:46:48 crc kubenswrapper[4865]: I0216 22:46:48.874082 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v9gjl_2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9/ovnkube-controller/1.log" Feb 16 22:46:48 crc kubenswrapper[4865]: I0216 22:46:48.879200 4865 generic.go:334] "Generic (PLEG): container finished" podID="2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" containerID="91a79105df66a02db4d12994894da106dc07acf1fd46d19a8bd6506533d047c1" exitCode=1 Feb 16 22:46:48 crc kubenswrapper[4865]: I0216 22:46:48.879334 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" event={"ID":"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9","Type":"ContainerDied","Data":"91a79105df66a02db4d12994894da106dc07acf1fd46d19a8bd6506533d047c1"} Feb 16 22:46:48 crc kubenswrapper[4865]: I0216 22:46:48.879422 4865 scope.go:117] "RemoveContainer" containerID="0b27cdcf9ef2a0fd1f2d90bebc43dfdeb6c8a74de346c0f9d18731cbb2748e93" Feb 16 22:46:48 crc kubenswrapper[4865]: I0216 22:46:48.880659 4865 scope.go:117] "RemoveContainer" containerID="91a79105df66a02db4d12994894da106dc07acf1fd46d19a8bd6506533d047c1" Feb 16 22:46:48 crc kubenswrapper[4865]: E0216 22:46:48.881019 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-v9gjl_openshift-ovn-kubernetes(2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9)\"" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" podUID="2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" Feb 16 22:46:48 crc kubenswrapper[4865]: I0216 22:46:48.902849 4865 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5ed4f9f-4e35-4b99-a523-d9cd6f90b1fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c3c8e1511bfaab96c1e33d3e1d1dc227344cb92c64170dc40eef40341f9282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef4fbb630ee354131544dade7633424757a1a1b885fec2e271da25984dbec3ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1
220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f74aeafd4eb42c1fa802e03dd3df19fc25c3e5546e453008c167955d336ca2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca218b91fbbe3033c021358958b45efb93f3c1875a2d18446d682ad80ff55122\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:48Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:48 crc kubenswrapper[4865]: I0216 22:46:48.903609 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:48 crc kubenswrapper[4865]: I0216 22:46:48.903662 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:48 crc kubenswrapper[4865]: I0216 22:46:48.903680 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:48 crc kubenswrapper[4865]: I0216 22:46:48.903708 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:48 crc kubenswrapper[4865]: I0216 22:46:48.903727 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:48Z","lastTransitionTime":"2026-02-16T22:46:48Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:46:48 crc kubenswrapper[4865]: I0216 22:46:48.924715 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5d9c6929ae494c038ff488c2fee47aad62d31a677069fab576505bc1c56c3ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd10e8581ae8a5ec147d964363cf296c1ea48987e27edfaaf23a4f2cc3b8806c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:48Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:48 crc kubenswrapper[4865]: I0216 22:46:48.942347 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p2bwl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fc94743-41ce-4311-b0a7-d24aec69e9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afb2eef5907b2fb1f43b59f3b8afb68253d49f3a83f0cdc5cd10df0e45962156\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mntqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p2bwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:48Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:48 crc kubenswrapper[4865]: I0216 22:46:48.959848 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67de42bb03576bab461ec24db86f0fb38229d2b303d174c1892468bcfd20a1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T2
2:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:48Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:48 crc kubenswrapper[4865]: I0216 22:46:48.978104 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af5ee041-5763-4a28-9d12-7ba21bbb9dbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5377362d269a2fbd24e93ddf0ce2803daadff72dd5b6926424e2d818448eb907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c0be88c4425b343893860e323d0e9c1a661e3cd
0cadad6714da67ebadf57bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7sl6f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:48Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:49 crc kubenswrapper[4865]: I0216 22:46:49.003267 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4rgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0091174cc072ed0bc0bfa1a1501b4fa2b801585525dae159b591213c74ddbf5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240fca5a6169674d04d04924dbb1859d9cfad4ebaf3bcb15b685d1dcb52d6879\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://240fca5a6169674d04d04924dbb1859d9cfad4ebaf3bcb15b685d1dcb52d6879\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba2a6ff3b821f9bd20c15af6000c4b04b7382cca4bddba49c3cd3a95bf18661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ba2a6ff3b821f9bd20c15af6000c4b04b7382cca4bddba49c3cd3a95bf18661\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e050e591eac20bf9ccc57f3c4eac82e3a420e9e850ad4057ce69c236a51dd5ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e050e591eac20bf9ccc57f3c4eac82e3a420e9e850ad4057ce69c236a51dd5ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df6f
bcf30225717389e253f415a98dcb258103cf591c6802daeb657018386ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1df6fbcf30225717389e253f415a98dcb258103cf591c6802daeb657018386ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9e249b0c4127dfa42d0aeb4663cec7ecc73cec31a922fc17a0c772cce72c017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9e249b0c4127dfa42d0aeb4663cec7ecc73cec31a922fc17a0c772cce72c017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de26abfeb26481d43117471fa3512ef69ab782712371782639c24c8232786478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de26abfeb26481d43117471fa3512ef69ab782712371782639c24c8232786478\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4rgl\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:49Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:49 crc kubenswrapper[4865]: I0216 22:46:49.007209 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:49 crc kubenswrapper[4865]: I0216 22:46:49.007504 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:49 crc kubenswrapper[4865]: I0216 22:46:49.007536 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:49 crc kubenswrapper[4865]: I0216 22:46:49.007573 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:49 crc kubenswrapper[4865]: I0216 22:46:49.007597 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:49Z","lastTransitionTime":"2026-02-16T22:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:49 crc kubenswrapper[4865]: I0216 22:46:49.023957 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:49Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:49 crc kubenswrapper[4865]: I0216 22:46:49.039369 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rhf5k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec3dce5a-cc80-4a4a-afa1-1ce88585fe7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af23c5919f064a067e1f6ffd9bcfc669c95f789fecc58b6de768c838fff8eeb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42pjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rhf5k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:49Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:49 crc kubenswrapper[4865]: I0216 22:46:49.070790 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f763f86f21e0ae26d6da8694383bf65bf9d2ee2cc3f6331559455f18c1a5ea12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdf0a2e5286d172d17780b0ba829a7b1365b3fcb130069d7497b207041e54d67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1d5f61857de97cd302040f99ba0cf8f6aa30fa344ee9bb5d486222bb59280f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48168de44ee347280252a85da7abba068922ce7558bd01c8e1d11af13f305dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fbb61043059110013736d3d693e6d602b199a4c62907436a4eb88aae192cd51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://decf362b373d3e8cc82caf2afe0405a61ec72f794680345ee6bd4d7acee32cd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91a79105df66a02db4d12994894da106dc07acf1fd46d19a8bd6506533d047c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b27cdcf9ef2a0fd1f2d90bebc43dfdeb6c8a74de346c0f9d18731cbb2748e93\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T22:46:31Z\\\",\\\"message\\\":\\\"b79af60206a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 22:46:31.878240 6289 services_controller.go:453] Built service openshift-marketplace/redhat-marketplace template LB for network=default: []services.LB{}\\\\nF0216 22:46:31.878297 6289 ovnkube.go:137] failed to run 
ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:31Z is after 2025-08-24T17:21:41Z]\\\\nI0216 22:46:31.878302 6289 services_controller.go:452] Built service openshift-cluster-version/cluster-version-operator per-node LB for network=default: []services.LB{}\\\\nI0216 22:46:31.878304 6289 services_controller.go:454] Service openshift-marketplace/redhat-marketplace for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (te\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91a79105df66a02db4d12994894da106dc07acf1fd46d19a8bd6506533d047c1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T22:46:48Z\\\",\\\"message\\\":\\\"sip/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 22:46:48.478194 6528 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 22:46:48.478230 6528 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 22:46:48.478257 6528 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 22:46:48.478372 6528 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 22:46:48.478404 6528 handler.go:208] Removed 
*v1.Namespace event handler 5\\\\nI0216 22:46:48.478405 6528 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 22:46:48.478434 6528 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 22:46:48.478457 6528 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 22:46:48.478467 6528 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 22:46:48.478500 6528 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 22:46:48.478537 6528 factory.go:656] Stopping watch factory\\\\nI0216 22:46:48.478556 6528 ovnkube.go:599] Stopped ovnkube\\\\nI0216 22:46:48.478584 6528 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 22:46:48.478599 6528 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0216 22:46:48.478625 6528 handler.go:208] Removed *v1.Node event ha\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni
/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dee85f35d78a0b30018c179a53dd39cbb1b0edfa9897290cd13fdc9680461239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fffded71eee2859beb8f01894e183602492f5c41373ef0e23ee150159d01ae4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fffded71eee2859beb8f01894e183602492f5c41373ef0e23ee150159d01ae4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v9gjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:49Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:49 crc kubenswrapper[4865]: I0216 22:46:49.091760 4865 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8eb70ba-80dc-4449-b629-8f2d3462ac25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9420ab4fc2cfb723f98690b12676378560f2ca512ea2f23ea4108561bcf0a83b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-po
d-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d5f3b449d7f497dd16213f7a376d725ed75adaeb27449fa2005893b85cd8f9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ddcd1ac2d814341fef78f1d3dcb8d8e04d5e8506ca793c023085f0e019fc44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1932a518fb9c7c514939c9efdb9f5ef9b52fa277bf27931a2fe29f04c486141\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"ima
geID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e74a443b63735422bd0f5ad851213f393074c4bda570e41ff102c84aa9ac2929\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 22:46:20.218565 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 22:46:20.218864 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 22:46:20.220590 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3068580495/tls.crt::/tmp/serving-cert-3068580495/tls.key\\\\\\\"\\\\nI0216 22:46:20.677081 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 22:46:20.691270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 22:46:20.691373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 22:46:20.698439 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 22:46:20.698472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 22:46:20.713488 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 22:46:20.713510 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 22:46:20.713535 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 22:46:20.713543 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 22:46:20.713547 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 22:46:20.713551 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 22:46:20.713555 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 22:46:20.713558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 22:46:20.715166 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://083e95882332fbcae3b7e3c07a19c05645e560b7c5e92813464a5ca4e5e52a26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e53d3f26f921be607e616b7c8790e3be0605a2b94a3f7be2b62e1beaefb350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de25
97126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3e53d3f26f921be607e616b7c8790e3be0605a2b94a3f7be2b62e1beaefb350\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:49Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:49 crc kubenswrapper[4865]: I0216 22:46:49.107725 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:49Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:49 crc kubenswrapper[4865]: I0216 22:46:49.110932 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:49 crc kubenswrapper[4865]: I0216 
22:46:49.111091 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:49 crc kubenswrapper[4865]: I0216 22:46:49.111117 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:49 crc kubenswrapper[4865]: I0216 22:46:49.111159 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:49 crc kubenswrapper[4865]: I0216 22:46:49.111179 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:49Z","lastTransitionTime":"2026-02-16T22:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:46:49 crc kubenswrapper[4865]: I0216 22:46:49.135153 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tqmsq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"518e6107-6873-4bd2-86a6-e422763483ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c555b648518de93a1b9ecb1bb8b232aee9016dd148db53b2b4aaf96dd23951c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwt79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tqmsq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:49Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:49 crc kubenswrapper[4865]: I0216 22:46:49.151196 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-shf2n" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d979a250-e586-4f45-b78e-ce99dbdbe9a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c13351b5aadf20e7142d197d9ae8333bff29ca6c3f7668c93f0dd510b52eb51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df29cc453
707021c661a9f53e979fcab4c05a00e4dcef9f85a727168f1e78ed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-shf2n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:49Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:49 crc kubenswrapper[4865]: I0216 22:46:49.165172 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ggbcr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e0ca52e-7cb6-4d90-8d0b-4124cce13447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt54\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt54\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ggbcr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:49Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:49 crc 
kubenswrapper[4865]: I0216 22:46:49.182396 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://580bc5f5c63d040c31c80d1c34c70711c035b80d8bd34546ba77e2477a938304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:49Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:49 crc kubenswrapper[4865]: I0216 22:46:49.196850 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:49Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:49 crc kubenswrapper[4865]: I0216 22:46:49.224154 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:49 crc kubenswrapper[4865]: I0216 22:46:49.224207 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:49 crc kubenswrapper[4865]: I0216 22:46:49.224219 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:49 crc kubenswrapper[4865]: I0216 22:46:49.224241 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:49 crc kubenswrapper[4865]: I0216 22:46:49.224256 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:49Z","lastTransitionTime":"2026-02-16T22:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:46:49 crc kubenswrapper[4865]: I0216 22:46:49.327374 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:49 crc kubenswrapper[4865]: I0216 22:46:49.327657 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:49 crc kubenswrapper[4865]: I0216 22:46:49.327745 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:49 crc kubenswrapper[4865]: I0216 22:46:49.327840 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:49 crc kubenswrapper[4865]: I0216 22:46:49.327936 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:49Z","lastTransitionTime":"2026-02-16T22:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:49 crc kubenswrapper[4865]: I0216 22:46:49.404051 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 17:33:56.594079169 +0000 UTC Feb 16 22:46:49 crc kubenswrapper[4865]: I0216 22:46:49.431040 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:49 crc kubenswrapper[4865]: I0216 22:46:49.431139 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:49 crc kubenswrapper[4865]: I0216 22:46:49.431165 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:49 crc kubenswrapper[4865]: I0216 22:46:49.431199 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:49 crc kubenswrapper[4865]: I0216 22:46:49.431224 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:49Z","lastTransitionTime":"2026-02-16T22:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:49 crc kubenswrapper[4865]: I0216 22:46:49.534146 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:49 crc kubenswrapper[4865]: I0216 22:46:49.534192 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:49 crc kubenswrapper[4865]: I0216 22:46:49.534211 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:49 crc kubenswrapper[4865]: I0216 22:46:49.534244 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:49 crc kubenswrapper[4865]: I0216 22:46:49.534262 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:49Z","lastTransitionTime":"2026-02-16T22:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:49 crc kubenswrapper[4865]: I0216 22:46:49.638083 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:49 crc kubenswrapper[4865]: I0216 22:46:49.638161 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:49 crc kubenswrapper[4865]: I0216 22:46:49.638181 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:49 crc kubenswrapper[4865]: I0216 22:46:49.638213 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:49 crc kubenswrapper[4865]: I0216 22:46:49.638237 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:49Z","lastTransitionTime":"2026-02-16T22:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:49 crc kubenswrapper[4865]: I0216 22:46:49.741861 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:49 crc kubenswrapper[4865]: I0216 22:46:49.741932 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:49 crc kubenswrapper[4865]: I0216 22:46:49.741949 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:49 crc kubenswrapper[4865]: I0216 22:46:49.741978 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:49 crc kubenswrapper[4865]: I0216 22:46:49.741996 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:49Z","lastTransitionTime":"2026-02-16T22:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:49 crc kubenswrapper[4865]: I0216 22:46:49.846637 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:49 crc kubenswrapper[4865]: I0216 22:46:49.846727 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:49 crc kubenswrapper[4865]: I0216 22:46:49.846752 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:49 crc kubenswrapper[4865]: I0216 22:46:49.846785 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:49 crc kubenswrapper[4865]: I0216 22:46:49.846807 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:49Z","lastTransitionTime":"2026-02-16T22:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:49 crc kubenswrapper[4865]: I0216 22:46:49.886602 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v9gjl_2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9/ovnkube-controller/2.log" Feb 16 22:46:49 crc kubenswrapper[4865]: I0216 22:46:49.891825 4865 scope.go:117] "RemoveContainer" containerID="91a79105df66a02db4d12994894da106dc07acf1fd46d19a8bd6506533d047c1" Feb 16 22:46:49 crc kubenswrapper[4865]: E0216 22:46:49.892092 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-v9gjl_openshift-ovn-kubernetes(2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9)\"" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" podUID="2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" Feb 16 22:46:49 crc kubenswrapper[4865]: I0216 22:46:49.911369 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67de42bb03576bab461ec24db86f0fb38229d2b303d174c1892468bcfd20a1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T22:46:49Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:49 crc kubenswrapper[4865]: I0216 22:46:49.928168 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af5ee041-5763-4a28-9d12-7ba21bbb9dbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5377362d269a2fbd24e93ddf0ce2803daadff72dd5b6926424e2d818448eb907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c0be88c4425b343893860e323d0e9c1a661e3cd0cadad6714da67ebadf57bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7sl6f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:49Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:49 crc kubenswrapper[4865]: I0216 22:46:49.950960 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4rgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0091174cc072ed0bc0bfa1a1501b4fa2b801585525dae159b591213c74ddbf5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240fca5a6169674d04d04924dbb1859d9cfad4ebaf3bcb15b685d1dcb52d6879\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://240fca5a6169674d04d04924dbb1859d9cfad4ebaf3bcb15b685d1dcb52d6879\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba2a6ff3b821f9bd20c15af6000c4b04b7382cca4bddba49c3cd3a95bf18661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ba2a6ff3b821f9bd20c15af6000c4b04b7382cca4bddba49c3cd3a95bf18661\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e050e591eac20bf9ccc57f3c4eac82e3a420e9e850ad4057ce69c236a51dd5ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e050e591eac20bf9ccc57f3c4eac82e3a420e9e850ad4057ce69c236a51dd5ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df6f
bcf30225717389e253f415a98dcb258103cf591c6802daeb657018386ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1df6fbcf30225717389e253f415a98dcb258103cf591c6802daeb657018386ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9e249b0c4127dfa42d0aeb4663cec7ecc73cec31a922fc17a0c772cce72c017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9e249b0c4127dfa42d0aeb4663cec7ecc73cec31a922fc17a0c772cce72c017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de26abfeb26481d43117471fa3512ef69ab782712371782639c24c8232786478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de26abfeb26481d43117471fa3512ef69ab782712371782639c24c8232786478\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4rgl\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:49Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:49 crc kubenswrapper[4865]: I0216 22:46:49.951357 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:49 crc kubenswrapper[4865]: I0216 22:46:49.951440 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:49 crc kubenswrapper[4865]: I0216 22:46:49.951497 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:49 crc kubenswrapper[4865]: I0216 22:46:49.951529 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:49 crc kubenswrapper[4865]: I0216 22:46:49.951548 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:49Z","lastTransitionTime":"2026-02-16T22:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:49 crc kubenswrapper[4865]: I0216 22:46:49.973477 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8eb70ba-80dc-4449-b629-8f2d3462ac25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9420ab4fc2cfb723f98690b12676378560f2ca512ea2f23ea4108561bcf0a83b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d5f3b449d7f497dd16213f7a376d725ed75adaeb27449fa2005893b85cd8f9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c7ddcd1ac2d814341fef78f1d3dcb8d8e04d5e8506ca793c023085f0e019fc44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1932a518fb9c7c514939c9efdb9f5ef9b52fa277bf27931a2fe29f04c486141\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e74a443b63735422bd0f5ad851213f393074c4bda570e41ff102c84aa9ac2929\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 22:46:20.218565 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 22:46:20.218864 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 22:46:20.220590 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3068580495/tls.crt::/tmp/serving-cert-3068580495/tls.key\\\\\\\"\\\\nI0216 22:46:20.677081 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 22:46:20.691270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 22:46:20.691373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 22:46:20.698439 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 22:46:20.698472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 22:46:20.713488 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 22:46:20.713510 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 22:46:20.713535 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 22:46:20.713543 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 22:46:20.713547 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 22:46:20.713551 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 22:46:20.713555 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 22:46:20.713558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 22:46:20.715166 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://083e95882332fbcae3b7e3c07a19c05645e560b7c5e92813464a5ca4e5e52a26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e53d3f26f921be607e616b7c8790e3be0605a2b94a3f7be2b62e1beaefb350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3e53d3f26f921be607e616b7c8790e3be0605a2b94a3f7be2b62e1beaefb350\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:49Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:49 crc kubenswrapper[4865]: I0216 22:46:49.992910 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:49Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:50 crc kubenswrapper[4865]: I0216 22:46:50.008949 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rhf5k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec3dce5a-cc80-4a4a-afa1-1ce88585fe7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af23c5919f064a067e1f6ffd9bcfc669c95f789fecc58b6de768c838fff8eeb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42pjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rhf5k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:50Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:50 crc kubenswrapper[4865]: I0216 22:46:50.038133 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f763f86f21e0ae26d6da8694383bf65bf9d2ee2cc3f6331559455f18c1a5ea12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdf0a2e5286d172d17780b0ba829a7b1365b3fcb130069d7497b207041e54d67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1d5f61857de97cd302040f99ba0cf8f6aa30fa344ee9bb5d486222bb59280f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48168de44ee347280252a85da7abba068922ce7558bd01c8e1d11af13f305dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fbb61043059110013736d3d693e6d602b199a4c62907436a4eb88aae192cd51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://decf362b373d3e8cc82caf2afe0405a61ec72f794680345ee6bd4d7acee32cd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91a79105df66a02db4d12994894da106dc07acf1fd46d19a8bd6506533d047c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91a79105df66a02db4d12994894da106dc07acf1fd46d19a8bd6506533d047c1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T22:46:48Z\\\",\\\"message\\\":\\\"sip/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 22:46:48.478194 6528 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 22:46:48.478230 6528 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 22:46:48.478257 6528 handler.go:190] Sending *v1.EgressFirewall event handler 9 for 
removal\\\\nI0216 22:46:48.478372 6528 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 22:46:48.478404 6528 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 22:46:48.478405 6528 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 22:46:48.478434 6528 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 22:46:48.478457 6528 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 22:46:48.478467 6528 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 22:46:48.478500 6528 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 22:46:48.478537 6528 factory.go:656] Stopping watch factory\\\\nI0216 22:46:48.478556 6528 ovnkube.go:599] Stopped ovnkube\\\\nI0216 22:46:48.478584 6528 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 22:46:48.478599 6528 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0216 22:46:48.478625 6528 handler.go:208] Removed *v1.Node event ha\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-v9gjl_openshift-ovn-kubernetes(2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dee85f35d78a0b30018c179a53dd39cbb1b0edfa9897290cd13fdc9680461239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fffded71eee2859beb8f01894e183602492f5c41373ef0e23ee150159d01ae4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fffded71eee2859be
b8f01894e183602492f5c41373ef0e23ee150159d01ae4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v9gjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:50Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:50 crc kubenswrapper[4865]: I0216 22:46:50.055443 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:50 crc kubenswrapper[4865]: I0216 22:46:50.055516 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:50 crc kubenswrapper[4865]: I0216 22:46:50.055542 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:50 crc kubenswrapper[4865]: I0216 22:46:50.055577 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:50 crc kubenswrapper[4865]: I0216 22:46:50.055601 4865 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:50Z","lastTransitionTime":"2026-02-16T22:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:46:50 crc kubenswrapper[4865]: I0216 22:46:50.056862 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ggbcr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e0ca52e-7cb6-4d90-8d0b-4124cce13447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt54\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt54\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ggbcr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:50Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:50 crc 
kubenswrapper[4865]: I0216 22:46:50.075321 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://580bc5f5c63d040c31c80d1c34c70711c035b80d8bd34546ba77e2477a938304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:50Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:50 crc kubenswrapper[4865]: I0216 22:46:50.093534 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:50Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:50 crc kubenswrapper[4865]: I0216 22:46:50.111471 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:50Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:50 crc kubenswrapper[4865]: I0216 22:46:50.131482 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tqmsq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"518e6107-6873-4bd2-86a6-e422763483ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c555b648518de93a1b9ecb1bb8b232aee9016dd148db53b2b4aaf96dd23951c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwt79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tqmsq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:50Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:50 crc kubenswrapper[4865]: I0216 22:46:50.148855 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-shf2n" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d979a250-e586-4f45-b78e-ce99dbdbe9a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c13351b5aadf20e7142d197d9ae8333bff29ca6c3f7668c93f0dd510b52eb51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df29cc453
707021c661a9f53e979fcab4c05a00e4dcef9f85a727168f1e78ed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-shf2n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:50Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:50 crc kubenswrapper[4865]: I0216 22:46:50.158753 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:50 crc kubenswrapper[4865]: I0216 22:46:50.158828 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:50 crc kubenswrapper[4865]: I0216 22:46:50.158850 4865 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:50 crc kubenswrapper[4865]: I0216 22:46:50.158885 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:50 crc kubenswrapper[4865]: I0216 22:46:50.158910 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:50Z","lastTransitionTime":"2026-02-16T22:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:46:50 crc kubenswrapper[4865]: I0216 22:46:50.169447 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5ed4f9f-4e35-4b99-a523-d9cd6f90b1fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c3c8e1511bfaab96c1e33d3e1d1dc227344cb92c64170dc40eef40341f9282\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef4fbb630ee354131544dade7633424757a1a1b885fec2e271da25984dbec3ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f74aeafd4eb42c1fa802e03dd3df19fc25c3e5546e453008c167955d336ca2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-sy
ncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca218b91fbbe3033c021358958b45efb93f3c1875a2d18446d682ad80ff55122\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:50Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:50 crc kubenswrapper[4865]: I0216 22:46:50.188774 4865 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5d9c6929ae494c038ff488c2fee47aad62d31a677069fab576505bc1c56c3ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd10e8581ae8a5ec147d964363cf296c1ea48987e27edfaaf23a4f2cc3b8806c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa
41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:50Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:50 crc kubenswrapper[4865]: I0216 22:46:50.204184 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p2bwl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fc94743-41ce-4311-b0a7-d24aec69e9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afb2eef5907b2fb1f43b59f3b8afb68253d49f3a83f0cdc5cd10df0e45962156\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mntqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p2bwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:50Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:50 crc kubenswrapper[4865]: I0216 22:46:50.263851 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:50 crc kubenswrapper[4865]: I0216 22:46:50.263906 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:50 crc kubenswrapper[4865]: I0216 22:46:50.263924 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:50 crc kubenswrapper[4865]: I0216 22:46:50.263951 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:50 crc kubenswrapper[4865]: I0216 22:46:50.263969 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:50Z","lastTransitionTime":"2026-02-16T22:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:50 crc kubenswrapper[4865]: I0216 22:46:50.366852 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:50 crc kubenswrapper[4865]: I0216 22:46:50.367742 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:50 crc kubenswrapper[4865]: I0216 22:46:50.368107 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:50 crc kubenswrapper[4865]: I0216 22:46:50.368342 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:50 crc kubenswrapper[4865]: I0216 22:46:50.368588 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:50Z","lastTransitionTime":"2026-02-16T22:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:46:50 crc kubenswrapper[4865]: I0216 22:46:50.404920 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 11:35:50.456694701 +0000 UTC Feb 16 22:46:50 crc kubenswrapper[4865]: I0216 22:46:50.414343 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 22:46:50 crc kubenswrapper[4865]: E0216 22:46:50.414522 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 22:46:50 crc kubenswrapper[4865]: I0216 22:46:50.414720 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 22:46:50 crc kubenswrapper[4865]: I0216 22:46:50.414926 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 22:46:50 crc kubenswrapper[4865]: I0216 22:46:50.415441 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ggbcr" Feb 16 22:46:50 crc kubenswrapper[4865]: E0216 22:46:50.415440 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 22:46:50 crc kubenswrapper[4865]: E0216 22:46:50.415588 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 22:46:50 crc kubenswrapper[4865]: E0216 22:46:50.415681 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ggbcr" podUID="0e0ca52e-7cb6-4d90-8d0b-4124cce13447" Feb 16 22:46:50 crc kubenswrapper[4865]: I0216 22:46:50.444588 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8eb70ba-80dc-4449-b629-8f2d3462ac25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9420ab4fc2cfb723f98690b12676378560f2ca512ea2f23ea4108561bcf0a83b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d5f3b449d7f497dd16213f7a376d725ed75adaeb27449fa2005893b85cd8f9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c7ddcd1ac2d814341fef78f1d3dcb8d8e04d5e8506ca793c023085f0e019fc44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1932a518fb9c7c514939c9efdb9f5ef9b52fa277bf27931a2fe29f04c486141\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e74a443b63735422bd0f5ad851213f393074c4bda570e41ff102c84aa9ac2929\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 22:46:20.218565 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 22:46:20.218864 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 22:46:20.220590 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3068580495/tls.crt::/tmp/serving-cert-3068580495/tls.key\\\\\\\"\\\\nI0216 22:46:20.677081 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 22:46:20.691270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 22:46:20.691373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 22:46:20.698439 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 22:46:20.698472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 22:46:20.713488 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 22:46:20.713510 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 22:46:20.713535 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 22:46:20.713543 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 22:46:20.713547 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 22:46:20.713551 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 22:46:20.713555 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 22:46:20.713558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 22:46:20.715166 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://083e95882332fbcae3b7e3c07a19c05645e560b7c5e92813464a5ca4e5e52a26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e53d3f26f921be607e616b7c8790e3be0605a2b94a3f7be2b62e1beaefb350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3e53d3f26f921be607e616b7c8790e3be0605a2b94a3f7be2b62e1beaefb350\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:50Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:50 crc kubenswrapper[4865]: I0216 22:46:50.467562 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:50Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:50 crc kubenswrapper[4865]: I0216 22:46:50.471402 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:50 crc kubenswrapper[4865]: I0216 22:46:50.471455 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:50 crc kubenswrapper[4865]: I0216 22:46:50.471475 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:50 crc kubenswrapper[4865]: I0216 
22:46:50.471501 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:50 crc kubenswrapper[4865]: I0216 22:46:50.471519 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:50Z","lastTransitionTime":"2026-02-16T22:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:46:50 crc kubenswrapper[4865]: I0216 22:46:50.484753 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rhf5k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec3dce5a-cc80-4a4a-afa1-1ce88585fe7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af23c5919f064a067e1f6ffd9bcfc669c95f789fecc58b6de768c838fff8eeb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888c
f2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42pjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rhf5k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:50Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:50 crc kubenswrapper[4865]: I0216 22:46:50.521308 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f763f86f21e0ae26d6da8694383bf65bf9d2ee2cc3f6331559455f18c1a5ea12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdf0a2e5286d172d17780b0ba829a7b1365b3fcb130069d7497b207041e54d67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1d5f61857de97cd302040f99ba0cf8f6aa30fa344ee9bb5d486222bb59280f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48168de44ee347280252a85da7abba068922ce7558bd01c8e1d11af13f305dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fbb61043059110013736d3d693e6d602b199a4c62907436a4eb88aae192cd51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://decf362b373d3e8cc82caf2afe0405a61ec72f794680345ee6bd4d7acee32cd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91a79105df66a02db4d12994894da106dc07acf1fd46d19a8bd6506533d047c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91a79105df66a02db4d12994894da106dc07acf1fd46d19a8bd6506533d047c1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T22:46:48Z\\\",\\\"message\\\":\\\"sip/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 22:46:48.478194 6528 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 22:46:48.478230 6528 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 22:46:48.478257 6528 handler.go:190] Sending *v1.EgressFirewall event handler 9 for 
removal\\\\nI0216 22:46:48.478372 6528 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 22:46:48.478404 6528 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 22:46:48.478405 6528 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 22:46:48.478434 6528 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 22:46:48.478457 6528 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 22:46:48.478467 6528 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 22:46:48.478500 6528 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 22:46:48.478537 6528 factory.go:656] Stopping watch factory\\\\nI0216 22:46:48.478556 6528 ovnkube.go:599] Stopped ovnkube\\\\nI0216 22:46:48.478584 6528 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 22:46:48.478599 6528 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0216 22:46:48.478625 6528 handler.go:208] Removed *v1.Node event ha\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-v9gjl_openshift-ovn-kubernetes(2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dee85f35d78a0b30018c179a53dd39cbb1b0edfa9897290cd13fdc9680461239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fffded71eee2859beb8f01894e183602492f5c41373ef0e23ee150159d01ae4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fffded71eee2859be
b8f01894e183602492f5c41373ef0e23ee150159d01ae4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v9gjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:50Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:50 crc kubenswrapper[4865]: I0216 22:46:50.546514 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://580bc5f5c63d040c31c80d1c34c70711c035b80d8bd34546ba77e2477a938304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:50Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:50 crc kubenswrapper[4865]: I0216 22:46:50.562058 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:50Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:50 crc kubenswrapper[4865]: I0216 22:46:50.574878 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:50 crc kubenswrapper[4865]: I0216 22:46:50.574924 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:50 crc kubenswrapper[4865]: I0216 22:46:50.574942 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:50 crc kubenswrapper[4865]: I0216 22:46:50.574969 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:50 crc kubenswrapper[4865]: I0216 22:46:50.574986 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:50Z","lastTransitionTime":"2026-02-16T22:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:46:50 crc kubenswrapper[4865]: I0216 22:46:50.575547 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:50Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:50 crc kubenswrapper[4865]: I0216 22:46:50.596524 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tqmsq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"518e6107-6873-4bd2-86a6-e422763483ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c555b648518de93a1b9ecb1bb8b232aee9016dd148db53b2b4aaf96dd23951c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwt79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tqmsq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:50Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:50 crc kubenswrapper[4865]: I0216 22:46:50.600927 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/0e0ca52e-7cb6-4d90-8d0b-4124cce13447-metrics-certs\") pod \"network-metrics-daemon-ggbcr\" (UID: \"0e0ca52e-7cb6-4d90-8d0b-4124cce13447\") " pod="openshift-multus/network-metrics-daemon-ggbcr" Feb 16 22:46:50 crc kubenswrapper[4865]: E0216 22:46:50.601047 4865 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 22:46:50 crc kubenswrapper[4865]: E0216 22:46:50.601097 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0e0ca52e-7cb6-4d90-8d0b-4124cce13447-metrics-certs podName:0e0ca52e-7cb6-4d90-8d0b-4124cce13447 nodeName:}" failed. No retries permitted until 2026-02-16 22:47:06.601080391 +0000 UTC m=+66.924787342 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0e0ca52e-7cb6-4d90-8d0b-4124cce13447-metrics-certs") pod "network-metrics-daemon-ggbcr" (UID: "0e0ca52e-7cb6-4d90-8d0b-4124cce13447") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 22:46:50 crc kubenswrapper[4865]: I0216 22:46:50.616624 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-shf2n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d979a250-e586-4f45-b78e-ce99dbdbe9a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c13351b5aadf20e7142d197d9ae8333bff29ca6c3f7668c93f0dd510b52eb51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df29cc453707021c661a9f53e979fcab4c05a
00e4dcef9f85a727168f1e78ed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-shf2n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:50Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:50 crc kubenswrapper[4865]: I0216 22:46:50.632886 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ggbcr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e0ca52e-7cb6-4d90-8d0b-4124cce13447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt54\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt54\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ggbcr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:50Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:50 crc 
kubenswrapper[4865]: I0216 22:46:50.648127 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5d9c6929ae494c038ff488c2fee47aad62d31a677069fab576505bc1c56c3ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd10e8581ae8a5ec147d964363cf296c1ea48987e27edfaaf23a4f2cc3b8806c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:50Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:50 crc kubenswrapper[4865]: I0216 22:46:50.659854 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p2bwl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fc94743-41ce-4311-b0a7-d24aec69e9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afb2eef5907b2fb1f43b59f3b8afb68253d49f3a83f0cdc5cd10df0e45962156\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mntqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p2bwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:50Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:50 crc kubenswrapper[4865]: I0216 22:46:50.674644 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5ed4f9f-4e35-4b99-a523-d9cd6f90b1fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c3c8e1511bfaab96c1e33d3e1d1dc227344cb92c64170dc40eef40341f9282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef4fbb630ee354131544dade7633424757a1a1b885fec2e271da25984dbec3ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f74aeafd4eb42c1fa802e03dd3df19fc25c3e5546e453008c167955d336ca2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca218b91fbbe3033c021358958b45efb93f3c1875a2d18446d682ad80ff55122\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:50Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:50 crc kubenswrapper[4865]: I0216 22:46:50.678493 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:50 crc kubenswrapper[4865]: I0216 22:46:50.678550 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 16 22:46:50 crc kubenswrapper[4865]: I0216 22:46:50.678570 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:50 crc kubenswrapper[4865]: I0216 22:46:50.678595 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:50 crc kubenswrapper[4865]: I0216 22:46:50.678614 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:50Z","lastTransitionTime":"2026-02-16T22:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:46:50 crc kubenswrapper[4865]: I0216 22:46:50.689097 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af5ee041-5763-4a28-9d12-7ba21bbb9dbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5377362d269a2fbd24e93ddf0ce2803daadff72dd5b6926424e2d818448eb907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c0be88c4425b343893860e323d0e9c1a661e3cd
0cadad6714da67ebadf57bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7sl6f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:50Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:50 crc kubenswrapper[4865]: I0216 22:46:50.707887 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4rgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0091174cc072ed0bc0bfa1a1501b4fa2b801585525dae159b591213c74ddbf5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240fca5a6169674d04d04924dbb1859d9cfad4ebaf3bcb15b685d1dcb52d6879\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://240fca5a6169674d04d04924dbb1859d9cfad4ebaf3bcb15b685d1dcb52d6879\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba2a6ff3b821f9bd20c15af6000c4b04b7382cca4bddba49c3cd3a95bf18661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ba2a6ff3b821f9bd20c15af6000c4b04b7382cca4bddba49c3cd3a95bf18661\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e050e591eac20bf9ccc57f3c4eac82e3a420e9e850ad4057ce69c236a51dd5ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e050e591eac20bf9ccc57f3c4eac82e3a420e9e850ad4057ce69c236a51dd5ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df6f
bcf30225717389e253f415a98dcb258103cf591c6802daeb657018386ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1df6fbcf30225717389e253f415a98dcb258103cf591c6802daeb657018386ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9e249b0c4127dfa42d0aeb4663cec7ecc73cec31a922fc17a0c772cce72c017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9e249b0c4127dfa42d0aeb4663cec7ecc73cec31a922fc17a0c772cce72c017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de26abfeb26481d43117471fa3512ef69ab782712371782639c24c8232786478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de26abfeb26481d43117471fa3512ef69ab782712371782639c24c8232786478\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4rgl\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:50Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:50 crc kubenswrapper[4865]: I0216 22:46:50.724533 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67de42bb03576bab461ec24db86f0fb38229d2b303d174c1892468bcfd20a1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:50Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:50 crc kubenswrapper[4865]: I0216 22:46:50.781134 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:50 crc kubenswrapper[4865]: I0216 22:46:50.781178 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:50 crc kubenswrapper[4865]: I0216 22:46:50.781187 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:50 crc kubenswrapper[4865]: I0216 22:46:50.781203 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:50 crc kubenswrapper[4865]: I0216 22:46:50.781213 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:50Z","lastTransitionTime":"2026-02-16T22:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:50 crc kubenswrapper[4865]: I0216 22:46:50.885059 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:50 crc kubenswrapper[4865]: I0216 22:46:50.885132 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:50 crc kubenswrapper[4865]: I0216 22:46:50.885150 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:50 crc kubenswrapper[4865]: I0216 22:46:50.885179 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:50 crc kubenswrapper[4865]: I0216 22:46:50.885196 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:50Z","lastTransitionTime":"2026-02-16T22:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:50 crc kubenswrapper[4865]: I0216 22:46:50.988095 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:50 crc kubenswrapper[4865]: I0216 22:46:50.988152 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:50 crc kubenswrapper[4865]: I0216 22:46:50.988169 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:50 crc kubenswrapper[4865]: I0216 22:46:50.988194 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:50 crc kubenswrapper[4865]: I0216 22:46:50.988210 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:50Z","lastTransitionTime":"2026-02-16T22:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:51 crc kubenswrapper[4865]: I0216 22:46:51.090879 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:51 crc kubenswrapper[4865]: I0216 22:46:51.090944 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:51 crc kubenswrapper[4865]: I0216 22:46:51.090961 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:51 crc kubenswrapper[4865]: I0216 22:46:51.090994 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:51 crc kubenswrapper[4865]: I0216 22:46:51.091015 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:51Z","lastTransitionTime":"2026-02-16T22:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:51 crc kubenswrapper[4865]: I0216 22:46:51.194009 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:51 crc kubenswrapper[4865]: I0216 22:46:51.194073 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:51 crc kubenswrapper[4865]: I0216 22:46:51.194090 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:51 crc kubenswrapper[4865]: I0216 22:46:51.194114 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:51 crc kubenswrapper[4865]: I0216 22:46:51.194131 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:51Z","lastTransitionTime":"2026-02-16T22:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:51 crc kubenswrapper[4865]: I0216 22:46:51.297106 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:51 crc kubenswrapper[4865]: I0216 22:46:51.297153 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:51 crc kubenswrapper[4865]: I0216 22:46:51.297169 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:51 crc kubenswrapper[4865]: I0216 22:46:51.297194 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:51 crc kubenswrapper[4865]: I0216 22:46:51.297210 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:51Z","lastTransitionTime":"2026-02-16T22:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:51 crc kubenswrapper[4865]: I0216 22:46:51.400443 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:51 crc kubenswrapper[4865]: I0216 22:46:51.400504 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:51 crc kubenswrapper[4865]: I0216 22:46:51.400521 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:51 crc kubenswrapper[4865]: I0216 22:46:51.400548 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:51 crc kubenswrapper[4865]: I0216 22:46:51.400565 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:51Z","lastTransitionTime":"2026-02-16T22:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:51 crc kubenswrapper[4865]: I0216 22:46:51.405965 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 18:10:50.059198798 +0000 UTC Feb 16 22:46:51 crc kubenswrapper[4865]: I0216 22:46:51.503845 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:51 crc kubenswrapper[4865]: I0216 22:46:51.503901 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:51 crc kubenswrapper[4865]: I0216 22:46:51.503912 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:51 crc kubenswrapper[4865]: I0216 22:46:51.503932 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:51 crc kubenswrapper[4865]: I0216 22:46:51.503945 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:51Z","lastTransitionTime":"2026-02-16T22:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:51 crc kubenswrapper[4865]: I0216 22:46:51.607516 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:51 crc kubenswrapper[4865]: I0216 22:46:51.607584 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:51 crc kubenswrapper[4865]: I0216 22:46:51.607602 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:51 crc kubenswrapper[4865]: I0216 22:46:51.607628 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:51 crc kubenswrapper[4865]: I0216 22:46:51.607646 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:51Z","lastTransitionTime":"2026-02-16T22:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:51 crc kubenswrapper[4865]: I0216 22:46:51.711447 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:51 crc kubenswrapper[4865]: I0216 22:46:51.711554 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:51 crc kubenswrapper[4865]: I0216 22:46:51.711574 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:51 crc kubenswrapper[4865]: I0216 22:46:51.711644 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:51 crc kubenswrapper[4865]: I0216 22:46:51.711663 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:51Z","lastTransitionTime":"2026-02-16T22:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:51 crc kubenswrapper[4865]: I0216 22:46:51.815725 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:51 crc kubenswrapper[4865]: I0216 22:46:51.815988 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:51 crc kubenswrapper[4865]: I0216 22:46:51.816213 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:51 crc kubenswrapper[4865]: I0216 22:46:51.816492 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:51 crc kubenswrapper[4865]: I0216 22:46:51.816979 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:51Z","lastTransitionTime":"2026-02-16T22:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:51 crc kubenswrapper[4865]: I0216 22:46:51.819640 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 22:46:51 crc kubenswrapper[4865]: I0216 22:46:51.853768 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f763f86f21e0ae26d6da8694383bf65bf9d2ee2cc3f6331559455f18c1a5ea12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdf0a2e5286d172d17780b0ba829a7b1365b3fcb130069d7497b207041e54d67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1d5f61857de97cd302040f99ba0cf8f6aa30fa344ee9bb5d486222bb59280f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48168de44ee347280252a85da7abba068922ce7558bd01c8e1d11af13f305dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fbb61043059110013736d3d693e6d602b199a4c62907436a4eb88aae192cd51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://decf362b373d3e8cc82caf2afe0405a61ec72f794680345ee6bd4d7acee32cd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91a79105df66a02db4d12994894da106dc07acf1fd46d19a8bd6506533d047c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91a79105df66a02db4d12994894da106dc07acf1fd46d19a8bd6506533d047c1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T22:46:48Z\\\",\\\"message\\\":\\\"sip/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 22:46:48.478194 6528 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 22:46:48.478230 6528 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 22:46:48.478257 6528 handler.go:190] Sending *v1.EgressFirewall event handler 9 for 
removal\\\\nI0216 22:46:48.478372 6528 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 22:46:48.478404 6528 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 22:46:48.478405 6528 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 22:46:48.478434 6528 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 22:46:48.478457 6528 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 22:46:48.478467 6528 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 22:46:48.478500 6528 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 22:46:48.478537 6528 factory.go:656] Stopping watch factory\\\\nI0216 22:46:48.478556 6528 ovnkube.go:599] Stopped ovnkube\\\\nI0216 22:46:48.478584 6528 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 22:46:48.478599 6528 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0216 22:46:48.478625 6528 handler.go:208] Removed *v1.Node event ha\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-v9gjl_openshift-ovn-kubernetes(2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dee85f35d78a0b30018c179a53dd39cbb1b0edfa9897290cd13fdc9680461239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fffded71eee2859beb8f01894e183602492f5c41373ef0e23ee150159d01ae4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fffded71eee2859be
b8f01894e183602492f5c41373ef0e23ee150159d01ae4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v9gjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:51Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:51 crc kubenswrapper[4865]: I0216 22:46:51.875853 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8eb70ba-80dc-4449-b629-8f2d3462ac25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9420ab4fc2cfb723f98690b12676378560f2ca512ea2f23ea4108561bcf0a83b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d5f3b449d7f497dd16213f7a376d725ed75adaeb27449fa2005893b85cd8f9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ddcd1ac2d814341fef78f1d3dcb8d8e04d5e8506ca793c023085f0e019fc44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1932a518fb9c7c514939c9efdb9f5ef9b52fa277bf27931a2fe29f04c486141\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e74a443b63735422bd0f5ad851213f393074c4bda570e41ff102c84aa9ac2929\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T22:46:20Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0216 22:46:20.218565 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 22:46:20.218864 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 22:46:20.220590 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3068580495/tls.crt::/tmp/serving-cert-3068580495/tls.key\\\\\\\"\\\\nI0216 22:46:20.677081 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 22:46:20.691270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 22:46:20.691373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 22:46:20.698439 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 22:46:20.698472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 22:46:20.713488 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 22:46:20.713510 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 22:46:20.713535 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 22:46:20.713543 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 22:46:20.713547 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 22:46:20.713551 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 22:46:20.713555 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 22:46:20.713558 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 22:46:20.715166 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://083e95882332fbcae3b7e3c07a19c05645e560b7c5e92813464a5ca4e5e52a26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e53d3f26f921be607e616b7c8790e3be0605a2b94a3f7be2b62e1beaefb350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3e53d3f26f921be607e616b7c8790e3be0
605a2b94a3f7be2b62e1beaefb350\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:51Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:51 crc kubenswrapper[4865]: I0216 22:46:51.896491 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:51Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:51 crc kubenswrapper[4865]: I0216 22:46:51.914780 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rhf5k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec3dce5a-cc80-4a4a-afa1-1ce88585fe7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af23c5919f064a067e1f6ffd9bcfc669c95f789fecc58b6de768c838fff8eeb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42pjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rhf5k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:51Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:51 crc kubenswrapper[4865]: I0216 22:46:51.920382 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:51 crc kubenswrapper[4865]: I0216 22:46:51.920578 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:51 crc kubenswrapper[4865]: I0216 22:46:51.920919 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:51 crc kubenswrapper[4865]: I0216 22:46:51.921323 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:51 crc kubenswrapper[4865]: I0216 22:46:51.921641 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:51Z","lastTransitionTime":"2026-02-16T22:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:51 crc kubenswrapper[4865]: I0216 22:46:51.933695 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-shf2n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d979a250-e586-4f45-b78e-ce99dbdbe9a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c13351b5aadf20e7142d197d9ae8333bff29ca6c3f7668c93f0dd510b52eb51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df29cc453707021c661a9f53e979fcab4c05a00e4dcef9f85a727168f1e78ed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-shf2n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:51Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:51 crc kubenswrapper[4865]: I0216 22:46:51.949807 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ggbcr" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e0ca52e-7cb6-4d90-8d0b-4124cce13447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt54\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt54\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ggbcr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:51Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:51 crc 
kubenswrapper[4865]: I0216 22:46:51.971489 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://580bc5f5c63d040c31c80d1c34c70711c035b80d8bd34546ba77e2477a938304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:51Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:51 crc kubenswrapper[4865]: I0216 22:46:51.990771 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:51Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:52 crc kubenswrapper[4865]: I0216 22:46:52.010073 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:52Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:52 crc kubenswrapper[4865]: I0216 22:46:52.025011 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:52 crc kubenswrapper[4865]: I0216 22:46:52.025421 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:52 crc kubenswrapper[4865]: I0216 22:46:52.025603 4865 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:52 crc kubenswrapper[4865]: I0216 22:46:52.025821 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:52 crc kubenswrapper[4865]: I0216 22:46:52.026020 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:52Z","lastTransitionTime":"2026-02-16T22:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:46:52 crc kubenswrapper[4865]: I0216 22:46:52.033256 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tqmsq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"518e6107-6873-4bd2-86a6-e422763483ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c555b648518de93a1b9ecb1bb8b232aee9016dd148
db53b2b4aaf96dd23951c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernet
es.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwt79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tqmsq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:52Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:52 crc kubenswrapper[4865]: I0216 22:46:52.054246 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5ed4f9f-4e35-4b99-a523-d9cd6f90b1fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c3c8e1511bfaab96c1e33d3e1d1dc227344cb92c64170dc40eef40341f9282\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef4fbb630ee354131544dade7633424757a1a1b885fec2e271da25984dbec3ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f74aeafd4eb42c1fa802e03dd3df19fc25c3e5546e453008c167955d336ca2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\
",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca218b91fbbe3033c021358958b45efb93f3c1875a2d18446d682ad80ff55122\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:52Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:52 crc kubenswrapper[4865]: I0216 22:46:52.077496 4865 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5d9c6929ae494c038ff488c2fee47aad62d31a677069fab576505bc1c56c3ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd10e8581ae8a5ec147d964363cf296c1ea48987e27edfaaf23a4f2cc3b8806c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\
\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:52Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:52 crc kubenswrapper[4865]: I0216 22:46:52.096325 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p2bwl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fc94743-41ce-4311-b0a7-d24aec69e9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afb2eef5907b2fb1f43b59f3b8afb68253d49f3a83f0cdc5cd10df0e45962156\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mntqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p2bwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:52Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:52 crc kubenswrapper[4865]: I0216 22:46:52.115672 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67de42bb03576bab461ec24db86f0fb38229d2b303d174c1892468bcfd20a1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T2
2:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:52Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:52 crc kubenswrapper[4865]: I0216 22:46:52.129961 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:52 crc kubenswrapper[4865]: I0216 22:46:52.130022 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:52 crc kubenswrapper[4865]: I0216 22:46:52.130040 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:52 crc kubenswrapper[4865]: I0216 22:46:52.130066 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:52 crc kubenswrapper[4865]: I0216 22:46:52.130086 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:52Z","lastTransitionTime":"2026-02-16T22:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:52 crc kubenswrapper[4865]: I0216 22:46:52.135318 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af5ee041-5763-4a28-9d12-7ba21bbb9dbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5377362d269a2fbd24e93ddf0ce2803daadff72dd5b6926424e2d818448eb907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c0be88c4425b343893860e323d0e9c1a661e3cd0cadad6714da67ebadf57bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7sl6f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:52Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:52 crc kubenswrapper[4865]: I0216 22:46:52.159685 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4rgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0091174cc072ed0bc0bfa1a1501b4fa2b801585525dae159b591213c74ddbf5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240fca5a6169674d04d04924dbb1859d9cfad4ebaf3bcb15b685d1dcb52d6879\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://240fca5a6169674d04d04924dbb1859d9cfad4ebaf3bcb15b685d1dcb52d6879\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba2a6ff3b821f9bd20c15af6000c4b04b7382cca4bddba49c3cd3a95bf18661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ba2a6ff3b821f9bd20c15af6000c4b04b7382cca4bddba49c3cd3a95bf18661\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e050e591eac20bf9ccc57f3c4eac82e3a420e9e850ad4057ce69c236a51dd5ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e050e591eac20bf9ccc57f3c4eac82e3a420e9e850ad4057ce69c236a51dd5ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df6f
bcf30225717389e253f415a98dcb258103cf591c6802daeb657018386ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1df6fbcf30225717389e253f415a98dcb258103cf591c6802daeb657018386ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9e249b0c4127dfa42d0aeb4663cec7ecc73cec31a922fc17a0c772cce72c017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9e249b0c4127dfa42d0aeb4663cec7ecc73cec31a922fc17a0c772cce72c017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de26abfeb26481d43117471fa3512ef69ab782712371782639c24c8232786478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de26abfeb26481d43117471fa3512ef69ab782712371782639c24c8232786478\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4rgl\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:52Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:52 crc kubenswrapper[4865]: I0216 22:46:52.221227 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 22:46:52 crc kubenswrapper[4865]: E0216 22:46:52.221441 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 22:47:24.221411139 +0000 UTC m=+84.545118140 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:46:52 crc kubenswrapper[4865]: I0216 22:46:52.221756 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 22:46:52 crc kubenswrapper[4865]: I0216 22:46:52.221999 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 22:46:52 crc kubenswrapper[4865]: E0216 22:46:52.222058 4865 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 22:46:52 crc kubenswrapper[4865]: E0216 22:46:52.222796 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 22:47:24.222771257 +0000 UTC m=+84.546478248 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 22:46:52 crc kubenswrapper[4865]: E0216 22:46:52.222166 4865 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 22:46:52 crc kubenswrapper[4865]: E0216 22:46:52.223155 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 22:47:24.223133748 +0000 UTC m=+84.546840739 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 22:46:52 crc kubenswrapper[4865]: I0216 22:46:52.234078 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:52 crc kubenswrapper[4865]: I0216 22:46:52.234133 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:52 crc kubenswrapper[4865]: I0216 22:46:52.234152 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:52 crc kubenswrapper[4865]: I0216 22:46:52.234178 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:52 crc 
kubenswrapper[4865]: I0216 22:46:52.234200 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:52Z","lastTransitionTime":"2026-02-16T22:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:46:52 crc kubenswrapper[4865]: I0216 22:46:52.323395 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 22:46:52 crc kubenswrapper[4865]: I0216 22:46:52.323471 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 22:46:52 crc kubenswrapper[4865]: E0216 22:46:52.323677 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 22:46:52 crc kubenswrapper[4865]: E0216 22:46:52.323706 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 22:46:52 crc kubenswrapper[4865]: E0216 22:46:52.323725 4865 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod 
openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 22:46:52 crc kubenswrapper[4865]: E0216 22:46:52.323756 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 22:46:52 crc kubenswrapper[4865]: E0216 22:46:52.323807 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 22:47:24.323784598 +0000 UTC m=+84.647491589 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 22:46:52 crc kubenswrapper[4865]: E0216 22:46:52.323817 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 22:46:52 crc kubenswrapper[4865]: E0216 22:46:52.323851 4865 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 22:46:52 crc kubenswrapper[4865]: E0216 22:46:52.323950 4865 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 22:47:24.323916942 +0000 UTC m=+84.647623973 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 22:46:52 crc kubenswrapper[4865]: I0216 22:46:52.349048 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:52 crc kubenswrapper[4865]: I0216 22:46:52.349110 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:52 crc kubenswrapper[4865]: I0216 22:46:52.349128 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:52 crc kubenswrapper[4865]: I0216 22:46:52.349154 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:52 crc kubenswrapper[4865]: I0216 22:46:52.349170 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:52Z","lastTransitionTime":"2026-02-16T22:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:52 crc kubenswrapper[4865]: I0216 22:46:52.407444 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 21:28:40.926608677 +0000 UTC Feb 16 22:46:52 crc kubenswrapper[4865]: I0216 22:46:52.414019 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 22:46:52 crc kubenswrapper[4865]: E0216 22:46:52.414150 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 22:46:52 crc kubenswrapper[4865]: I0216 22:46:52.414018 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 22:46:52 crc kubenswrapper[4865]: I0216 22:46:52.414024 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 22:46:52 crc kubenswrapper[4865]: I0216 22:46:52.414339 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ggbcr" Feb 16 22:46:52 crc kubenswrapper[4865]: E0216 22:46:52.414493 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 22:46:52 crc kubenswrapper[4865]: E0216 22:46:52.414577 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 22:46:52 crc kubenswrapper[4865]: E0216 22:46:52.414734 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ggbcr" podUID="0e0ca52e-7cb6-4d90-8d0b-4124cce13447" Feb 16 22:46:52 crc kubenswrapper[4865]: I0216 22:46:52.452081 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:52 crc kubenswrapper[4865]: I0216 22:46:52.452142 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:52 crc kubenswrapper[4865]: I0216 22:46:52.452161 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:52 crc kubenswrapper[4865]: I0216 22:46:52.452187 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:52 crc kubenswrapper[4865]: I0216 22:46:52.452204 4865 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:52Z","lastTransitionTime":"2026-02-16T22:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:46:52 crc kubenswrapper[4865]: I0216 22:46:52.555259 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:52 crc kubenswrapper[4865]: I0216 22:46:52.555340 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:52 crc kubenswrapper[4865]: I0216 22:46:52.555357 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:52 crc kubenswrapper[4865]: I0216 22:46:52.555381 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:52 crc kubenswrapper[4865]: I0216 22:46:52.555400 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:52Z","lastTransitionTime":"2026-02-16T22:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:52 crc kubenswrapper[4865]: I0216 22:46:52.658940 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:52 crc kubenswrapper[4865]: I0216 22:46:52.659002 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:52 crc kubenswrapper[4865]: I0216 22:46:52.659019 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:52 crc kubenswrapper[4865]: I0216 22:46:52.659047 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:52 crc kubenswrapper[4865]: I0216 22:46:52.659066 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:52Z","lastTransitionTime":"2026-02-16T22:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:52 crc kubenswrapper[4865]: I0216 22:46:52.762916 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:52 crc kubenswrapper[4865]: I0216 22:46:52.762984 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:52 crc kubenswrapper[4865]: I0216 22:46:52.763002 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:52 crc kubenswrapper[4865]: I0216 22:46:52.763028 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:52 crc kubenswrapper[4865]: I0216 22:46:52.763049 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:52Z","lastTransitionTime":"2026-02-16T22:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:52 crc kubenswrapper[4865]: I0216 22:46:52.866600 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:52 crc kubenswrapper[4865]: I0216 22:46:52.866663 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:52 crc kubenswrapper[4865]: I0216 22:46:52.866688 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:52 crc kubenswrapper[4865]: I0216 22:46:52.866715 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:52 crc kubenswrapper[4865]: I0216 22:46:52.866735 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:52Z","lastTransitionTime":"2026-02-16T22:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:52 crc kubenswrapper[4865]: I0216 22:46:52.970417 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:52 crc kubenswrapper[4865]: I0216 22:46:52.970468 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:52 crc kubenswrapper[4865]: I0216 22:46:52.970485 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:52 crc kubenswrapper[4865]: I0216 22:46:52.970509 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:52 crc kubenswrapper[4865]: I0216 22:46:52.970525 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:52Z","lastTransitionTime":"2026-02-16T22:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:53 crc kubenswrapper[4865]: I0216 22:46:53.073473 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:53 crc kubenswrapper[4865]: I0216 22:46:53.073538 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:53 crc kubenswrapper[4865]: I0216 22:46:53.073558 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:53 crc kubenswrapper[4865]: I0216 22:46:53.073583 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:53 crc kubenswrapper[4865]: I0216 22:46:53.073601 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:53Z","lastTransitionTime":"2026-02-16T22:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:53 crc kubenswrapper[4865]: I0216 22:46:53.176772 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:53 crc kubenswrapper[4865]: I0216 22:46:53.176838 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:53 crc kubenswrapper[4865]: I0216 22:46:53.176863 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:53 crc kubenswrapper[4865]: I0216 22:46:53.176893 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:53 crc kubenswrapper[4865]: I0216 22:46:53.176915 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:53Z","lastTransitionTime":"2026-02-16T22:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:53 crc kubenswrapper[4865]: I0216 22:46:53.280065 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:53 crc kubenswrapper[4865]: I0216 22:46:53.280149 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:53 crc kubenswrapper[4865]: I0216 22:46:53.280173 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:53 crc kubenswrapper[4865]: I0216 22:46:53.280205 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:53 crc kubenswrapper[4865]: I0216 22:46:53.280227 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:53Z","lastTransitionTime":"2026-02-16T22:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:53 crc kubenswrapper[4865]: I0216 22:46:53.383465 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:53 crc kubenswrapper[4865]: I0216 22:46:53.383551 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:53 crc kubenswrapper[4865]: I0216 22:46:53.383574 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:53 crc kubenswrapper[4865]: I0216 22:46:53.383609 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:53 crc kubenswrapper[4865]: I0216 22:46:53.383628 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:53Z","lastTransitionTime":"2026-02-16T22:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:53 crc kubenswrapper[4865]: I0216 22:46:53.408980 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 00:59:03.707983257 +0000 UTC Feb 16 22:46:53 crc kubenswrapper[4865]: I0216 22:46:53.486566 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:53 crc kubenswrapper[4865]: I0216 22:46:53.486649 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:53 crc kubenswrapper[4865]: I0216 22:46:53.486675 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:53 crc kubenswrapper[4865]: I0216 22:46:53.486706 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:53 crc kubenswrapper[4865]: I0216 22:46:53.486724 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:53Z","lastTransitionTime":"2026-02-16T22:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:53 crc kubenswrapper[4865]: I0216 22:46:53.590426 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:53 crc kubenswrapper[4865]: I0216 22:46:53.590496 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:53 crc kubenswrapper[4865]: I0216 22:46:53.590512 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:53 crc kubenswrapper[4865]: I0216 22:46:53.590540 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:53 crc kubenswrapper[4865]: I0216 22:46:53.590557 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:53Z","lastTransitionTime":"2026-02-16T22:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:53 crc kubenswrapper[4865]: I0216 22:46:53.693890 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:53 crc kubenswrapper[4865]: I0216 22:46:53.693953 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:53 crc kubenswrapper[4865]: I0216 22:46:53.693972 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:53 crc kubenswrapper[4865]: I0216 22:46:53.693998 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:53 crc kubenswrapper[4865]: I0216 22:46:53.694016 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:53Z","lastTransitionTime":"2026-02-16T22:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:53 crc kubenswrapper[4865]: I0216 22:46:53.797166 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:53 crc kubenswrapper[4865]: I0216 22:46:53.797236 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:53 crc kubenswrapper[4865]: I0216 22:46:53.797254 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:53 crc kubenswrapper[4865]: I0216 22:46:53.797310 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:53 crc kubenswrapper[4865]: I0216 22:46:53.797335 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:53Z","lastTransitionTime":"2026-02-16T22:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:53 crc kubenswrapper[4865]: I0216 22:46:53.900724 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:53 crc kubenswrapper[4865]: I0216 22:46:53.900783 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:53 crc kubenswrapper[4865]: I0216 22:46:53.900809 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:53 crc kubenswrapper[4865]: I0216 22:46:53.900842 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:53 crc kubenswrapper[4865]: I0216 22:46:53.900863 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:53Z","lastTransitionTime":"2026-02-16T22:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:54 crc kubenswrapper[4865]: I0216 22:46:54.004371 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:54 crc kubenswrapper[4865]: I0216 22:46:54.004427 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:54 crc kubenswrapper[4865]: I0216 22:46:54.004445 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:54 crc kubenswrapper[4865]: I0216 22:46:54.004471 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:54 crc kubenswrapper[4865]: I0216 22:46:54.004492 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:54Z","lastTransitionTime":"2026-02-16T22:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:54 crc kubenswrapper[4865]: I0216 22:46:54.108929 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:54 crc kubenswrapper[4865]: I0216 22:46:54.108997 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:54 crc kubenswrapper[4865]: I0216 22:46:54.109016 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:54 crc kubenswrapper[4865]: I0216 22:46:54.109042 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:54 crc kubenswrapper[4865]: I0216 22:46:54.109063 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:54Z","lastTransitionTime":"2026-02-16T22:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:54 crc kubenswrapper[4865]: I0216 22:46:54.212470 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:54 crc kubenswrapper[4865]: I0216 22:46:54.212526 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:54 crc kubenswrapper[4865]: I0216 22:46:54.212541 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:54 crc kubenswrapper[4865]: I0216 22:46:54.212565 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:54 crc kubenswrapper[4865]: I0216 22:46:54.212582 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:54Z","lastTransitionTime":"2026-02-16T22:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:54 crc kubenswrapper[4865]: I0216 22:46:54.316395 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:54 crc kubenswrapper[4865]: I0216 22:46:54.316468 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:54 crc kubenswrapper[4865]: I0216 22:46:54.316486 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:54 crc kubenswrapper[4865]: I0216 22:46:54.316891 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:54 crc kubenswrapper[4865]: I0216 22:46:54.316948 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:54Z","lastTransitionTime":"2026-02-16T22:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:46:54 crc kubenswrapper[4865]: I0216 22:46:54.410175 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 15:49:12.927415 +0000 UTC Feb 16 22:46:54 crc kubenswrapper[4865]: I0216 22:46:54.413535 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 22:46:54 crc kubenswrapper[4865]: I0216 22:46:54.413598 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 22:46:54 crc kubenswrapper[4865]: I0216 22:46:54.413610 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 22:46:54 crc kubenswrapper[4865]: I0216 22:46:54.413654 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ggbcr" Feb 16 22:46:54 crc kubenswrapper[4865]: E0216 22:46:54.413700 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 22:46:54 crc kubenswrapper[4865]: E0216 22:46:54.413911 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 22:46:54 crc kubenswrapper[4865]: E0216 22:46:54.414041 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 22:46:54 crc kubenswrapper[4865]: E0216 22:46:54.414237 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ggbcr" podUID="0e0ca52e-7cb6-4d90-8d0b-4124cce13447" Feb 16 22:46:54 crc kubenswrapper[4865]: I0216 22:46:54.421051 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:54 crc kubenswrapper[4865]: I0216 22:46:54.421108 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:54 crc kubenswrapper[4865]: I0216 22:46:54.421127 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:54 crc kubenswrapper[4865]: I0216 22:46:54.421155 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:54 crc kubenswrapper[4865]: I0216 22:46:54.421174 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:54Z","lastTransitionTime":"2026-02-16T22:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:54 crc kubenswrapper[4865]: I0216 22:46:54.524793 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:54 crc kubenswrapper[4865]: I0216 22:46:54.525064 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:54 crc kubenswrapper[4865]: I0216 22:46:54.525089 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:54 crc kubenswrapper[4865]: I0216 22:46:54.525121 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:54 crc kubenswrapper[4865]: I0216 22:46:54.525139 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:54Z","lastTransitionTime":"2026-02-16T22:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:54 crc kubenswrapper[4865]: I0216 22:46:54.628479 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:54 crc kubenswrapper[4865]: I0216 22:46:54.628555 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:54 crc kubenswrapper[4865]: I0216 22:46:54.628578 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:54 crc kubenswrapper[4865]: I0216 22:46:54.628607 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:54 crc kubenswrapper[4865]: I0216 22:46:54.628632 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:54Z","lastTransitionTime":"2026-02-16T22:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:54 crc kubenswrapper[4865]: I0216 22:46:54.731962 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:54 crc kubenswrapper[4865]: I0216 22:46:54.732041 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:54 crc kubenswrapper[4865]: I0216 22:46:54.732065 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:54 crc kubenswrapper[4865]: I0216 22:46:54.732100 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:54 crc kubenswrapper[4865]: I0216 22:46:54.732130 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:54Z","lastTransitionTime":"2026-02-16T22:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:54 crc kubenswrapper[4865]: I0216 22:46:54.835396 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:54 crc kubenswrapper[4865]: I0216 22:46:54.835804 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:54 crc kubenswrapper[4865]: I0216 22:46:54.835821 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:54 crc kubenswrapper[4865]: I0216 22:46:54.835845 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:54 crc kubenswrapper[4865]: I0216 22:46:54.835862 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:54Z","lastTransitionTime":"2026-02-16T22:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:54 crc kubenswrapper[4865]: I0216 22:46:54.938240 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:54 crc kubenswrapper[4865]: I0216 22:46:54.938339 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:54 crc kubenswrapper[4865]: I0216 22:46:54.938367 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:54 crc kubenswrapper[4865]: I0216 22:46:54.938401 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:54 crc kubenswrapper[4865]: I0216 22:46:54.938423 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:54Z","lastTransitionTime":"2026-02-16T22:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:54 crc kubenswrapper[4865]: I0216 22:46:54.961831 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 22:46:54 crc kubenswrapper[4865]: I0216 22:46:54.974474 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 16 22:46:54 crc kubenswrapper[4865]: I0216 22:46:54.981756 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rhf5k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec3dce5a-cc80-4a4a-afa1-1ce88585fe7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af23c5919f064a067e1f6ffd9bcfc669c95f789fecc58b6de768c838fff8eeb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node
-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42pjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rhf5k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:54Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:55 crc kubenswrapper[4865]: I0216 22:46:55.013813 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f763f86f21e0ae26d6da8694383bf65bf9d2ee2cc3f6331559455f18c1a5ea12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdf0a2e5286d172d17780b0ba829a7b1365b3fcb130069d7497b207041e54d67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1d5f61857de97cd302040f99ba0cf8f6aa30fa344ee9bb5d486222bb59280f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48168de44ee347280252a85da7abba068922ce7558bd01c8e1d11af13f305dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fbb61043059110013736d3d693e6d602b199a4c62907436a4eb88aae192cd51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://decf362b373d3e8cc82caf2afe0405a61ec72f794680345ee6bd4d7acee32cd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91a79105df66a02db4d12994894da106dc07acf1fd46d19a8bd6506533d047c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91a79105df66a02db4d12994894da106dc07acf1fd46d19a8bd6506533d047c1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T22:46:48Z\\\",\\\"message\\\":\\\"sip/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 22:46:48.478194 6528 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 22:46:48.478230 6528 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 22:46:48.478257 6528 handler.go:190] Sending *v1.EgressFirewall event handler 9 for 
removal\\\\nI0216 22:46:48.478372 6528 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 22:46:48.478404 6528 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 22:46:48.478405 6528 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 22:46:48.478434 6528 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 22:46:48.478457 6528 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 22:46:48.478467 6528 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 22:46:48.478500 6528 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 22:46:48.478537 6528 factory.go:656] Stopping watch factory\\\\nI0216 22:46:48.478556 6528 ovnkube.go:599] Stopped ovnkube\\\\nI0216 22:46:48.478584 6528 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 22:46:48.478599 6528 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0216 22:46:48.478625 6528 handler.go:208] Removed *v1.Node event ha\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-v9gjl_openshift-ovn-kubernetes(2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dee85f35d78a0b30018c179a53dd39cbb1b0edfa9897290cd13fdc9680461239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fffded71eee2859beb8f01894e183602492f5c41373ef0e23ee150159d01ae4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fffded71eee2859be
b8f01894e183602492f5c41373ef0e23ee150159d01ae4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v9gjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:55Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:55 crc kubenswrapper[4865]: I0216 22:46:55.036500 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8eb70ba-80dc-4449-b629-8f2d3462ac25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9420ab4fc2cfb723f98690b12676378560f2ca512ea2f23ea4108561bcf0a83b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d5f3b449d7f497dd16213f7a376d725ed75adaeb27449fa2005893b85cd8f9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ddcd1ac2d814341fef78f1d3dcb8d8e04d5e8506ca793c023085f0e019fc44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1932a518fb9c7c514939c9efdb9f5ef9b52fa277bf27931a2fe29f04c486141\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e74a443b63735422bd0f5ad851213f393074c4bda570e41ff102c84aa9ac2929\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T22:46:20Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0216 22:46:20.218565 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 22:46:20.218864 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 22:46:20.220590 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3068580495/tls.crt::/tmp/serving-cert-3068580495/tls.key\\\\\\\"\\\\nI0216 22:46:20.677081 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 22:46:20.691270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 22:46:20.691373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 22:46:20.698439 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 22:46:20.698472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 22:46:20.713488 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 22:46:20.713510 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 22:46:20.713535 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 22:46:20.713543 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 22:46:20.713547 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 22:46:20.713551 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 22:46:20.713555 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 22:46:20.713558 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 22:46:20.715166 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://083e95882332fbcae3b7e3c07a19c05645e560b7c5e92813464a5ca4e5e52a26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e53d3f26f921be607e616b7c8790e3be0605a2b94a3f7be2b62e1beaefb350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3e53d3f26f921be607e616b7c8790e3be0
605a2b94a3f7be2b62e1beaefb350\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:55Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:55 crc kubenswrapper[4865]: I0216 22:46:55.042162 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:55 crc kubenswrapper[4865]: I0216 22:46:55.042219 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:55 crc kubenswrapper[4865]: I0216 22:46:55.042235 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:55 crc kubenswrapper[4865]: I0216 22:46:55.042261 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:55 crc kubenswrapper[4865]: I0216 22:46:55.042308 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:55Z","lastTransitionTime":"2026-02-16T22:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:55 crc kubenswrapper[4865]: I0216 22:46:55.062320 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:55Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:55 crc kubenswrapper[4865]: I0216 22:46:55.085057 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tqmsq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"518e6107-6873-4bd2-86a6-e422763483ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c555b648518de93a1b9ecb1bb8b232aee9016dd148db53b2b4aaf96dd23951c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwt79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tqmsq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:55Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:55 crc kubenswrapper[4865]: I0216 22:46:55.107569 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-shf2n" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d979a250-e586-4f45-b78e-ce99dbdbe9a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c13351b5aadf20e7142d197d9ae8333bff29ca6c3f7668c93f0dd510b52eb51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df29cc453
707021c661a9f53e979fcab4c05a00e4dcef9f85a727168f1e78ed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-shf2n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:55Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:55 crc kubenswrapper[4865]: I0216 22:46:55.123336 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ggbcr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e0ca52e-7cb6-4d90-8d0b-4124cce13447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt54\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt54\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ggbcr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:55Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:55 crc 
kubenswrapper[4865]: I0216 22:46:55.140189 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://580bc5f5c63d040c31c80d1c34c70711c035b80d8bd34546ba77e2477a938304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:55Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:55 crc kubenswrapper[4865]: I0216 22:46:55.149083 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:55 crc kubenswrapper[4865]: I0216 22:46:55.149133 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:55 crc kubenswrapper[4865]: I0216 22:46:55.149149 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:55 crc kubenswrapper[4865]: I0216 22:46:55.149174 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:55 crc kubenswrapper[4865]: I0216 22:46:55.149193 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:55Z","lastTransitionTime":"2026-02-16T22:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:55 crc kubenswrapper[4865]: I0216 22:46:55.160226 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:55Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:55 crc kubenswrapper[4865]: I0216 22:46:55.178216 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:55Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:55 crc kubenswrapper[4865]: I0216 22:46:55.193296 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5ed4f9f-4e35-4b99-a523-d9cd6f90b1fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c3c8e1511bfaab96c1e33d3e1d1dc227344cb92c64170dc40eef40341f9282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef4fbb630ee354131544dade7633424757a1a1b885fec2e271da25984dbec3ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f74aeafd4eb42c1fa802e03dd3df19fc25c3e5546e453008c167955d336ca2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca218b91fbbe3033c021358958b45efb93f3c1875a2d18446d682ad80ff55122\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:55Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:55 crc kubenswrapper[4865]: I0216 22:46:55.213063 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5d9c6929ae494c038ff488c2fee47aad62d31a677069fab576505bc1c56c3ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd10e8581ae8a5ec147d964363cf296c1ea48987e27edfaaf23a4f2cc3b8806c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:55Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:55 crc kubenswrapper[4865]: I0216 22:46:55.228547 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p2bwl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fc94743-41ce-4311-b0a7-d24aec69e9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afb2eef5907b2fb1f43b59f3b8afb68253d49f3a83f0cdc5cd10df0e45962156\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mntqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p2bwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:55Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:55 crc kubenswrapper[4865]: I0216 22:46:55.254920 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67de42bb03576bab461ec24db86f0fb38229d2b303d174c1892468bcfd20a1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T2
2:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:55Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:55 crc kubenswrapper[4865]: I0216 22:46:55.258117 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:55 crc kubenswrapper[4865]: I0216 22:46:55.258179 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:55 crc kubenswrapper[4865]: I0216 22:46:55.258205 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:55 crc kubenswrapper[4865]: I0216 22:46:55.258236 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:55 crc kubenswrapper[4865]: I0216 22:46:55.258254 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:55Z","lastTransitionTime":"2026-02-16T22:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:55 crc kubenswrapper[4865]: I0216 22:46:55.274850 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af5ee041-5763-4a28-9d12-7ba21bbb9dbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5377362d269a2fbd24e93ddf0ce2803daadff72dd5b6926424e2d818448eb907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c0be88c4425b343893860e323d0e9c1a661e3cd0cadad6714da67ebadf57bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7sl6f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:55Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:55 crc kubenswrapper[4865]: I0216 22:46:55.298309 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4rgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0091174cc072ed0bc0bfa1a1501b4fa2b801585525dae159b591213c74ddbf5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240fca5a6169674d04d04924dbb1859d9cfad4ebaf3bcb15b685d1dcb52d6879\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://240fca5a6169674d04d04924dbb1859d9cfad4ebaf3bcb15b685d1dcb52d6879\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba2a6ff3b821f9bd20c15af6000c4b04b7382cca4bddba49c3cd3a95bf18661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ba2a6ff3b821f9bd20c15af6000c4b04b7382cca4bddba49c3cd3a95bf18661\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e050e591eac20bf9ccc57f3c4eac82e3a420e9e850ad4057ce69c236a51dd5ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e050e591eac20bf9ccc57f3c4eac82e3a420e9e850ad4057ce69c236a51dd5ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df6f
bcf30225717389e253f415a98dcb258103cf591c6802daeb657018386ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1df6fbcf30225717389e253f415a98dcb258103cf591c6802daeb657018386ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9e249b0c4127dfa42d0aeb4663cec7ecc73cec31a922fc17a0c772cce72c017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9e249b0c4127dfa42d0aeb4663cec7ecc73cec31a922fc17a0c772cce72c017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de26abfeb26481d43117471fa3512ef69ab782712371782639c24c8232786478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de26abfeb26481d43117471fa3512ef69ab782712371782639c24c8232786478\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4rgl\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:55Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:55 crc kubenswrapper[4865]: I0216 22:46:55.361451 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:55 crc kubenswrapper[4865]: I0216 22:46:55.361520 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:55 crc kubenswrapper[4865]: I0216 22:46:55.361536 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:55 crc kubenswrapper[4865]: I0216 22:46:55.361561 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:55 crc kubenswrapper[4865]: I0216 22:46:55.361578 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:55Z","lastTransitionTime":"2026-02-16T22:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:55 crc kubenswrapper[4865]: I0216 22:46:55.411166 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 17:33:33.884214626 +0000 UTC Feb 16 22:46:55 crc kubenswrapper[4865]: I0216 22:46:55.464179 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:55 crc kubenswrapper[4865]: I0216 22:46:55.464241 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:55 crc kubenswrapper[4865]: I0216 22:46:55.464259 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:55 crc kubenswrapper[4865]: I0216 22:46:55.464336 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:55 crc kubenswrapper[4865]: I0216 22:46:55.464366 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:55Z","lastTransitionTime":"2026-02-16T22:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:55 crc kubenswrapper[4865]: I0216 22:46:55.567413 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:55 crc kubenswrapper[4865]: I0216 22:46:55.567824 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:55 crc kubenswrapper[4865]: I0216 22:46:55.568064 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:55 crc kubenswrapper[4865]: I0216 22:46:55.568246 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:55 crc kubenswrapper[4865]: I0216 22:46:55.568487 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:55Z","lastTransitionTime":"2026-02-16T22:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:55 crc kubenswrapper[4865]: I0216 22:46:55.671804 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:55 crc kubenswrapper[4865]: I0216 22:46:55.671859 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:55 crc kubenswrapper[4865]: I0216 22:46:55.671877 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:55 crc kubenswrapper[4865]: I0216 22:46:55.671904 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:55 crc kubenswrapper[4865]: I0216 22:46:55.671922 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:55Z","lastTransitionTime":"2026-02-16T22:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:55 crc kubenswrapper[4865]: I0216 22:46:55.774783 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:55 crc kubenswrapper[4865]: I0216 22:46:55.774852 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:55 crc kubenswrapper[4865]: I0216 22:46:55.774875 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:55 crc kubenswrapper[4865]: I0216 22:46:55.774905 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:55 crc kubenswrapper[4865]: I0216 22:46:55.774927 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:55Z","lastTransitionTime":"2026-02-16T22:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:55 crc kubenswrapper[4865]: I0216 22:46:55.878494 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:55 crc kubenswrapper[4865]: I0216 22:46:55.878555 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:55 crc kubenswrapper[4865]: I0216 22:46:55.878572 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:55 crc kubenswrapper[4865]: I0216 22:46:55.878595 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:55 crc kubenswrapper[4865]: I0216 22:46:55.878613 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:55Z","lastTransitionTime":"2026-02-16T22:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:55 crc kubenswrapper[4865]: I0216 22:46:55.981457 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:55 crc kubenswrapper[4865]: I0216 22:46:55.981524 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:55 crc kubenswrapper[4865]: I0216 22:46:55.981542 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:55 crc kubenswrapper[4865]: I0216 22:46:55.981568 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:55 crc kubenswrapper[4865]: I0216 22:46:55.981585 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:55Z","lastTransitionTime":"2026-02-16T22:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:56 crc kubenswrapper[4865]: I0216 22:46:56.084973 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:56 crc kubenswrapper[4865]: I0216 22:46:56.085075 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:56 crc kubenswrapper[4865]: I0216 22:46:56.085102 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:56 crc kubenswrapper[4865]: I0216 22:46:56.085136 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:56 crc kubenswrapper[4865]: I0216 22:46:56.085160 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:56Z","lastTransitionTime":"2026-02-16T22:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:56 crc kubenswrapper[4865]: I0216 22:46:56.187823 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:56 crc kubenswrapper[4865]: I0216 22:46:56.187884 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:56 crc kubenswrapper[4865]: I0216 22:46:56.187909 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:56 crc kubenswrapper[4865]: I0216 22:46:56.187939 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:56 crc kubenswrapper[4865]: I0216 22:46:56.187961 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:56Z","lastTransitionTime":"2026-02-16T22:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:56 crc kubenswrapper[4865]: I0216 22:46:56.291247 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:56 crc kubenswrapper[4865]: I0216 22:46:56.291328 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:56 crc kubenswrapper[4865]: I0216 22:46:56.291345 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:56 crc kubenswrapper[4865]: I0216 22:46:56.291369 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:56 crc kubenswrapper[4865]: I0216 22:46:56.291385 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:56Z","lastTransitionTime":"2026-02-16T22:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:56 crc kubenswrapper[4865]: I0216 22:46:56.394529 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:56 crc kubenswrapper[4865]: I0216 22:46:56.394574 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:56 crc kubenswrapper[4865]: I0216 22:46:56.394584 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:56 crc kubenswrapper[4865]: I0216 22:46:56.394600 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:56 crc kubenswrapper[4865]: I0216 22:46:56.394611 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:56Z","lastTransitionTime":"2026-02-16T22:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:46:56 crc kubenswrapper[4865]: I0216 22:46:56.412344 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 23:42:48.520611057 +0000 UTC Feb 16 22:46:56 crc kubenswrapper[4865]: I0216 22:46:56.413703 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 22:46:56 crc kubenswrapper[4865]: I0216 22:46:56.413743 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 22:46:56 crc kubenswrapper[4865]: I0216 22:46:56.413909 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 22:46:56 crc kubenswrapper[4865]: E0216 22:46:56.414027 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 22:46:56 crc kubenswrapper[4865]: I0216 22:46:56.414076 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ggbcr" Feb 16 22:46:56 crc kubenswrapper[4865]: E0216 22:46:56.414220 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 22:46:56 crc kubenswrapper[4865]: E0216 22:46:56.414302 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ggbcr" podUID="0e0ca52e-7cb6-4d90-8d0b-4124cce13447" Feb 16 22:46:56 crc kubenswrapper[4865]: E0216 22:46:56.414365 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 22:46:56 crc kubenswrapper[4865]: I0216 22:46:56.497603 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:56 crc kubenswrapper[4865]: I0216 22:46:56.497668 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:56 crc kubenswrapper[4865]: I0216 22:46:56.497686 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:56 crc kubenswrapper[4865]: I0216 22:46:56.497714 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:56 crc kubenswrapper[4865]: I0216 22:46:56.497732 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:56Z","lastTransitionTime":"2026-02-16T22:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:56 crc kubenswrapper[4865]: I0216 22:46:56.602132 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:56 crc kubenswrapper[4865]: I0216 22:46:56.602199 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:56 crc kubenswrapper[4865]: I0216 22:46:56.602216 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:56 crc kubenswrapper[4865]: I0216 22:46:56.602245 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:56 crc kubenswrapper[4865]: I0216 22:46:56.602268 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:56Z","lastTransitionTime":"2026-02-16T22:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:56 crc kubenswrapper[4865]: I0216 22:46:56.706139 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:56 crc kubenswrapper[4865]: I0216 22:46:56.706218 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:56 crc kubenswrapper[4865]: I0216 22:46:56.706242 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:56 crc kubenswrapper[4865]: I0216 22:46:56.706328 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:56 crc kubenswrapper[4865]: I0216 22:46:56.706851 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:56Z","lastTransitionTime":"2026-02-16T22:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:56 crc kubenswrapper[4865]: I0216 22:46:56.810855 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:56 crc kubenswrapper[4865]: I0216 22:46:56.810999 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:56 crc kubenswrapper[4865]: I0216 22:46:56.811021 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:56 crc kubenswrapper[4865]: I0216 22:46:56.811055 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:56 crc kubenswrapper[4865]: I0216 22:46:56.811074 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:56Z","lastTransitionTime":"2026-02-16T22:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:56 crc kubenswrapper[4865]: I0216 22:46:56.914927 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:56 crc kubenswrapper[4865]: I0216 22:46:56.915001 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:56 crc kubenswrapper[4865]: I0216 22:46:56.915018 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:56 crc kubenswrapper[4865]: I0216 22:46:56.915044 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:56 crc kubenswrapper[4865]: I0216 22:46:56.915062 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:56Z","lastTransitionTime":"2026-02-16T22:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:57 crc kubenswrapper[4865]: I0216 22:46:57.020426 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:57 crc kubenswrapper[4865]: I0216 22:46:57.020522 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:57 crc kubenswrapper[4865]: I0216 22:46:57.020544 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:57 crc kubenswrapper[4865]: I0216 22:46:57.020574 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:57 crc kubenswrapper[4865]: I0216 22:46:57.020604 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:57Z","lastTransitionTime":"2026-02-16T22:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:57 crc kubenswrapper[4865]: I0216 22:46:57.124656 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:57 crc kubenswrapper[4865]: I0216 22:46:57.124740 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:57 crc kubenswrapper[4865]: I0216 22:46:57.124756 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:57 crc kubenswrapper[4865]: I0216 22:46:57.124784 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:57 crc kubenswrapper[4865]: I0216 22:46:57.124803 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:57Z","lastTransitionTime":"2026-02-16T22:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:57 crc kubenswrapper[4865]: I0216 22:46:57.194809 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:57 crc kubenswrapper[4865]: I0216 22:46:57.194910 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:57 crc kubenswrapper[4865]: I0216 22:46:57.194931 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:57 crc kubenswrapper[4865]: I0216 22:46:57.194957 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:57 crc kubenswrapper[4865]: I0216 22:46:57.194977 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:57Z","lastTransitionTime":"2026-02-16T22:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:57 crc kubenswrapper[4865]: E0216 22:46:57.215521 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:46:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:46:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:46:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:46:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8882e2e1-623c-454f-a6ef-195b25a9cb95\\\",\\\"systemUUID\\\":\\\"e2a2196d-095f-444c-b467-b5377cee59c0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:57Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:57 crc kubenswrapper[4865]: I0216 22:46:57.222456 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:57 crc kubenswrapper[4865]: I0216 22:46:57.222520 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:57 crc kubenswrapper[4865]: I0216 22:46:57.222536 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:57 crc kubenswrapper[4865]: I0216 22:46:57.222557 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:57 crc kubenswrapper[4865]: I0216 22:46:57.222569 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:57Z","lastTransitionTime":"2026-02-16T22:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:57 crc kubenswrapper[4865]: E0216 22:46:57.241383 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:46:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:46:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:46:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:46:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8882e2e1-623c-454f-a6ef-195b25a9cb95\\\",\\\"systemUUID\\\":\\\"e2a2196d-095f-444c-b467-b5377cee59c0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:57Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:57 crc kubenswrapper[4865]: I0216 22:46:57.248124 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:57 crc kubenswrapper[4865]: I0216 22:46:57.248200 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:57 crc kubenswrapper[4865]: I0216 22:46:57.248217 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:57 crc kubenswrapper[4865]: I0216 22:46:57.248243 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:57 crc kubenswrapper[4865]: I0216 22:46:57.248258 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:57Z","lastTransitionTime":"2026-02-16T22:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:57 crc kubenswrapper[4865]: E0216 22:46:57.263075 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:46:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:46:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:46:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:46:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8882e2e1-623c-454f-a6ef-195b25a9cb95\\\",\\\"systemUUID\\\":\\\"e2a2196d-095f-444c-b467-b5377cee59c0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:57Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:57 crc kubenswrapper[4865]: I0216 22:46:57.268259 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:57 crc kubenswrapper[4865]: I0216 22:46:57.268312 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:57 crc kubenswrapper[4865]: I0216 22:46:57.268322 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:57 crc kubenswrapper[4865]: I0216 22:46:57.268341 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:57 crc kubenswrapper[4865]: I0216 22:46:57.268353 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:57Z","lastTransitionTime":"2026-02-16T22:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:57 crc kubenswrapper[4865]: E0216 22:46:57.285077 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:46:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:46:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:46:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:46:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8882e2e1-623c-454f-a6ef-195b25a9cb95\\\",\\\"systemUUID\\\":\\\"e2a2196d-095f-444c-b467-b5377cee59c0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:57Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:57 crc kubenswrapper[4865]: I0216 22:46:57.289513 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:57 crc kubenswrapper[4865]: I0216 22:46:57.289588 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:57 crc kubenswrapper[4865]: I0216 22:46:57.289606 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:57 crc kubenswrapper[4865]: I0216 22:46:57.289640 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:57 crc kubenswrapper[4865]: I0216 22:46:57.289660 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:57Z","lastTransitionTime":"2026-02-16T22:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:57 crc kubenswrapper[4865]: E0216 22:46:57.308924 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:46:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:46:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:46:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:46:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8882e2e1-623c-454f-a6ef-195b25a9cb95\\\",\\\"systemUUID\\\":\\\"e2a2196d-095f-444c-b467-b5377cee59c0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:46:57Z is after 2025-08-24T17:21:41Z" Feb 16 22:46:57 crc kubenswrapper[4865]: E0216 22:46:57.309303 4865 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 22:46:57 crc kubenswrapper[4865]: I0216 22:46:57.312510 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:57 crc kubenswrapper[4865]: I0216 22:46:57.312581 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:57 crc kubenswrapper[4865]: I0216 22:46:57.312607 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:57 crc kubenswrapper[4865]: I0216 22:46:57.312640 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:57 crc kubenswrapper[4865]: I0216 22:46:57.312659 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:57Z","lastTransitionTime":"2026-02-16T22:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:57 crc kubenswrapper[4865]: I0216 22:46:57.413171 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 00:47:19.123135422 +0000 UTC Feb 16 22:46:57 crc kubenswrapper[4865]: I0216 22:46:57.415869 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:57 crc kubenswrapper[4865]: I0216 22:46:57.415974 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:57 crc kubenswrapper[4865]: I0216 22:46:57.415995 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:57 crc kubenswrapper[4865]: I0216 22:46:57.416056 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:57 crc kubenswrapper[4865]: I0216 22:46:57.416078 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:57Z","lastTransitionTime":"2026-02-16T22:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:57 crc kubenswrapper[4865]: I0216 22:46:57.519975 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:57 crc kubenswrapper[4865]: I0216 22:46:57.520049 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:57 crc kubenswrapper[4865]: I0216 22:46:57.520069 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:57 crc kubenswrapper[4865]: I0216 22:46:57.520096 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:57 crc kubenswrapper[4865]: I0216 22:46:57.520113 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:57Z","lastTransitionTime":"2026-02-16T22:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:57 crc kubenswrapper[4865]: I0216 22:46:57.624164 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:57 crc kubenswrapper[4865]: I0216 22:46:57.624221 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:57 crc kubenswrapper[4865]: I0216 22:46:57.624237 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:57 crc kubenswrapper[4865]: I0216 22:46:57.624261 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:57 crc kubenswrapper[4865]: I0216 22:46:57.624314 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:57Z","lastTransitionTime":"2026-02-16T22:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:57 crc kubenswrapper[4865]: I0216 22:46:57.726923 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:57 crc kubenswrapper[4865]: I0216 22:46:57.726972 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:57 crc kubenswrapper[4865]: I0216 22:46:57.726988 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:57 crc kubenswrapper[4865]: I0216 22:46:57.727011 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:57 crc kubenswrapper[4865]: I0216 22:46:57.727027 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:57Z","lastTransitionTime":"2026-02-16T22:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:57 crc kubenswrapper[4865]: I0216 22:46:57.830677 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:57 crc kubenswrapper[4865]: I0216 22:46:57.830745 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:57 crc kubenswrapper[4865]: I0216 22:46:57.830762 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:57 crc kubenswrapper[4865]: I0216 22:46:57.830790 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:57 crc kubenswrapper[4865]: I0216 22:46:57.830816 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:57Z","lastTransitionTime":"2026-02-16T22:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:57 crc kubenswrapper[4865]: I0216 22:46:57.934259 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:57 crc kubenswrapper[4865]: I0216 22:46:57.934358 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:57 crc kubenswrapper[4865]: I0216 22:46:57.934381 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:57 crc kubenswrapper[4865]: I0216 22:46:57.934419 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:57 crc kubenswrapper[4865]: I0216 22:46:57.934440 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:57Z","lastTransitionTime":"2026-02-16T22:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:58 crc kubenswrapper[4865]: I0216 22:46:58.037430 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:58 crc kubenswrapper[4865]: I0216 22:46:58.037490 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:58 crc kubenswrapper[4865]: I0216 22:46:58.037505 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:58 crc kubenswrapper[4865]: I0216 22:46:58.037528 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:58 crc kubenswrapper[4865]: I0216 22:46:58.037542 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:58Z","lastTransitionTime":"2026-02-16T22:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:58 crc kubenswrapper[4865]: I0216 22:46:58.140947 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:58 crc kubenswrapper[4865]: I0216 22:46:58.141055 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:58 crc kubenswrapper[4865]: I0216 22:46:58.141082 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:58 crc kubenswrapper[4865]: I0216 22:46:58.141114 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:58 crc kubenswrapper[4865]: I0216 22:46:58.141137 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:58Z","lastTransitionTime":"2026-02-16T22:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:58 crc kubenswrapper[4865]: I0216 22:46:58.245349 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:58 crc kubenswrapper[4865]: I0216 22:46:58.245437 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:58 crc kubenswrapper[4865]: I0216 22:46:58.245461 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:58 crc kubenswrapper[4865]: I0216 22:46:58.245493 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:58 crc kubenswrapper[4865]: I0216 22:46:58.245517 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:58Z","lastTransitionTime":"2026-02-16T22:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:58 crc kubenswrapper[4865]: I0216 22:46:58.349760 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:58 crc kubenswrapper[4865]: I0216 22:46:58.349822 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:58 crc kubenswrapper[4865]: I0216 22:46:58.349841 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:58 crc kubenswrapper[4865]: I0216 22:46:58.349867 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:58 crc kubenswrapper[4865]: I0216 22:46:58.349886 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:58Z","lastTransitionTime":"2026-02-16T22:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:46:58 crc kubenswrapper[4865]: I0216 22:46:58.413937 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 17:43:39.813053548 +0000 UTC Feb 16 22:46:58 crc kubenswrapper[4865]: I0216 22:46:58.414212 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 22:46:58 crc kubenswrapper[4865]: I0216 22:46:58.414256 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 22:46:58 crc kubenswrapper[4865]: I0216 22:46:58.414322 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-ggbcr" Feb 16 22:46:58 crc kubenswrapper[4865]: I0216 22:46:58.414230 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 22:46:58 crc kubenswrapper[4865]: E0216 22:46:58.414492 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 22:46:58 crc kubenswrapper[4865]: E0216 22:46:58.414643 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 22:46:58 crc kubenswrapper[4865]: E0216 22:46:58.414793 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ggbcr" podUID="0e0ca52e-7cb6-4d90-8d0b-4124cce13447" Feb 16 22:46:58 crc kubenswrapper[4865]: E0216 22:46:58.414915 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 22:46:58 crc kubenswrapper[4865]: I0216 22:46:58.453030 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:58 crc kubenswrapper[4865]: I0216 22:46:58.453093 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:58 crc kubenswrapper[4865]: I0216 22:46:58.453117 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:58 crc kubenswrapper[4865]: I0216 22:46:58.453142 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:58 crc kubenswrapper[4865]: I0216 22:46:58.453161 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:58Z","lastTransitionTime":"2026-02-16T22:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:58 crc kubenswrapper[4865]: I0216 22:46:58.556352 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:58 crc kubenswrapper[4865]: I0216 22:46:58.556407 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:58 crc kubenswrapper[4865]: I0216 22:46:58.556419 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:58 crc kubenswrapper[4865]: I0216 22:46:58.556439 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:58 crc kubenswrapper[4865]: I0216 22:46:58.556450 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:58Z","lastTransitionTime":"2026-02-16T22:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:58 crc kubenswrapper[4865]: I0216 22:46:58.660684 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:58 crc kubenswrapper[4865]: I0216 22:46:58.660824 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:58 crc kubenswrapper[4865]: I0216 22:46:58.660852 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:58 crc kubenswrapper[4865]: I0216 22:46:58.660890 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:58 crc kubenswrapper[4865]: I0216 22:46:58.660935 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:58Z","lastTransitionTime":"2026-02-16T22:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:58 crc kubenswrapper[4865]: I0216 22:46:58.764964 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:58 crc kubenswrapper[4865]: I0216 22:46:58.765056 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:58 crc kubenswrapper[4865]: I0216 22:46:58.765081 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:58 crc kubenswrapper[4865]: I0216 22:46:58.765134 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:58 crc kubenswrapper[4865]: I0216 22:46:58.765163 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:58Z","lastTransitionTime":"2026-02-16T22:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:58 crc kubenswrapper[4865]: I0216 22:46:58.868558 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:58 crc kubenswrapper[4865]: I0216 22:46:58.868712 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:58 crc kubenswrapper[4865]: I0216 22:46:58.868750 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:58 crc kubenswrapper[4865]: I0216 22:46:58.868786 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:58 crc kubenswrapper[4865]: I0216 22:46:58.868807 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:58Z","lastTransitionTime":"2026-02-16T22:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:58 crc kubenswrapper[4865]: I0216 22:46:58.971420 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:58 crc kubenswrapper[4865]: I0216 22:46:58.971505 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:58 crc kubenswrapper[4865]: I0216 22:46:58.971540 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:58 crc kubenswrapper[4865]: I0216 22:46:58.971569 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:58 crc kubenswrapper[4865]: I0216 22:46:58.971588 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:58Z","lastTransitionTime":"2026-02-16T22:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:59 crc kubenswrapper[4865]: I0216 22:46:59.074913 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:59 crc kubenswrapper[4865]: I0216 22:46:59.074987 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:59 crc kubenswrapper[4865]: I0216 22:46:59.075058 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:59 crc kubenswrapper[4865]: I0216 22:46:59.075088 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:59 crc kubenswrapper[4865]: I0216 22:46:59.075107 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:59Z","lastTransitionTime":"2026-02-16T22:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:59 crc kubenswrapper[4865]: I0216 22:46:59.178557 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:59 crc kubenswrapper[4865]: I0216 22:46:59.178627 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:59 crc kubenswrapper[4865]: I0216 22:46:59.178647 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:59 crc kubenswrapper[4865]: I0216 22:46:59.178673 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:59 crc kubenswrapper[4865]: I0216 22:46:59.178692 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:59Z","lastTransitionTime":"2026-02-16T22:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:59 crc kubenswrapper[4865]: I0216 22:46:59.281828 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:59 crc kubenswrapper[4865]: I0216 22:46:59.281894 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:59 crc kubenswrapper[4865]: I0216 22:46:59.281912 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:59 crc kubenswrapper[4865]: I0216 22:46:59.281936 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:59 crc kubenswrapper[4865]: I0216 22:46:59.281953 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:59Z","lastTransitionTime":"2026-02-16T22:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:59 crc kubenswrapper[4865]: I0216 22:46:59.385473 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:59 crc kubenswrapper[4865]: I0216 22:46:59.385543 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:59 crc kubenswrapper[4865]: I0216 22:46:59.385560 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:59 crc kubenswrapper[4865]: I0216 22:46:59.385589 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:59 crc kubenswrapper[4865]: I0216 22:46:59.385607 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:59Z","lastTransitionTime":"2026-02-16T22:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:59 crc kubenswrapper[4865]: I0216 22:46:59.414822 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 10:55:37.393577357 +0000 UTC Feb 16 22:46:59 crc kubenswrapper[4865]: I0216 22:46:59.488422 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:59 crc kubenswrapper[4865]: I0216 22:46:59.488487 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:59 crc kubenswrapper[4865]: I0216 22:46:59.488509 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:59 crc kubenswrapper[4865]: I0216 22:46:59.488538 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:59 crc kubenswrapper[4865]: I0216 22:46:59.488561 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:59Z","lastTransitionTime":"2026-02-16T22:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:59 crc kubenswrapper[4865]: I0216 22:46:59.591928 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:59 crc kubenswrapper[4865]: I0216 22:46:59.591998 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:59 crc kubenswrapper[4865]: I0216 22:46:59.592021 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:59 crc kubenswrapper[4865]: I0216 22:46:59.592050 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:59 crc kubenswrapper[4865]: I0216 22:46:59.592072 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:59Z","lastTransitionTime":"2026-02-16T22:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:59 crc kubenswrapper[4865]: I0216 22:46:59.695109 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:59 crc kubenswrapper[4865]: I0216 22:46:59.695182 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:59 crc kubenswrapper[4865]: I0216 22:46:59.695201 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:59 crc kubenswrapper[4865]: I0216 22:46:59.695228 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:59 crc kubenswrapper[4865]: I0216 22:46:59.695249 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:59Z","lastTransitionTime":"2026-02-16T22:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:59 crc kubenswrapper[4865]: I0216 22:46:59.799020 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:59 crc kubenswrapper[4865]: I0216 22:46:59.799085 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:59 crc kubenswrapper[4865]: I0216 22:46:59.799103 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:59 crc kubenswrapper[4865]: I0216 22:46:59.799127 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:59 crc kubenswrapper[4865]: I0216 22:46:59.799145 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:59Z","lastTransitionTime":"2026-02-16T22:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:46:59 crc kubenswrapper[4865]: I0216 22:46:59.902056 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:46:59 crc kubenswrapper[4865]: I0216 22:46:59.902106 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:46:59 crc kubenswrapper[4865]: I0216 22:46:59.902124 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:46:59 crc kubenswrapper[4865]: I0216 22:46:59.902149 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:46:59 crc kubenswrapper[4865]: I0216 22:46:59.902167 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:46:59Z","lastTransitionTime":"2026-02-16T22:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:00 crc kubenswrapper[4865]: I0216 22:47:00.004769 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:00 crc kubenswrapper[4865]: I0216 22:47:00.004808 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:00 crc kubenswrapper[4865]: I0216 22:47:00.004820 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:00 crc kubenswrapper[4865]: I0216 22:47:00.004837 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:00 crc kubenswrapper[4865]: I0216 22:47:00.004848 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:00Z","lastTransitionTime":"2026-02-16T22:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:00 crc kubenswrapper[4865]: I0216 22:47:00.107818 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:00 crc kubenswrapper[4865]: I0216 22:47:00.107930 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:00 crc kubenswrapper[4865]: I0216 22:47:00.108000 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:00 crc kubenswrapper[4865]: I0216 22:47:00.108041 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:00 crc kubenswrapper[4865]: I0216 22:47:00.108172 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:00Z","lastTransitionTime":"2026-02-16T22:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:00 crc kubenswrapper[4865]: I0216 22:47:00.211467 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:00 crc kubenswrapper[4865]: I0216 22:47:00.211511 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:00 crc kubenswrapper[4865]: I0216 22:47:00.211532 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:00 crc kubenswrapper[4865]: I0216 22:47:00.211555 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:00 crc kubenswrapper[4865]: I0216 22:47:00.211570 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:00Z","lastTransitionTime":"2026-02-16T22:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:00 crc kubenswrapper[4865]: I0216 22:47:00.314740 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:00 crc kubenswrapper[4865]: I0216 22:47:00.314802 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:00 crc kubenswrapper[4865]: I0216 22:47:00.314824 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:00 crc kubenswrapper[4865]: I0216 22:47:00.314851 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:00 crc kubenswrapper[4865]: I0216 22:47:00.314873 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:00Z","lastTransitionTime":"2026-02-16T22:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:47:00 crc kubenswrapper[4865]: I0216 22:47:00.414197 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 22:47:00 crc kubenswrapper[4865]: E0216 22:47:00.419990 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 22:47:00 crc kubenswrapper[4865]: I0216 22:47:00.414393 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 22:47:00 crc kubenswrapper[4865]: E0216 22:47:00.420265 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 22:47:00 crc kubenswrapper[4865]: I0216 22:47:00.419416 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 06:27:14.124156028 +0000 UTC Feb 16 22:47:00 crc kubenswrapper[4865]: I0216 22:47:00.414233 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 22:47:00 crc kubenswrapper[4865]: I0216 22:47:00.414623 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ggbcr" Feb 16 22:47:00 crc kubenswrapper[4865]: E0216 22:47:00.420584 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 22:47:00 crc kubenswrapper[4865]: E0216 22:47:00.422086 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ggbcr" podUID="0e0ca52e-7cb6-4d90-8d0b-4124cce13447" Feb 16 22:47:00 crc kubenswrapper[4865]: I0216 22:47:00.423385 4865 scope.go:117] "RemoveContainer" containerID="91a79105df66a02db4d12994894da106dc07acf1fd46d19a8bd6506533d047c1" Feb 16 22:47:00 crc kubenswrapper[4865]: E0216 22:47:00.425043 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-v9gjl_openshift-ovn-kubernetes(2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9)\"" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" podUID="2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" Feb 16 22:47:00 crc kubenswrapper[4865]: I0216 22:47:00.430370 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:00 crc kubenswrapper[4865]: I0216 22:47:00.430488 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:00 crc kubenswrapper[4865]: I0216 22:47:00.430519 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:00 crc kubenswrapper[4865]: I0216 22:47:00.430555 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:00 crc kubenswrapper[4865]: I0216 22:47:00.430641 4865 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:00Z","lastTransitionTime":"2026-02-16T22:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:47:00 crc kubenswrapper[4865]: I0216 22:47:00.442489 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tqmsq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"518e6107-6873-4bd2-86a6-e422763483ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c555b648518de93a1b9ecb1bb8b232aee9016dd148db53b2b4aaf96dd23951c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwt79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
6-02-16T22:46:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tqmsq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:00Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:00 crc kubenswrapper[4865]: I0216 22:47:00.462557 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-shf2n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d979a250-e586-4f45-b78e-ce99dbdbe9a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c13351b5aadf20e7142d197d9ae8333bff29ca6c3f7668c93f0dd510b52eb51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df29cc453707021c661a9f53e979fcab4c05a00e4dcef9f85a727168f1e78ed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-shf2n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:00Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:00 crc kubenswrapper[4865]: I0216 22:47:00.479174 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ggbcr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e0ca52e-7cb6-4d90-8d0b-4124cce13447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt54\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt54\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ggbcr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:00Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:00 crc 
kubenswrapper[4865]: I0216 22:47:00.500568 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://580bc5f5c63d040c31c80d1c34c70711c035b80d8bd34546ba77e2477a938304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:00Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:00 crc kubenswrapper[4865]: I0216 22:47:00.520731 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:00Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:00 crc kubenswrapper[4865]: I0216 22:47:00.535075 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:00 crc kubenswrapper[4865]: I0216 22:47:00.535205 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:00 crc kubenswrapper[4865]: I0216 22:47:00.535225 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:00 crc kubenswrapper[4865]: I0216 22:47:00.535251 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:00 crc kubenswrapper[4865]: I0216 22:47:00.535269 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:00Z","lastTransitionTime":"2026-02-16T22:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:47:00 crc kubenswrapper[4865]: I0216 22:47:00.543457 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:00Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:00 crc kubenswrapper[4865]: I0216 22:47:00.565187 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5ed4f9f-4e35-4b99-a523-d9cd6f90b1fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c3c8e1511bfaab96c1e33d3e1d1dc227344cb92c64170dc40eef40341f9282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef4fbb630ee354131544dade7633424757a1a1b885fec2e271da25984dbec3ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f74aeafd4eb42c1fa802e03dd3df19fc25c3e5546e453008c167955d336ca2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca218b91fbbe3033c021358958b45efb93f3c1875a2d18446d682ad80ff55122\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:00Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:00 crc kubenswrapper[4865]: I0216 22:47:00.585208 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5d9c6929ae494c038ff488c2fee47aad62d31a677069fab576505bc1c56c3ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd10e8581ae8a5ec147d964363cf296c1ea48987e27edfaaf23a4f2cc3b8806c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:00Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:00 crc kubenswrapper[4865]: I0216 22:47:00.602304 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p2bwl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fc94743-41ce-4311-b0a7-d24aec69e9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afb2eef5907b2fb1f43b59f3b8afb68253d49f3a83f0cdc5cd10df0e45962156\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mntqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p2bwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:00Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:00 crc kubenswrapper[4865]: I0216 22:47:00.620459 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67de42bb03576bab461ec24db86f0fb38229d2b303d174c1892468bcfd20a1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T2
2:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:00Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:00 crc kubenswrapper[4865]: I0216 22:47:00.638522 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:00 crc kubenswrapper[4865]: I0216 22:47:00.638605 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:00 crc kubenswrapper[4865]: I0216 22:47:00.638648 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:00 crc kubenswrapper[4865]: I0216 22:47:00.638682 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:00 crc kubenswrapper[4865]: I0216 22:47:00.638655 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af5ee041-5763-4a28-9d12-7ba21bbb9dbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5377362d269a2fbd24e93ddf0ce2803daadff72dd5b6926424e2d818448eb907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c0be88c4425b343893860e323d0e9c1a661e3cd
0cadad6714da67ebadf57bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7sl6f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:00Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:00 crc kubenswrapper[4865]: I0216 22:47:00.638704 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:00Z","lastTransitionTime":"2026-02-16T22:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:00 crc kubenswrapper[4865]: I0216 22:47:00.665099 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4rgl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0091174cc072ed0bc0bfa1a1501b4fa2b801585525dae159b591213c74ddbf5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240fca5a6169674d04d04924dbb1859d9cfad4ebaf3bcb15b685d1dcb52d6879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://240fca5a6169674d04d04924dbb1859d9cfad4ebaf3bcb15b685d1dcb52d6879\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba2a6ff3b821f9bd20c15af6000c4b04b7382cca4bddba49c3cd3a95bf18661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://5ba2a6ff3b821f9bd20c15af6000c4b04b7382cca4bddba49c3cd3a95bf18661\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e050e591eac20bf9ccc57f3c4eac82e3a420e9e850ad4057ce69c236a51dd5ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e050e591eac20bf9ccc57f3c4eac82e3a420e9e850ad4057ce69c236a51dd5ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df6fbcf30225717389e253f415a98dcb258103cf591c6802daeb657018386ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1df6fbcf30225717389e253f415a98dcb258103cf591c6802daeb657018386ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9e249b0c4127dfa42d0aeb4663cec7ecc73cec31a922fc17a0c772cce72c017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9e249b0c4127dfa42d0aeb4663cec7ecc73cec31a922fc17a0c772cce72c017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de26abfeb26481d43117471fa3512ef69ab782712371782639c24c8232786478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de26abfeb26481d43117471fa3512ef69ab782712371782639c24c8232786478\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4rgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:00Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:00 crc kubenswrapper[4865]: I0216 22:47:00.683713 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rhf5k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec3dce5a-cc80-4a4a-afa1-1ce88585fe7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af23c5919f064a067e1f6ffd9bcfc669c95f789fecc58b6de768c838fff8eeb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42pjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rhf5k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:00Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:00 crc kubenswrapper[4865]: I0216 22:47:00.722521 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f763f86f21e0ae26d6da8694383bf65bf9d2ee2cc3f6331559455f18c1a5ea12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdf0a2e5286d172d17780b0ba829a7b1365b3fcb130069d7497b207041e54d67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1d5f61857de97cd302040f99ba0cf8f6aa30fa344ee9bb5d486222bb59280f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48168de44ee347280252a85da7abba068922ce7558bd01c8e1d11af13f305dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fbb61043059110013736d3d693e6d602b199a4c62907436a4eb88aae192cd51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://decf362b373d3e8cc82caf2afe0405a61ec72f794680345ee6bd4d7acee32cd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91a79105df66a02db4d12994894da106dc07acf1fd46d19a8bd6506533d047c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91a79105df66a02db4d12994894da106dc07acf1fd46d19a8bd6506533d047c1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T22:46:48Z\\\",\\\"message\\\":\\\"sip/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 22:46:48.478194 6528 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 22:46:48.478230 6528 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 22:46:48.478257 6528 handler.go:190] Sending *v1.EgressFirewall event handler 9 for 
removal\\\\nI0216 22:46:48.478372 6528 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 22:46:48.478404 6528 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 22:46:48.478405 6528 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 22:46:48.478434 6528 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 22:46:48.478457 6528 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 22:46:48.478467 6528 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 22:46:48.478500 6528 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 22:46:48.478537 6528 factory.go:656] Stopping watch factory\\\\nI0216 22:46:48.478556 6528 ovnkube.go:599] Stopped ovnkube\\\\nI0216 22:46:48.478584 6528 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 22:46:48.478599 6528 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0216 22:46:48.478625 6528 handler.go:208] Removed *v1.Node event ha\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-v9gjl_openshift-ovn-kubernetes(2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dee85f35d78a0b30018c179a53dd39cbb1b0edfa9897290cd13fdc9680461239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fffded71eee2859beb8f01894e183602492f5c41373ef0e23ee150159d01ae4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fffded71eee2859be
b8f01894e183602492f5c41373ef0e23ee150159d01ae4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v9gjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:00Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:00 crc kubenswrapper[4865]: I0216 22:47:00.743363 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:00 crc kubenswrapper[4865]: I0216 22:47:00.743897 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:00 crc kubenswrapper[4865]: I0216 22:47:00.744055 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:00 crc kubenswrapper[4865]: I0216 22:47:00.744209 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:00 crc kubenswrapper[4865]: I0216 22:47:00.744239 4865 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:00Z","lastTransitionTime":"2026-02-16T22:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:47:00 crc kubenswrapper[4865]: I0216 22:47:00.745667 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8eb70ba-80dc-4449-b629-8f2d3462ac25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9420ab4fc2cfb723f98690b12676378560f2ca512ea2f23ea4108561bcf0a83b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d5f3b449d7f497dd16213f7a376d725ed75adaeb27449fa2005893b85cd8f9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ddcd1ac2d814341fef78f1d3dcb8d8e04d5e8506ca793c023085f0e019fc44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d
1932a518fb9c7c514939c9efdb9f5ef9b52fa277bf27931a2fe29f04c486141\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e74a443b63735422bd0f5ad851213f393074c4bda570e41ff102c84aa9ac2929\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 22:46:20.218565 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 22:46:20.218864 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 22:46:20.220590 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3068580495/tls.crt::/tmp/serving-cert-3068580495/tls.key\\\\\\\"\\\\nI0216 22:46:20.677081 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 22:46:20.691270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 22:46:20.691373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 22:46:20.698439 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 22:46:20.698472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 22:46:20.713488 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 22:46:20.713510 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 22:46:20.713535 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 22:46:20.713543 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 22:46:20.713547 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 22:46:20.713551 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 22:46:20.713555 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 22:46:20.713558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 22:46:20.715166 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://083e95882332fbcae3b7e3c07a19c05645e560b7c5e92813464a5ca4e5e52a26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://e3e53d3f26f921be607e616b7c8790e3be0605a2b94a3f7be2b62e1beaefb350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3e53d3f26f921be607e616b7c8790e3be0605a2b94a3f7be2b62e1beaefb350\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:00Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:00 crc kubenswrapper[4865]: I0216 22:47:00.760258 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17957dad-fc4e-47f6-9008-142bf7ab12b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://416e6ac61265959d9c4895987503c390040a6d4ecdf12c0530ded6fc721bfaa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be54b2a9b6551641dce2e8e886f04d8a9cf76399a46d6ba0533bfcf766453631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1757b170055a16e65276f524bda421fa929e62866ebe19163d6558e551f6d7ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3488da6d63933ae020077a4fb277b53e4e85bd75cac43d6c64bb962472a832f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://a3488da6d63933ae020077a4fb277b53e4e85bd75cac43d6c64bb962472a832f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:00Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:00 crc kubenswrapper[4865]: I0216 22:47:00.780195 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:00Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:00 crc kubenswrapper[4865]: I0216 22:47:00.847445 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:00 crc kubenswrapper[4865]: I0216 22:47:00.848013 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:00 crc kubenswrapper[4865]: I0216 22:47:00.848185 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:00 crc kubenswrapper[4865]: I0216 
22:47:00.848510 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:00 crc kubenswrapper[4865]: I0216 22:47:00.848762 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:00Z","lastTransitionTime":"2026-02-16T22:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:47:00 crc kubenswrapper[4865]: I0216 22:47:00.952235 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:00 crc kubenswrapper[4865]: I0216 22:47:00.952815 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:00 crc kubenswrapper[4865]: I0216 22:47:00.953035 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:00 crc kubenswrapper[4865]: I0216 22:47:00.953194 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:00 crc kubenswrapper[4865]: I0216 22:47:00.953477 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:00Z","lastTransitionTime":"2026-02-16T22:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:01 crc kubenswrapper[4865]: I0216 22:47:01.056588 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:01 crc kubenswrapper[4865]: I0216 22:47:01.056666 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:01 crc kubenswrapper[4865]: I0216 22:47:01.056691 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:01 crc kubenswrapper[4865]: I0216 22:47:01.056723 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:01 crc kubenswrapper[4865]: I0216 22:47:01.056743 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:01Z","lastTransitionTime":"2026-02-16T22:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:01 crc kubenswrapper[4865]: I0216 22:47:01.165650 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:01 crc kubenswrapper[4865]: I0216 22:47:01.166169 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:01 crc kubenswrapper[4865]: I0216 22:47:01.166413 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:01 crc kubenswrapper[4865]: I0216 22:47:01.166650 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:01 crc kubenswrapper[4865]: I0216 22:47:01.166850 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:01Z","lastTransitionTime":"2026-02-16T22:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:01 crc kubenswrapper[4865]: I0216 22:47:01.270763 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:01 crc kubenswrapper[4865]: I0216 22:47:01.270850 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:01 crc kubenswrapper[4865]: I0216 22:47:01.270867 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:01 crc kubenswrapper[4865]: I0216 22:47:01.271147 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:01 crc kubenswrapper[4865]: I0216 22:47:01.271182 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:01Z","lastTransitionTime":"2026-02-16T22:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:01 crc kubenswrapper[4865]: I0216 22:47:01.374049 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:01 crc kubenswrapper[4865]: I0216 22:47:01.374123 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:01 crc kubenswrapper[4865]: I0216 22:47:01.374142 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:01 crc kubenswrapper[4865]: I0216 22:47:01.374172 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:01 crc kubenswrapper[4865]: I0216 22:47:01.374190 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:01Z","lastTransitionTime":"2026-02-16T22:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:01 crc kubenswrapper[4865]: I0216 22:47:01.421379 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 14:52:33.388557108 +0000 UTC Feb 16 22:47:01 crc kubenswrapper[4865]: I0216 22:47:01.477964 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:01 crc kubenswrapper[4865]: I0216 22:47:01.478035 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:01 crc kubenswrapper[4865]: I0216 22:47:01.478056 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:01 crc kubenswrapper[4865]: I0216 22:47:01.478086 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:01 crc kubenswrapper[4865]: I0216 22:47:01.478107 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:01Z","lastTransitionTime":"2026-02-16T22:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:01 crc kubenswrapper[4865]: I0216 22:47:01.581557 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:01 crc kubenswrapper[4865]: I0216 22:47:01.581628 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:01 crc kubenswrapper[4865]: I0216 22:47:01.581646 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:01 crc kubenswrapper[4865]: I0216 22:47:01.581676 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:01 crc kubenswrapper[4865]: I0216 22:47:01.581694 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:01Z","lastTransitionTime":"2026-02-16T22:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:01 crc kubenswrapper[4865]: I0216 22:47:01.719657 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:01 crc kubenswrapper[4865]: I0216 22:47:01.719751 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:01 crc kubenswrapper[4865]: I0216 22:47:01.719776 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:01 crc kubenswrapper[4865]: I0216 22:47:01.719808 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:01 crc kubenswrapper[4865]: I0216 22:47:01.719832 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:01Z","lastTransitionTime":"2026-02-16T22:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:01 crc kubenswrapper[4865]: I0216 22:47:01.822466 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:01 crc kubenswrapper[4865]: I0216 22:47:01.822535 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:01 crc kubenswrapper[4865]: I0216 22:47:01.822558 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:01 crc kubenswrapper[4865]: I0216 22:47:01.822589 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:01 crc kubenswrapper[4865]: I0216 22:47:01.822612 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:01Z","lastTransitionTime":"2026-02-16T22:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:01 crc kubenswrapper[4865]: I0216 22:47:01.925836 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:01 crc kubenswrapper[4865]: I0216 22:47:01.925901 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:01 crc kubenswrapper[4865]: I0216 22:47:01.925920 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:01 crc kubenswrapper[4865]: I0216 22:47:01.925945 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:01 crc kubenswrapper[4865]: I0216 22:47:01.925962 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:01Z","lastTransitionTime":"2026-02-16T22:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:02 crc kubenswrapper[4865]: I0216 22:47:02.029484 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:02 crc kubenswrapper[4865]: I0216 22:47:02.029567 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:02 crc kubenswrapper[4865]: I0216 22:47:02.029586 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:02 crc kubenswrapper[4865]: I0216 22:47:02.029612 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:02 crc kubenswrapper[4865]: I0216 22:47:02.029634 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:02Z","lastTransitionTime":"2026-02-16T22:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:02 crc kubenswrapper[4865]: I0216 22:47:02.132683 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:02 crc kubenswrapper[4865]: I0216 22:47:02.132761 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:02 crc kubenswrapper[4865]: I0216 22:47:02.132787 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:02 crc kubenswrapper[4865]: I0216 22:47:02.132824 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:02 crc kubenswrapper[4865]: I0216 22:47:02.132843 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:02Z","lastTransitionTime":"2026-02-16T22:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:02 crc kubenswrapper[4865]: I0216 22:47:02.236351 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:02 crc kubenswrapper[4865]: I0216 22:47:02.236406 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:02 crc kubenswrapper[4865]: I0216 22:47:02.236425 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:02 crc kubenswrapper[4865]: I0216 22:47:02.236451 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:02 crc kubenswrapper[4865]: I0216 22:47:02.236468 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:02Z","lastTransitionTime":"2026-02-16T22:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:02 crc kubenswrapper[4865]: I0216 22:47:02.339611 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:02 crc kubenswrapper[4865]: I0216 22:47:02.339676 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:02 crc kubenswrapper[4865]: I0216 22:47:02.339693 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:02 crc kubenswrapper[4865]: I0216 22:47:02.339720 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:02 crc kubenswrapper[4865]: I0216 22:47:02.339738 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:02Z","lastTransitionTime":"2026-02-16T22:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:47:02 crc kubenswrapper[4865]: I0216 22:47:02.413945 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 22:47:02 crc kubenswrapper[4865]: I0216 22:47:02.414010 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ggbcr" Feb 16 22:47:02 crc kubenswrapper[4865]: I0216 22:47:02.414010 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 22:47:02 crc kubenswrapper[4865]: I0216 22:47:02.414175 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 22:47:02 crc kubenswrapper[4865]: E0216 22:47:02.414184 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 22:47:02 crc kubenswrapper[4865]: E0216 22:47:02.414408 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 22:47:02 crc kubenswrapper[4865]: E0216 22:47:02.414508 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 22:47:02 crc kubenswrapper[4865]: E0216 22:47:02.414665 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ggbcr" podUID="0e0ca52e-7cb6-4d90-8d0b-4124cce13447" Feb 16 22:47:02 crc kubenswrapper[4865]: I0216 22:47:02.421927 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 04:18:35.913501923 +0000 UTC Feb 16 22:47:02 crc kubenswrapper[4865]: I0216 22:47:02.442918 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:02 crc kubenswrapper[4865]: I0216 22:47:02.443041 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:02 crc kubenswrapper[4865]: I0216 22:47:02.443062 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:02 crc kubenswrapper[4865]: I0216 22:47:02.443123 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:02 crc kubenswrapper[4865]: I0216 22:47:02.443142 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:02Z","lastTransitionTime":"2026-02-16T22:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:02 crc kubenswrapper[4865]: I0216 22:47:02.546955 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:02 crc kubenswrapper[4865]: I0216 22:47:02.547049 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:02 crc kubenswrapper[4865]: I0216 22:47:02.547071 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:02 crc kubenswrapper[4865]: I0216 22:47:02.547099 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:02 crc kubenswrapper[4865]: I0216 22:47:02.547165 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:02Z","lastTransitionTime":"2026-02-16T22:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:02 crc kubenswrapper[4865]: I0216 22:47:02.650634 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:02 crc kubenswrapper[4865]: I0216 22:47:02.650713 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:02 crc kubenswrapper[4865]: I0216 22:47:02.650735 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:02 crc kubenswrapper[4865]: I0216 22:47:02.650768 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:02 crc kubenswrapper[4865]: I0216 22:47:02.650791 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:02Z","lastTransitionTime":"2026-02-16T22:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:02 crc kubenswrapper[4865]: I0216 22:47:02.754686 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:02 crc kubenswrapper[4865]: I0216 22:47:02.754778 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:02 crc kubenswrapper[4865]: I0216 22:47:02.754796 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:02 crc kubenswrapper[4865]: I0216 22:47:02.754882 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:02 crc kubenswrapper[4865]: I0216 22:47:02.754900 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:02Z","lastTransitionTime":"2026-02-16T22:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:02 crc kubenswrapper[4865]: I0216 22:47:02.857772 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:02 crc kubenswrapper[4865]: I0216 22:47:02.857858 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:02 crc kubenswrapper[4865]: I0216 22:47:02.857878 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:02 crc kubenswrapper[4865]: I0216 22:47:02.857903 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:02 crc kubenswrapper[4865]: I0216 22:47:02.857950 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:02Z","lastTransitionTime":"2026-02-16T22:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:02 crc kubenswrapper[4865]: I0216 22:47:02.962065 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:02 crc kubenswrapper[4865]: I0216 22:47:02.962132 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:02 crc kubenswrapper[4865]: I0216 22:47:02.962150 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:02 crc kubenswrapper[4865]: I0216 22:47:02.962182 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:02 crc kubenswrapper[4865]: I0216 22:47:02.962200 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:02Z","lastTransitionTime":"2026-02-16T22:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:03 crc kubenswrapper[4865]: I0216 22:47:03.065327 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:03 crc kubenswrapper[4865]: I0216 22:47:03.065395 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:03 crc kubenswrapper[4865]: I0216 22:47:03.065421 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:03 crc kubenswrapper[4865]: I0216 22:47:03.065456 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:03 crc kubenswrapper[4865]: I0216 22:47:03.065480 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:03Z","lastTransitionTime":"2026-02-16T22:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:03 crc kubenswrapper[4865]: I0216 22:47:03.168321 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:03 crc kubenswrapper[4865]: I0216 22:47:03.168384 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:03 crc kubenswrapper[4865]: I0216 22:47:03.168402 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:03 crc kubenswrapper[4865]: I0216 22:47:03.168428 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:03 crc kubenswrapper[4865]: I0216 22:47:03.168446 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:03Z","lastTransitionTime":"2026-02-16T22:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:03 crc kubenswrapper[4865]: I0216 22:47:03.270857 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:03 crc kubenswrapper[4865]: I0216 22:47:03.270913 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:03 crc kubenswrapper[4865]: I0216 22:47:03.270933 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:03 crc kubenswrapper[4865]: I0216 22:47:03.270959 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:03 crc kubenswrapper[4865]: I0216 22:47:03.270979 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:03Z","lastTransitionTime":"2026-02-16T22:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:03 crc kubenswrapper[4865]: I0216 22:47:03.374650 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:03 crc kubenswrapper[4865]: I0216 22:47:03.374709 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:03 crc kubenswrapper[4865]: I0216 22:47:03.374718 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:03 crc kubenswrapper[4865]: I0216 22:47:03.374736 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:03 crc kubenswrapper[4865]: I0216 22:47:03.374745 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:03Z","lastTransitionTime":"2026-02-16T22:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:03 crc kubenswrapper[4865]: I0216 22:47:03.422984 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 09:10:10.981525045 +0000 UTC Feb 16 22:47:03 crc kubenswrapper[4865]: I0216 22:47:03.477939 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:03 crc kubenswrapper[4865]: I0216 22:47:03.478004 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:03 crc kubenswrapper[4865]: I0216 22:47:03.478023 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:03 crc kubenswrapper[4865]: I0216 22:47:03.478050 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:03 crc kubenswrapper[4865]: I0216 22:47:03.478072 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:03Z","lastTransitionTime":"2026-02-16T22:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:03 crc kubenswrapper[4865]: I0216 22:47:03.580994 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:03 crc kubenswrapper[4865]: I0216 22:47:03.581055 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:03 crc kubenswrapper[4865]: I0216 22:47:03.581072 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:03 crc kubenswrapper[4865]: I0216 22:47:03.581100 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:03 crc kubenswrapper[4865]: I0216 22:47:03.581117 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:03Z","lastTransitionTime":"2026-02-16T22:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:03 crc kubenswrapper[4865]: I0216 22:47:03.684747 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:03 crc kubenswrapper[4865]: I0216 22:47:03.684822 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:03 crc kubenswrapper[4865]: I0216 22:47:03.684848 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:03 crc kubenswrapper[4865]: I0216 22:47:03.684881 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:03 crc kubenswrapper[4865]: I0216 22:47:03.684903 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:03Z","lastTransitionTime":"2026-02-16T22:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:03 crc kubenswrapper[4865]: I0216 22:47:03.788440 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:03 crc kubenswrapper[4865]: I0216 22:47:03.788488 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:03 crc kubenswrapper[4865]: I0216 22:47:03.788500 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:03 crc kubenswrapper[4865]: I0216 22:47:03.788517 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:03 crc kubenswrapper[4865]: I0216 22:47:03.788527 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:03Z","lastTransitionTime":"2026-02-16T22:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:03 crc kubenswrapper[4865]: I0216 22:47:03.891388 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:03 crc kubenswrapper[4865]: I0216 22:47:03.891474 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:03 crc kubenswrapper[4865]: I0216 22:47:03.891498 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:03 crc kubenswrapper[4865]: I0216 22:47:03.891532 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:03 crc kubenswrapper[4865]: I0216 22:47:03.891554 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:03Z","lastTransitionTime":"2026-02-16T22:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:03 crc kubenswrapper[4865]: I0216 22:47:03.994194 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:03 crc kubenswrapper[4865]: I0216 22:47:03.994323 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:03 crc kubenswrapper[4865]: I0216 22:47:03.994343 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:03 crc kubenswrapper[4865]: I0216 22:47:03.994371 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:03 crc kubenswrapper[4865]: I0216 22:47:03.994392 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:03Z","lastTransitionTime":"2026-02-16T22:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:04 crc kubenswrapper[4865]: I0216 22:47:04.097454 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:04 crc kubenswrapper[4865]: I0216 22:47:04.097522 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:04 crc kubenswrapper[4865]: I0216 22:47:04.097546 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:04 crc kubenswrapper[4865]: I0216 22:47:04.097573 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:04 crc kubenswrapper[4865]: I0216 22:47:04.097590 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:04Z","lastTransitionTime":"2026-02-16T22:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:04 crc kubenswrapper[4865]: I0216 22:47:04.200244 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:04 crc kubenswrapper[4865]: I0216 22:47:04.200329 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:04 crc kubenswrapper[4865]: I0216 22:47:04.200348 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:04 crc kubenswrapper[4865]: I0216 22:47:04.200372 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:04 crc kubenswrapper[4865]: I0216 22:47:04.200391 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:04Z","lastTransitionTime":"2026-02-16T22:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:04 crc kubenswrapper[4865]: I0216 22:47:04.303230 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:04 crc kubenswrapper[4865]: I0216 22:47:04.303353 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:04 crc kubenswrapper[4865]: I0216 22:47:04.303378 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:04 crc kubenswrapper[4865]: I0216 22:47:04.303412 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:04 crc kubenswrapper[4865]: I0216 22:47:04.303454 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:04Z","lastTransitionTime":"2026-02-16T22:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:04 crc kubenswrapper[4865]: I0216 22:47:04.406798 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:04 crc kubenswrapper[4865]: I0216 22:47:04.406865 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:04 crc kubenswrapper[4865]: I0216 22:47:04.406887 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:04 crc kubenswrapper[4865]: I0216 22:47:04.406920 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:04 crc kubenswrapper[4865]: I0216 22:47:04.406949 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:04Z","lastTransitionTime":"2026-02-16T22:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:47:04 crc kubenswrapper[4865]: I0216 22:47:04.413920 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 22:47:04 crc kubenswrapper[4865]: E0216 22:47:04.414148 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 22:47:04 crc kubenswrapper[4865]: I0216 22:47:04.414510 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 22:47:04 crc kubenswrapper[4865]: I0216 22:47:04.414566 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 22:47:04 crc kubenswrapper[4865]: I0216 22:47:04.414671 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ggbcr" Feb 16 22:47:04 crc kubenswrapper[4865]: E0216 22:47:04.414869 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 22:47:04 crc kubenswrapper[4865]: E0216 22:47:04.415019 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 22:47:04 crc kubenswrapper[4865]: E0216 22:47:04.415159 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ggbcr" podUID="0e0ca52e-7cb6-4d90-8d0b-4124cce13447" Feb 16 22:47:04 crc kubenswrapper[4865]: I0216 22:47:04.423463 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 02:01:27.467809252 +0000 UTC Feb 16 22:47:04 crc kubenswrapper[4865]: I0216 22:47:04.509872 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:04 crc kubenswrapper[4865]: I0216 22:47:04.509917 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:04 crc kubenswrapper[4865]: I0216 22:47:04.509929 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:04 crc kubenswrapper[4865]: I0216 22:47:04.509948 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:04 crc kubenswrapper[4865]: I0216 22:47:04.509960 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:04Z","lastTransitionTime":"2026-02-16T22:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:04 crc kubenswrapper[4865]: I0216 22:47:04.612919 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:04 crc kubenswrapper[4865]: I0216 22:47:04.612986 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:04 crc kubenswrapper[4865]: I0216 22:47:04.613008 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:04 crc kubenswrapper[4865]: I0216 22:47:04.613044 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:04 crc kubenswrapper[4865]: I0216 22:47:04.613066 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:04Z","lastTransitionTime":"2026-02-16T22:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:04 crc kubenswrapper[4865]: I0216 22:47:04.716022 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:04 crc kubenswrapper[4865]: I0216 22:47:04.716067 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:04 crc kubenswrapper[4865]: I0216 22:47:04.716079 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:04 crc kubenswrapper[4865]: I0216 22:47:04.716098 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:04 crc kubenswrapper[4865]: I0216 22:47:04.716113 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:04Z","lastTransitionTime":"2026-02-16T22:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:04 crc kubenswrapper[4865]: I0216 22:47:04.819240 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:04 crc kubenswrapper[4865]: I0216 22:47:04.819331 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:04 crc kubenswrapper[4865]: I0216 22:47:04.819349 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:04 crc kubenswrapper[4865]: I0216 22:47:04.819377 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:04 crc kubenswrapper[4865]: I0216 22:47:04.819395 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:04Z","lastTransitionTime":"2026-02-16T22:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:04 crc kubenswrapper[4865]: I0216 22:47:04.922066 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:04 crc kubenswrapper[4865]: I0216 22:47:04.922106 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:04 crc kubenswrapper[4865]: I0216 22:47:04.922121 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:04 crc kubenswrapper[4865]: I0216 22:47:04.922142 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:04 crc kubenswrapper[4865]: I0216 22:47:04.922160 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:04Z","lastTransitionTime":"2026-02-16T22:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:05 crc kubenswrapper[4865]: I0216 22:47:05.025672 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:05 crc kubenswrapper[4865]: I0216 22:47:05.025798 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:05 crc kubenswrapper[4865]: I0216 22:47:05.025824 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:05 crc kubenswrapper[4865]: I0216 22:47:05.025861 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:05 crc kubenswrapper[4865]: I0216 22:47:05.025883 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:05Z","lastTransitionTime":"2026-02-16T22:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:05 crc kubenswrapper[4865]: I0216 22:47:05.127844 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:05 crc kubenswrapper[4865]: I0216 22:47:05.128156 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:05 crc kubenswrapper[4865]: I0216 22:47:05.128227 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:05 crc kubenswrapper[4865]: I0216 22:47:05.128311 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:05 crc kubenswrapper[4865]: I0216 22:47:05.128404 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:05Z","lastTransitionTime":"2026-02-16T22:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:05 crc kubenswrapper[4865]: I0216 22:47:05.231234 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:05 crc kubenswrapper[4865]: I0216 22:47:05.231342 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:05 crc kubenswrapper[4865]: I0216 22:47:05.231366 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:05 crc kubenswrapper[4865]: I0216 22:47:05.231395 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:05 crc kubenswrapper[4865]: I0216 22:47:05.231416 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:05Z","lastTransitionTime":"2026-02-16T22:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:05 crc kubenswrapper[4865]: I0216 22:47:05.334996 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:05 crc kubenswrapper[4865]: I0216 22:47:05.335037 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:05 crc kubenswrapper[4865]: I0216 22:47:05.335046 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:05 crc kubenswrapper[4865]: I0216 22:47:05.335066 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:05 crc kubenswrapper[4865]: I0216 22:47:05.335076 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:05Z","lastTransitionTime":"2026-02-16T22:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:05 crc kubenswrapper[4865]: I0216 22:47:05.423932 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 23:48:17.4408361 +0000 UTC Feb 16 22:47:05 crc kubenswrapper[4865]: I0216 22:47:05.439039 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:05 crc kubenswrapper[4865]: I0216 22:47:05.439108 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:05 crc kubenswrapper[4865]: I0216 22:47:05.439126 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:05 crc kubenswrapper[4865]: I0216 22:47:05.439157 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:05 crc kubenswrapper[4865]: I0216 22:47:05.439178 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:05Z","lastTransitionTime":"2026-02-16T22:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:05 crc kubenswrapper[4865]: I0216 22:47:05.541384 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:05 crc kubenswrapper[4865]: I0216 22:47:05.541761 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:05 crc kubenswrapper[4865]: I0216 22:47:05.541898 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:05 crc kubenswrapper[4865]: I0216 22:47:05.542030 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:05 crc kubenswrapper[4865]: I0216 22:47:05.542151 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:05Z","lastTransitionTime":"2026-02-16T22:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:05 crc kubenswrapper[4865]: I0216 22:47:05.645582 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:05 crc kubenswrapper[4865]: I0216 22:47:05.645647 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:05 crc kubenswrapper[4865]: I0216 22:47:05.645668 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:05 crc kubenswrapper[4865]: I0216 22:47:05.645693 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:05 crc kubenswrapper[4865]: I0216 22:47:05.645710 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:05Z","lastTransitionTime":"2026-02-16T22:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:05 crc kubenswrapper[4865]: I0216 22:47:05.747993 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:05 crc kubenswrapper[4865]: I0216 22:47:05.748041 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:05 crc kubenswrapper[4865]: I0216 22:47:05.748053 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:05 crc kubenswrapper[4865]: I0216 22:47:05.748074 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:05 crc kubenswrapper[4865]: I0216 22:47:05.748086 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:05Z","lastTransitionTime":"2026-02-16T22:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:05 crc kubenswrapper[4865]: I0216 22:47:05.850521 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:05 crc kubenswrapper[4865]: I0216 22:47:05.850559 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:05 crc kubenswrapper[4865]: I0216 22:47:05.850567 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:05 crc kubenswrapper[4865]: I0216 22:47:05.850582 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:05 crc kubenswrapper[4865]: I0216 22:47:05.850593 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:05Z","lastTransitionTime":"2026-02-16T22:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:05 crc kubenswrapper[4865]: I0216 22:47:05.953269 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:05 crc kubenswrapper[4865]: I0216 22:47:05.953350 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:05 crc kubenswrapper[4865]: I0216 22:47:05.953362 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:05 crc kubenswrapper[4865]: I0216 22:47:05.953383 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:05 crc kubenswrapper[4865]: I0216 22:47:05.953395 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:05Z","lastTransitionTime":"2026-02-16T22:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:06 crc kubenswrapper[4865]: I0216 22:47:06.056344 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:06 crc kubenswrapper[4865]: I0216 22:47:06.056418 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:06 crc kubenswrapper[4865]: I0216 22:47:06.056444 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:06 crc kubenswrapper[4865]: I0216 22:47:06.056478 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:06 crc kubenswrapper[4865]: I0216 22:47:06.056506 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:06Z","lastTransitionTime":"2026-02-16T22:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:06 crc kubenswrapper[4865]: I0216 22:47:06.162389 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:06 crc kubenswrapper[4865]: I0216 22:47:06.162449 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:06 crc kubenswrapper[4865]: I0216 22:47:06.162461 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:06 crc kubenswrapper[4865]: I0216 22:47:06.162483 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:06 crc kubenswrapper[4865]: I0216 22:47:06.162505 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:06Z","lastTransitionTime":"2026-02-16T22:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:06 crc kubenswrapper[4865]: I0216 22:47:06.265433 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:06 crc kubenswrapper[4865]: I0216 22:47:06.265498 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:06 crc kubenswrapper[4865]: I0216 22:47:06.265510 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:06 crc kubenswrapper[4865]: I0216 22:47:06.265531 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:06 crc kubenswrapper[4865]: I0216 22:47:06.265546 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:06Z","lastTransitionTime":"2026-02-16T22:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:06 crc kubenswrapper[4865]: I0216 22:47:06.368704 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:06 crc kubenswrapper[4865]: I0216 22:47:06.368782 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:06 crc kubenswrapper[4865]: I0216 22:47:06.368805 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:06 crc kubenswrapper[4865]: I0216 22:47:06.368836 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:06 crc kubenswrapper[4865]: I0216 22:47:06.368857 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:06Z","lastTransitionTime":"2026-02-16T22:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:47:06 crc kubenswrapper[4865]: I0216 22:47:06.414526 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 22:47:06 crc kubenswrapper[4865]: E0216 22:47:06.414996 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 22:47:06 crc kubenswrapper[4865]: I0216 22:47:06.415364 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 22:47:06 crc kubenswrapper[4865]: E0216 22:47:06.415431 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 22:47:06 crc kubenswrapper[4865]: I0216 22:47:06.415517 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 22:47:06 crc kubenswrapper[4865]: I0216 22:47:06.415557 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ggbcr" Feb 16 22:47:06 crc kubenswrapper[4865]: E0216 22:47:06.415621 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ggbcr" podUID="0e0ca52e-7cb6-4d90-8d0b-4124cce13447" Feb 16 22:47:06 crc kubenswrapper[4865]: E0216 22:47:06.415750 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 22:47:06 crc kubenswrapper[4865]: I0216 22:47:06.424802 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 19:38:35.511858316 +0000 UTC Feb 16 22:47:06 crc kubenswrapper[4865]: I0216 22:47:06.471631 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:06 crc kubenswrapper[4865]: I0216 22:47:06.471704 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:06 crc kubenswrapper[4865]: I0216 22:47:06.471723 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:06 crc kubenswrapper[4865]: I0216 22:47:06.471752 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:06 crc kubenswrapper[4865]: I0216 22:47:06.471775 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:06Z","lastTransitionTime":"2026-02-16T22:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:06 crc kubenswrapper[4865]: I0216 22:47:06.574299 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:06 crc kubenswrapper[4865]: I0216 22:47:06.574365 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:06 crc kubenswrapper[4865]: I0216 22:47:06.574398 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:06 crc kubenswrapper[4865]: I0216 22:47:06.574416 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:06 crc kubenswrapper[4865]: I0216 22:47:06.574429 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:06Z","lastTransitionTime":"2026-02-16T22:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:06 crc kubenswrapper[4865]: I0216 22:47:06.677686 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:06 crc kubenswrapper[4865]: I0216 22:47:06.677740 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:06 crc kubenswrapper[4865]: I0216 22:47:06.677754 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:06 crc kubenswrapper[4865]: I0216 22:47:06.677775 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:06 crc kubenswrapper[4865]: I0216 22:47:06.677789 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:06Z","lastTransitionTime":"2026-02-16T22:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:06 crc kubenswrapper[4865]: I0216 22:47:06.683570 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0e0ca52e-7cb6-4d90-8d0b-4124cce13447-metrics-certs\") pod \"network-metrics-daemon-ggbcr\" (UID: \"0e0ca52e-7cb6-4d90-8d0b-4124cce13447\") " pod="openshift-multus/network-metrics-daemon-ggbcr" Feb 16 22:47:06 crc kubenswrapper[4865]: E0216 22:47:06.683746 4865 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 22:47:06 crc kubenswrapper[4865]: E0216 22:47:06.683865 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0e0ca52e-7cb6-4d90-8d0b-4124cce13447-metrics-certs podName:0e0ca52e-7cb6-4d90-8d0b-4124cce13447 nodeName:}" failed. No retries permitted until 2026-02-16 22:47:38.683835788 +0000 UTC m=+99.007542779 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0e0ca52e-7cb6-4d90-8d0b-4124cce13447-metrics-certs") pod "network-metrics-daemon-ggbcr" (UID: "0e0ca52e-7cb6-4d90-8d0b-4124cce13447") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 22:47:06 crc kubenswrapper[4865]: I0216 22:47:06.781071 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:06 crc kubenswrapper[4865]: I0216 22:47:06.781127 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:06 crc kubenswrapper[4865]: I0216 22:47:06.781144 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:06 crc kubenswrapper[4865]: I0216 22:47:06.781176 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:06 crc kubenswrapper[4865]: I0216 22:47:06.781194 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:06Z","lastTransitionTime":"2026-02-16T22:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:06 crc kubenswrapper[4865]: I0216 22:47:06.885102 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:06 crc kubenswrapper[4865]: I0216 22:47:06.885154 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:06 crc kubenswrapper[4865]: I0216 22:47:06.885169 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:06 crc kubenswrapper[4865]: I0216 22:47:06.885190 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:06 crc kubenswrapper[4865]: I0216 22:47:06.885204 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:06Z","lastTransitionTime":"2026-02-16T22:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:06 crc kubenswrapper[4865]: I0216 22:47:06.988103 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:06 crc kubenswrapper[4865]: I0216 22:47:06.988207 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:06 crc kubenswrapper[4865]: I0216 22:47:06.988257 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:06 crc kubenswrapper[4865]: I0216 22:47:06.988325 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:06 crc kubenswrapper[4865]: I0216 22:47:06.988346 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:06Z","lastTransitionTime":"2026-02-16T22:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:07 crc kubenswrapper[4865]: I0216 22:47:07.091177 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:07 crc kubenswrapper[4865]: I0216 22:47:07.091491 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:07 crc kubenswrapper[4865]: I0216 22:47:07.091557 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:07 crc kubenswrapper[4865]: I0216 22:47:07.091629 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:07 crc kubenswrapper[4865]: I0216 22:47:07.091693 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:07Z","lastTransitionTime":"2026-02-16T22:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:07 crc kubenswrapper[4865]: I0216 22:47:07.194517 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:07 crc kubenswrapper[4865]: I0216 22:47:07.194580 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:07 crc kubenswrapper[4865]: I0216 22:47:07.194596 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:07 crc kubenswrapper[4865]: I0216 22:47:07.194623 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:07 crc kubenswrapper[4865]: I0216 22:47:07.194639 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:07Z","lastTransitionTime":"2026-02-16T22:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:07 crc kubenswrapper[4865]: I0216 22:47:07.298481 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:07 crc kubenswrapper[4865]: I0216 22:47:07.298542 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:07 crc kubenswrapper[4865]: I0216 22:47:07.298562 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:07 crc kubenswrapper[4865]: I0216 22:47:07.298590 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:07 crc kubenswrapper[4865]: I0216 22:47:07.298609 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:07Z","lastTransitionTime":"2026-02-16T22:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:07 crc kubenswrapper[4865]: I0216 22:47:07.402071 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:07 crc kubenswrapper[4865]: I0216 22:47:07.402130 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:07 crc kubenswrapper[4865]: I0216 22:47:07.402146 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:07 crc kubenswrapper[4865]: I0216 22:47:07.402171 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:07 crc kubenswrapper[4865]: I0216 22:47:07.402183 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:07Z","lastTransitionTime":"2026-02-16T22:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:07 crc kubenswrapper[4865]: I0216 22:47:07.425904 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 12:46:01.262385814 +0000 UTC Feb 16 22:47:07 crc kubenswrapper[4865]: I0216 22:47:07.505239 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:07 crc kubenswrapper[4865]: I0216 22:47:07.505564 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:07 crc kubenswrapper[4865]: I0216 22:47:07.505633 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:07 crc kubenswrapper[4865]: I0216 22:47:07.505700 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:07 crc kubenswrapper[4865]: I0216 22:47:07.505769 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:07Z","lastTransitionTime":"2026-02-16T22:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:07 crc kubenswrapper[4865]: I0216 22:47:07.543025 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:07 crc kubenswrapper[4865]: I0216 22:47:07.543083 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:07 crc kubenswrapper[4865]: I0216 22:47:07.543095 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:07 crc kubenswrapper[4865]: I0216 22:47:07.543117 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:07 crc kubenswrapper[4865]: I0216 22:47:07.543131 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:07Z","lastTransitionTime":"2026-02-16T22:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:07 crc kubenswrapper[4865]: E0216 22:47:07.558546 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:47:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:47:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:47:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:47:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8882e2e1-623c-454f-a6ef-195b25a9cb95\\\",\\\"systemUUID\\\":\\\"e2a2196d-095f-444c-b467-b5377cee59c0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:07Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:07 crc kubenswrapper[4865]: I0216 22:47:07.561479 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:07 crc kubenswrapper[4865]: I0216 22:47:07.561516 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:07 crc kubenswrapper[4865]: I0216 22:47:07.561524 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:07 crc kubenswrapper[4865]: I0216 22:47:07.561540 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:07 crc kubenswrapper[4865]: I0216 22:47:07.561550 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:07Z","lastTransitionTime":"2026-02-16T22:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:07 crc kubenswrapper[4865]: E0216 22:47:07.578653 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:47:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:47:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:47:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:47:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8882e2e1-623c-454f-a6ef-195b25a9cb95\\\",\\\"systemUUID\\\":\\\"e2a2196d-095f-444c-b467-b5377cee59c0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:07Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:07 crc kubenswrapper[4865]: I0216 22:47:07.582683 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:07 crc kubenswrapper[4865]: I0216 22:47:07.582729 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:07 crc kubenswrapper[4865]: I0216 22:47:07.582741 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:07 crc kubenswrapper[4865]: I0216 22:47:07.582759 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:07 crc kubenswrapper[4865]: I0216 22:47:07.582770 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:07Z","lastTransitionTime":"2026-02-16T22:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:07 crc kubenswrapper[4865]: E0216 22:47:07.598495 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:47:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:47:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:47:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:47:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8882e2e1-623c-454f-a6ef-195b25a9cb95\\\",\\\"systemUUID\\\":\\\"e2a2196d-095f-444c-b467-b5377cee59c0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:07Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:07 crc kubenswrapper[4865]: I0216 22:47:07.601748 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:07 crc kubenswrapper[4865]: I0216 22:47:07.601790 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:07 crc kubenswrapper[4865]: I0216 22:47:07.601800 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:07 crc kubenswrapper[4865]: I0216 22:47:07.601818 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:07 crc kubenswrapper[4865]: I0216 22:47:07.601830 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:07Z","lastTransitionTime":"2026-02-16T22:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:07 crc kubenswrapper[4865]: E0216 22:47:07.616344 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:47:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:47:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:47:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:47:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8882e2e1-623c-454f-a6ef-195b25a9cb95\\\",\\\"systemUUID\\\":\\\"e2a2196d-095f-444c-b467-b5377cee59c0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:07Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:07 crc kubenswrapper[4865]: I0216 22:47:07.619963 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:07 crc kubenswrapper[4865]: I0216 22:47:07.620119 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:07 crc kubenswrapper[4865]: I0216 22:47:07.620237 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:07 crc kubenswrapper[4865]: I0216 22:47:07.620349 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:07 crc kubenswrapper[4865]: I0216 22:47:07.620436 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:07Z","lastTransitionTime":"2026-02-16T22:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:07 crc kubenswrapper[4865]: E0216 22:47:07.637226 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:47:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:47:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:47:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:47:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8882e2e1-623c-454f-a6ef-195b25a9cb95\\\",\\\"systemUUID\\\":\\\"e2a2196d-095f-444c-b467-b5377cee59c0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:07Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:07 crc kubenswrapper[4865]: E0216 22:47:07.637707 4865 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 22:47:07 crc kubenswrapper[4865]: I0216 22:47:07.641426 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:07 crc kubenswrapper[4865]: I0216 22:47:07.641490 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:07 crc kubenswrapper[4865]: I0216 22:47:07.641502 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:07 crc kubenswrapper[4865]: I0216 22:47:07.641517 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:07 crc kubenswrapper[4865]: I0216 22:47:07.641528 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:07Z","lastTransitionTime":"2026-02-16T22:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:07 crc kubenswrapper[4865]: I0216 22:47:07.745230 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:07 crc kubenswrapper[4865]: I0216 22:47:07.745332 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:07 crc kubenswrapper[4865]: I0216 22:47:07.748636 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:07 crc kubenswrapper[4865]: I0216 22:47:07.748738 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:07 crc kubenswrapper[4865]: I0216 22:47:07.748752 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:07Z","lastTransitionTime":"2026-02-16T22:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:07 crc kubenswrapper[4865]: I0216 22:47:07.851782 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:07 crc kubenswrapper[4865]: I0216 22:47:07.851874 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:07 crc kubenswrapper[4865]: I0216 22:47:07.851903 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:07 crc kubenswrapper[4865]: I0216 22:47:07.851937 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:07 crc kubenswrapper[4865]: I0216 22:47:07.851960 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:07Z","lastTransitionTime":"2026-02-16T22:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:07 crc kubenswrapper[4865]: I0216 22:47:07.954656 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:07 crc kubenswrapper[4865]: I0216 22:47:07.954702 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:07 crc kubenswrapper[4865]: I0216 22:47:07.954712 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:07 crc kubenswrapper[4865]: I0216 22:47:07.954730 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:07 crc kubenswrapper[4865]: I0216 22:47:07.954746 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:07Z","lastTransitionTime":"2026-02-16T22:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:08 crc kubenswrapper[4865]: I0216 22:47:08.058059 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:08 crc kubenswrapper[4865]: I0216 22:47:08.058123 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:08 crc kubenswrapper[4865]: I0216 22:47:08.058140 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:08 crc kubenswrapper[4865]: I0216 22:47:08.058166 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:08 crc kubenswrapper[4865]: I0216 22:47:08.058183 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:08Z","lastTransitionTime":"2026-02-16T22:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:08 crc kubenswrapper[4865]: I0216 22:47:08.160965 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:08 crc kubenswrapper[4865]: I0216 22:47:08.161030 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:08 crc kubenswrapper[4865]: I0216 22:47:08.161047 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:08 crc kubenswrapper[4865]: I0216 22:47:08.161072 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:08 crc kubenswrapper[4865]: I0216 22:47:08.161089 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:08Z","lastTransitionTime":"2026-02-16T22:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:08 crc kubenswrapper[4865]: I0216 22:47:08.263635 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:08 crc kubenswrapper[4865]: I0216 22:47:08.263713 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:08 crc kubenswrapper[4865]: I0216 22:47:08.263737 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:08 crc kubenswrapper[4865]: I0216 22:47:08.263776 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:08 crc kubenswrapper[4865]: I0216 22:47:08.263799 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:08Z","lastTransitionTime":"2026-02-16T22:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:08 crc kubenswrapper[4865]: I0216 22:47:08.367050 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:08 crc kubenswrapper[4865]: I0216 22:47:08.367118 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:08 crc kubenswrapper[4865]: I0216 22:47:08.367135 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:08 crc kubenswrapper[4865]: I0216 22:47:08.367161 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:08 crc kubenswrapper[4865]: I0216 22:47:08.367182 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:08Z","lastTransitionTime":"2026-02-16T22:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:47:08 crc kubenswrapper[4865]: I0216 22:47:08.413547 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ggbcr" Feb 16 22:47:08 crc kubenswrapper[4865]: I0216 22:47:08.413667 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 22:47:08 crc kubenswrapper[4865]: E0216 22:47:08.413681 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ggbcr" podUID="0e0ca52e-7cb6-4d90-8d0b-4124cce13447" Feb 16 22:47:08 crc kubenswrapper[4865]: I0216 22:47:08.413763 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 22:47:08 crc kubenswrapper[4865]: I0216 22:47:08.413761 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 22:47:08 crc kubenswrapper[4865]: E0216 22:47:08.413893 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 22:47:08 crc kubenswrapper[4865]: E0216 22:47:08.414024 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 22:47:08 crc kubenswrapper[4865]: E0216 22:47:08.414125 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 22:47:08 crc kubenswrapper[4865]: I0216 22:47:08.426242 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 11:27:31.356217545 +0000 UTC Feb 16 22:47:08 crc kubenswrapper[4865]: I0216 22:47:08.470432 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:08 crc kubenswrapper[4865]: I0216 22:47:08.470482 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:08 crc kubenswrapper[4865]: I0216 22:47:08.470493 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:08 crc kubenswrapper[4865]: I0216 22:47:08.470511 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:08 crc kubenswrapper[4865]: I0216 22:47:08.470528 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:08Z","lastTransitionTime":"2026-02-16T22:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:08 crc kubenswrapper[4865]: I0216 22:47:08.573010 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:08 crc kubenswrapper[4865]: I0216 22:47:08.573085 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:08 crc kubenswrapper[4865]: I0216 22:47:08.573109 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:08 crc kubenswrapper[4865]: I0216 22:47:08.573141 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:08 crc kubenswrapper[4865]: I0216 22:47:08.573173 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:08Z","lastTransitionTime":"2026-02-16T22:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:08 crc kubenswrapper[4865]: I0216 22:47:08.675094 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:08 crc kubenswrapper[4865]: I0216 22:47:08.675147 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:08 crc kubenswrapper[4865]: I0216 22:47:08.675158 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:08 crc kubenswrapper[4865]: I0216 22:47:08.675173 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:08 crc kubenswrapper[4865]: I0216 22:47:08.675196 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:08Z","lastTransitionTime":"2026-02-16T22:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:08 crc kubenswrapper[4865]: I0216 22:47:08.777686 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:08 crc kubenswrapper[4865]: I0216 22:47:08.777725 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:08 crc kubenswrapper[4865]: I0216 22:47:08.777738 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:08 crc kubenswrapper[4865]: I0216 22:47:08.777757 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:08 crc kubenswrapper[4865]: I0216 22:47:08.777773 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:08Z","lastTransitionTime":"2026-02-16T22:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:08 crc kubenswrapper[4865]: I0216 22:47:08.880411 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:08 crc kubenswrapper[4865]: I0216 22:47:08.880487 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:08 crc kubenswrapper[4865]: I0216 22:47:08.880509 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:08 crc kubenswrapper[4865]: I0216 22:47:08.880537 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:08 crc kubenswrapper[4865]: I0216 22:47:08.880554 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:08Z","lastTransitionTime":"2026-02-16T22:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:08 crc kubenswrapper[4865]: I0216 22:47:08.963823 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tqmsq_518e6107-6873-4bd2-86a6-e422763483ec/kube-multus/0.log" Feb 16 22:47:08 crc kubenswrapper[4865]: I0216 22:47:08.963908 4865 generic.go:334] "Generic (PLEG): container finished" podID="518e6107-6873-4bd2-86a6-e422763483ec" containerID="4c555b648518de93a1b9ecb1bb8b232aee9016dd148db53b2b4aaf96dd23951c" exitCode=1 Feb 16 22:47:08 crc kubenswrapper[4865]: I0216 22:47:08.963949 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-tqmsq" event={"ID":"518e6107-6873-4bd2-86a6-e422763483ec","Type":"ContainerDied","Data":"4c555b648518de93a1b9ecb1bb8b232aee9016dd148db53b2b4aaf96dd23951c"} Feb 16 22:47:08 crc kubenswrapper[4865]: I0216 22:47:08.964453 4865 scope.go:117] "RemoveContainer" containerID="4c555b648518de93a1b9ecb1bb8b232aee9016dd148db53b2b4aaf96dd23951c" Feb 16 22:47:08 crc kubenswrapper[4865]: I0216 22:47:08.976456 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://580bc5f5c63d040c31c80d1c34c70711c035b80d8bd34546ba77e2477a938304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:08Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:08 crc kubenswrapper[4865]: I0216 22:47:08.982912 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:08 crc kubenswrapper[4865]: I0216 22:47:08.982953 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:08 crc kubenswrapper[4865]: I0216 22:47:08.982966 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:08 crc kubenswrapper[4865]: I0216 22:47:08.982981 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:08 crc kubenswrapper[4865]: I0216 22:47:08.982993 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:08Z","lastTransitionTime":"2026-02-16T22:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:08 crc kubenswrapper[4865]: I0216 22:47:08.991196 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:08Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:09 crc kubenswrapper[4865]: I0216 22:47:09.005742 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:09Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:09 crc kubenswrapper[4865]: I0216 22:47:09.021561 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tqmsq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"518e6107-6873-4bd2-86a6-e422763483ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c555b648518de93a1b9ecb1bb8b232aee9016dd148db53b2b4aaf96dd23951c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c555b648518de93a1b9ecb1bb8b232aee9016dd148db53b2b4aaf96dd23951c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T22:47:08Z\\\",\\\"message\\\":\\\"2026-02-16T22:46:22+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_69f8d7f9-9d98-4d01-a8d5-29073da708ad\\\\n2026-02-16T22:46:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_69f8d7f9-9d98-4d01-a8d5-29073da708ad to /host/opt/cni/bin/\\\\n2026-02-16T22:46:23Z [verbose] multus-daemon started\\\\n2026-02-16T22:46:23Z [verbose] Readiness Indicator file check\\\\n2026-02-16T22:47:08Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwt79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tqmsq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:09Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:09 crc kubenswrapper[4865]: I0216 22:47:09.035935 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-shf2n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d979a250-e586-4f45-b78e-ce99dbdbe9a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c13351b5aadf20e7142d197d9ae8333bff29ca6c3f7668c93f0dd510b52eb51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df29cc453707021c661a9f53e979fcab4c05a
00e4dcef9f85a727168f1e78ed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-shf2n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:09Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:09 crc kubenswrapper[4865]: I0216 22:47:09.048244 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ggbcr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e0ca52e-7cb6-4d90-8d0b-4124cce13447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt54\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt54\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ggbcr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:09Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:09 crc 
kubenswrapper[4865]: I0216 22:47:09.061662 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5ed4f9f-4e35-4b99-a523-d9cd6f90b1fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c3c8e1511bfaab96c1e33d3e1d1dc227344cb92c64170dc40eef40341f9282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef4fbb630ee354131544dade7633424757a1a1b885fec2e271da25984dbec3ac\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f74aeafd4eb42c1fa802e03dd3df19fc25c3e5546e453008c167955d336ca2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca218b91fbbe3033c021358958b45efb93f3c1875a2d18446d682ad80ff55122\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:09Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:09 crc kubenswrapper[4865]: I0216 22:47:09.076794 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5d9c6929ae494c038ff488c2fee47aad62d31a677069fab576505bc1c56c3ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd10e8581ae8a5ec147d964363cf296c1ea48987e27edfaaf23a4f2cc3b8806c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:09Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:09 crc kubenswrapper[4865]: I0216 22:47:09.086102 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:09 crc kubenswrapper[4865]: I0216 22:47:09.086194 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:09 crc kubenswrapper[4865]: I0216 22:47:09.086234 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:09 crc kubenswrapper[4865]: I0216 22:47:09.086254 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:09 crc kubenswrapper[4865]: I0216 22:47:09.086267 4865 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:09Z","lastTransitionTime":"2026-02-16T22:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:47:09 crc kubenswrapper[4865]: I0216 22:47:09.087988 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p2bwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fc94743-41ce-4311-b0a7-d24aec69e9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afb2eef5907b2fb1f43b59f3b8afb68253d49f3a83f0cdc5cd10df0e45962156\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mntqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p2bwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:09Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:09 crc kubenswrapper[4865]: I0216 22:47:09.102208 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67de42bb03576bab461ec24db86f0fb38229d2b303d174c1892468bcfd20a1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T22:47:09Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:09 crc kubenswrapper[4865]: I0216 22:47:09.114666 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af5ee041-5763-4a28-9d12-7ba21bbb9dbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5377362d269a2fbd24e93ddf0ce2803daadff72dd5b6926424e2d818448eb907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c0be88c4425b343893860e323d0e9c1a661e3cd0cadad6714da67ebadf57bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7sl6f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:09Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:09 crc kubenswrapper[4865]: I0216 22:47:09.127367 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4rgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0091174cc072ed0bc0bfa1a1501b4fa2b801585525dae159b591213c74ddbf5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240fca5a6169674d04d04924dbb1859d9cfad4ebaf3bcb15b685d1dcb52d6879\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://240fca5a6169674d04d04924dbb1859d9cfad4ebaf3bcb15b685d1dcb52d6879\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba2a6ff3b821f9bd20c15af6000c4b04b7382cca4bddba49c3cd3a95bf18661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ba2a6ff3b821f9bd20c15af6000c4b04b7382cca4bddba49c3cd3a95bf18661\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e050e591eac20bf9ccc57f3c4eac82e3a420e9e850ad4057ce69c236a51dd5ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e050e591eac20bf9ccc57f3c4eac82e3a420e9e850ad4057ce69c236a51dd5ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df6f
bcf30225717389e253f415a98dcb258103cf591c6802daeb657018386ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1df6fbcf30225717389e253f415a98dcb258103cf591c6802daeb657018386ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9e249b0c4127dfa42d0aeb4663cec7ecc73cec31a922fc17a0c772cce72c017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9e249b0c4127dfa42d0aeb4663cec7ecc73cec31a922fc17a0c772cce72c017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de26abfeb26481d43117471fa3512ef69ab782712371782639c24c8232786478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de26abfeb26481d43117471fa3512ef69ab782712371782639c24c8232786478\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4rgl\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:09Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:09 crc kubenswrapper[4865]: I0216 22:47:09.146138 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8eb70ba-80dc-4449-b629-8f2d3462ac25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9420ab4fc2cfb723f98690b12676378560f2ca512ea2f23ea4108561bcf0a83b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d5f3b449d7f497dd16213f7a376d725ed75adaeb27449fa2005893b85cd8f9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ddcd1ac2d814341fef78f1d3dcb8d8e04d5e8506ca793c023085f0e019fc44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1932a518fb9c7c514939c9efdb9f5ef9b52fa277bf27931a2fe29f04c486141\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e74a443b63735422bd0f5ad851213f393074c4bda570e41ff102c84aa9ac2929\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 22:46:20.218565 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 22:46:20.218864 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 22:46:20.220590 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3068580495/tls.crt::/tmp/serving-cert-3068580495/tls.key\\\\\\\"\\\\nI0216 22:46:20.677081 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 22:46:20.691270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 22:46:20.691373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 22:46:20.698439 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 22:46:20.698472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 22:46:20.713488 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 22:46:20.713510 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 22:46:20.713535 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 22:46:20.713543 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 22:46:20.713547 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 22:46:20.713551 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 22:46:20.713555 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 22:46:20.713558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 22:46:20.715166 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://083e95882332fbcae3b7e3c07a19c05645e560b7c5e92813464a5ca4e5e52a26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e53d3f26f921be607e616b7c8790e3be0605a2b94a3f7be2b62e1beaefb350\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3e53d3f26f921be607e616b7c8790e3be0605a2b94a3f7be2b62e1beaefb350\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:09Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:09 crc kubenswrapper[4865]: I0216 22:47:09.161362 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17957dad-fc4e-47f6-9008-142bf7ab12b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://416e6ac61265959d9c4895987503c390040a6d4ecdf12c0530ded6fc721bfaa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be54b2a9b6551641dce2e8e886f04d8a9cf76399a46d6ba0533bfcf766453631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1757b170055a16e65276f524bda421fa929e62866ebe19163d6558e551f6d7ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3488da6d63933ae020077a4fb277b53e4e85bd75cac43d6c64bb962472a832f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://a3488da6d63933ae020077a4fb277b53e4e85bd75cac43d6c64bb962472a832f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:09Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:09 crc kubenswrapper[4865]: I0216 22:47:09.173460 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:09Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:09 crc kubenswrapper[4865]: I0216 22:47:09.185082 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rhf5k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec3dce5a-cc80-4a4a-afa1-1ce88585fe7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af23c5919f064a067e1f6ffd9bcfc669c95f789fecc58b6de768c838fff8eeb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42pjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rhf5k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:09Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:09 crc kubenswrapper[4865]: I0216 22:47:09.200941 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:09 crc kubenswrapper[4865]: I0216 22:47:09.200987 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:09 crc kubenswrapper[4865]: I0216 22:47:09.201003 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:09 crc kubenswrapper[4865]: I0216 22:47:09.201021 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:09 crc kubenswrapper[4865]: I0216 22:47:09.201030 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:09Z","lastTransitionTime":"2026-02-16T22:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:09 crc kubenswrapper[4865]: I0216 22:47:09.208915 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f763f86f21e0ae26d6da8694383bf65bf9d2ee2cc3f6331559455f18c1a5ea12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdf0a2e5286d172d17780b0ba829a7b1365b3fcb130069d7497b207041e54d67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1d5f61857de97cd302040f99ba0cf8f6aa30fa344ee9bb5d486222bb59280f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48168de44ee347280252a85da7abba068922ce7558bd01c8e1d11af13f305dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fbb61043059110013736d3d693e6d602b199a4c62907436a4eb88aae192cd51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://decf362b373d3e8cc82caf2afe0405a61ec72f794680345ee6bd4d7acee32cd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91a79105df66a02db4d12994894da106dc07acf1fd46d19a8bd6506533d047c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91a79105df66a02db4d12994894da106dc07acf1fd46d19a8bd6506533d047c1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T22:46:48Z\\\",\\\"message\\\":\\\"sip/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 22:46:48.478194 6528 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 22:46:48.478230 6528 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 22:46:48.478257 6528 handler.go:190] Sending *v1.EgressFirewall event handler 9 for 
removal\\\\nI0216 22:46:48.478372 6528 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 22:46:48.478404 6528 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 22:46:48.478405 6528 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 22:46:48.478434 6528 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 22:46:48.478457 6528 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 22:46:48.478467 6528 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 22:46:48.478500 6528 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 22:46:48.478537 6528 factory.go:656] Stopping watch factory\\\\nI0216 22:46:48.478556 6528 ovnkube.go:599] Stopped ovnkube\\\\nI0216 22:46:48.478584 6528 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 22:46:48.478599 6528 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0216 22:46:48.478625 6528 handler.go:208] Removed *v1.Node event ha\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-v9gjl_openshift-ovn-kubernetes(2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dee85f35d78a0b30018c179a53dd39cbb1b0edfa9897290cd13fdc9680461239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fffded71eee2859beb8f01894e183602492f5c41373ef0e23ee150159d01ae4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fffded71eee2859be
b8f01894e183602492f5c41373ef0e23ee150159d01ae4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v9gjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:09Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:09 crc kubenswrapper[4865]: I0216 22:47:09.304010 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:09 crc kubenswrapper[4865]: I0216 22:47:09.304073 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:09 crc kubenswrapper[4865]: I0216 22:47:09.304086 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:09 crc kubenswrapper[4865]: I0216 22:47:09.304104 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:09 crc kubenswrapper[4865]: I0216 22:47:09.304116 4865 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:09Z","lastTransitionTime":"2026-02-16T22:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:47:09 crc kubenswrapper[4865]: I0216 22:47:09.406331 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:09 crc kubenswrapper[4865]: I0216 22:47:09.406387 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:09 crc kubenswrapper[4865]: I0216 22:47:09.406399 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:09 crc kubenswrapper[4865]: I0216 22:47:09.406421 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:09 crc kubenswrapper[4865]: I0216 22:47:09.406434 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:09Z","lastTransitionTime":"2026-02-16T22:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:09 crc kubenswrapper[4865]: I0216 22:47:09.427097 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 09:35:22.786543666 +0000 UTC Feb 16 22:47:09 crc kubenswrapper[4865]: I0216 22:47:09.508436 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:09 crc kubenswrapper[4865]: I0216 22:47:09.508507 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:09 crc kubenswrapper[4865]: I0216 22:47:09.508518 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:09 crc kubenswrapper[4865]: I0216 22:47:09.508540 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:09 crc kubenswrapper[4865]: I0216 22:47:09.508552 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:09Z","lastTransitionTime":"2026-02-16T22:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:09 crc kubenswrapper[4865]: I0216 22:47:09.611708 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:09 crc kubenswrapper[4865]: I0216 22:47:09.611786 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:09 crc kubenswrapper[4865]: I0216 22:47:09.611806 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:09 crc kubenswrapper[4865]: I0216 22:47:09.611836 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:09 crc kubenswrapper[4865]: I0216 22:47:09.611856 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:09Z","lastTransitionTime":"2026-02-16T22:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:09 crc kubenswrapper[4865]: I0216 22:47:09.714912 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:09 crc kubenswrapper[4865]: I0216 22:47:09.714951 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:09 crc kubenswrapper[4865]: I0216 22:47:09.714960 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:09 crc kubenswrapper[4865]: I0216 22:47:09.714975 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:09 crc kubenswrapper[4865]: I0216 22:47:09.714985 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:09Z","lastTransitionTime":"2026-02-16T22:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:09 crc kubenswrapper[4865]: I0216 22:47:09.818297 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:09 crc kubenswrapper[4865]: I0216 22:47:09.818327 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:09 crc kubenswrapper[4865]: I0216 22:47:09.818337 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:09 crc kubenswrapper[4865]: I0216 22:47:09.818350 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:09 crc kubenswrapper[4865]: I0216 22:47:09.818361 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:09Z","lastTransitionTime":"2026-02-16T22:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:09 crc kubenswrapper[4865]: I0216 22:47:09.921593 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:09 crc kubenswrapper[4865]: I0216 22:47:09.921672 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:09 crc kubenswrapper[4865]: I0216 22:47:09.921690 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:09 crc kubenswrapper[4865]: I0216 22:47:09.922085 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:09 crc kubenswrapper[4865]: I0216 22:47:09.922449 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:09Z","lastTransitionTime":"2026-02-16T22:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:09 crc kubenswrapper[4865]: I0216 22:47:09.969917 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tqmsq_518e6107-6873-4bd2-86a6-e422763483ec/kube-multus/0.log" Feb 16 22:47:09 crc kubenswrapper[4865]: I0216 22:47:09.969986 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-tqmsq" event={"ID":"518e6107-6873-4bd2-86a6-e422763483ec","Type":"ContainerStarted","Data":"74e1013d02cb01d60662d6619c25c7a40de7bd7bde7f0239007c54dba55879cb"} Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.005339 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f763f86f21e0ae26d6da8694383bf65bf9d2ee2cc3f6331559455f18c1a5ea12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdf0a2e5286d172d17780b0ba829a7b1365b3fcb130069d7497b207041e54d67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1d5f61857de97cd302040f99ba0cf8f6aa30fa344ee9bb5d486222bb59280f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48168de44ee347280252a85da7abba068922ce7558bd01c8e1d11af13f305dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fbb61043059110013736d3d693e6d602b199a4c62907436a4eb88aae192cd51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://decf362b373d3e8cc82caf2afe0405a61ec72f794680345ee6bd4d7acee32cd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91a79105df66a02db4d12994894da106dc07acf1fd46d19a8bd6506533d047c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91a79105df66a02db4d12994894da106dc07acf1fd46d19a8bd6506533d047c1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T22:46:48Z\\\",\\\"message\\\":\\\"sip/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 22:46:48.478194 6528 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 22:46:48.478230 6528 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 22:46:48.478257 6528 handler.go:190] Sending *v1.EgressFirewall event handler 9 for 
removal\\\\nI0216 22:46:48.478372 6528 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 22:46:48.478404 6528 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 22:46:48.478405 6528 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 22:46:48.478434 6528 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 22:46:48.478457 6528 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 22:46:48.478467 6528 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 22:46:48.478500 6528 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 22:46:48.478537 6528 factory.go:656] Stopping watch factory\\\\nI0216 22:46:48.478556 6528 ovnkube.go:599] Stopped ovnkube\\\\nI0216 22:46:48.478584 6528 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 22:46:48.478599 6528 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0216 22:46:48.478625 6528 handler.go:208] Removed *v1.Node event ha\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-v9gjl_openshift-ovn-kubernetes(2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dee85f35d78a0b30018c179a53dd39cbb1b0edfa9897290cd13fdc9680461239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fffded71eee2859beb8f01894e183602492f5c41373ef0e23ee150159d01ae4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fffded71eee2859be
b8f01894e183602492f5c41373ef0e23ee150159d01ae4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v9gjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:10Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.025532 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.025565 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.025575 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.025593 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.025610 4865 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:10Z","lastTransitionTime":"2026-02-16T22:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.028159 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8eb70ba-80dc-4449-b629-8f2d3462ac25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9420ab4fc2cfb723f98690b12676378560f2ca512ea2f23ea4108561bcf0a83b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d5f3b449d7f497dd16213f7a376d725ed75adaeb27449fa2005893b85cd8f9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ddcd1ac2d814341fef78f1d3dcb8d8e04d5e8506ca793c023085f0e019fc44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d
1932a518fb9c7c514939c9efdb9f5ef9b52fa277bf27931a2fe29f04c486141\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e74a443b63735422bd0f5ad851213f393074c4bda570e41ff102c84aa9ac2929\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 22:46:20.218565 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 22:46:20.218864 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 22:46:20.220590 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3068580495/tls.crt::/tmp/serving-cert-3068580495/tls.key\\\\\\\"\\\\nI0216 22:46:20.677081 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 22:46:20.691270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 22:46:20.691373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 22:46:20.698439 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 22:46:20.698472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 22:46:20.713488 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 22:46:20.713510 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 22:46:20.713535 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 22:46:20.713543 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 22:46:20.713547 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 22:46:20.713551 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 22:46:20.713555 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 22:46:20.713558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 22:46:20.715166 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://083e95882332fbcae3b7e3c07a19c05645e560b7c5e92813464a5ca4e5e52a26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://e3e53d3f26f921be607e616b7c8790e3be0605a2b94a3f7be2b62e1beaefb350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3e53d3f26f921be607e616b7c8790e3be0605a2b94a3f7be2b62e1beaefb350\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:10Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.045858 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17957dad-fc4e-47f6-9008-142bf7ab12b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://416e6ac61265959d9c4895987503c390040a6d4ecdf12c0530ded6fc721bfaa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be54b2a9b6551641dce2e8e886f04d8a9cf76399a46d6ba0533bfcf766453631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1757b170055a16e65276f524bda421fa929e62866ebe19163d6558e551f6d7ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3488da6d63933ae020077a4fb277b53e4e85bd75cac43d6c64bb962472a832f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://a3488da6d63933ae020077a4fb277b53e4e85bd75cac43d6c64bb962472a832f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:10Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.063074 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:10Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.076584 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rhf5k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec3dce5a-cc80-4a4a-afa1-1ce88585fe7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af23c5919f064a067e1f6ffd9bcfc669c95f789fecc58b6de768c838fff8eeb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42pjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rhf5k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:10Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.091757 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-shf2n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d979a250-e586-4f45-b78e-ce99dbdbe9a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c13351b5aadf20e7142d197d9ae8333bff29ca6c3f7668c93f0dd510b52eb51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df29cc453707021c661a9f53e979fcab4c05a00e4dcef9f85a727168f1e78ed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-shf2n\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:10Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.105386 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ggbcr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e0ca52e-7cb6-4d90-8d0b-4124cce13447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt54\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt54\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ggbcr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:10Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:10 crc 
kubenswrapper[4865]: I0216 22:47:10.124143 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://580bc5f5c63d040c31c80d1c34c70711c035b80d8bd34546ba77e2477a938304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:10Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.128320 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.128357 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.128368 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.128386 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.128398 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:10Z","lastTransitionTime":"2026-02-16T22:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.139747 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:10Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.155996 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:10Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.170036 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tqmsq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"518e6107-6873-4bd2-86a6-e422763483ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74e1013d02cb01d60662d6619c25c7a40de7bd7bde7f0239007c54dba55879cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c555b648518de93a1b9ecb1bb8b232aee9016dd148db53b2b4aaf96dd23951c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T22:47:08Z\\\",\\\"message\\\":\\\"2026-02-16T22:46:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_69f8d7f9-9d98-4d01-a8d5-29073da708ad\\\\n2026-02-16T22:46:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_69f8d7f9-9d98-4d01-a8d5-29073da708ad to /host/opt/cni/bin/\\\\n2026-02-16T22:46:23Z [verbose] multus-daemon started\\\\n2026-02-16T22:46:23Z [verbose] 
Readiness Indicator file check\\\\n2026-02-16T22:47:08Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwt79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tqmsq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:10Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.188469 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5ed4f9f-4e35-4b99-a523-d9cd6f90b1fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c3c8e1511bfaab96c1e33d3e1d1dc227344cb92c64170dc40eef40341f9282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef4fbb630ee354131544dade7633424757a1a1b885fec2e271da25984dbec3ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f74aeafd4eb42c1fa802e03dd3df19fc25c3e5546e453008c167955d336ca2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca218b91fbbe3033c021358958b45efb93f3c1875a2d18446d682ad80ff55122\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:10Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.208247 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5d9c6929ae494c038ff488c2fee47aad62d31a677069fab576505bc1c56c3ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd10e8581ae8a5ec147d964363cf296c1ea48987e27edfaaf23a4f2cc3b8806c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:10Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.225211 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p2bwl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fc94743-41ce-4311-b0a7-d24aec69e9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afb2eef5907b2fb1f43b59f3b8afb68253d49f3a83f0cdc5cd10df0e45962156\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mntqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p2bwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:10Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.230362 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.230414 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.230427 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.230446 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.230461 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:10Z","lastTransitionTime":"2026-02-16T22:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.244837 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67de42bb03576bab461ec24db86f0fb38229d2b303d174c1892468bcfd20a1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:10Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.259654 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af5ee041-5763-4a28-9d12-7ba21bbb9dbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5377362d269a2fbd24e93ddf0ce2803daadff72dd5b6926424e2d818448eb907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c0be88c4425b343893860e323d0e9c1a661e3cd0cadad6714da67ebadf57bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7sl6f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-16T22:47:10Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.281429 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4rgl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0091174cc072ed0bc0bfa1a1501b4fa2b801585525dae159b591213c74ddbf5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240fca5a6169674d04d04924dbb1859d9cfad4ebaf3bcb15b685d1dcb52d6879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://240fca5a6169674d04d04924dbb1859d9cfad4ebaf3bcb15b685d1dcb52d6879\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba2a6ff3b821f9bd20c15af6000c4b04b7382cca4bddba49c3cd3a95bf18661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://5ba2a6ff3b821f9bd20c15af6000c4b04b7382cca4bddba49c3cd3a95bf18661\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e050e591eac20bf9ccc57f3c4eac82e3a420e9e850ad4057ce69c236a51dd5ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e050e591eac20bf9ccc57f3c4eac82e3a420e9e850ad4057ce69c236a51dd5ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df6fbcf30225717389e253f415a98dcb258103cf591c6802daeb657018386ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1df6fbcf30225717389e253f415a98dcb258103cf591c6802daeb657018386ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9e249b0c4127dfa42d0aeb4663cec7ecc73cec31a922fc17a0c772cce72c017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9e249b0c4127dfa42d0aeb4663cec7ecc73cec31a922fc17a0c772cce72c017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de26abfeb26481d43117471fa3512ef69ab782712371782639c24c8232786478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de26abfeb26481d43117471fa3512ef69ab782712371782639c24c8232786478\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":
\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4rgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:10Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.332652 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.332701 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.332715 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.332738 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.332754 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:10Z","lastTransitionTime":"2026-02-16T22:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.414231 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.414318 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.414329 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.414245 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ggbcr" Feb 16 22:47:10 crc kubenswrapper[4865]: E0216 22:47:10.414545 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 22:47:10 crc kubenswrapper[4865]: E0216 22:47:10.414753 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 22:47:10 crc kubenswrapper[4865]: E0216 22:47:10.414830 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 22:47:10 crc kubenswrapper[4865]: E0216 22:47:10.414893 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ggbcr" podUID="0e0ca52e-7cb6-4d90-8d0b-4124cce13447" Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.428039 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 05:56:02.346267525 +0000 UTC Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.429212 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5ed4f9f-4e35-4b99-a523-d9cd6f90b1fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c3c8e1511bfaab96c1e33d3e1d1dc227344cb92c64170dc40eef40341f9282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef4fbb630ee354131544dade7633424757a1a1b885fec2e271da25984dbec3ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f74aeafd4eb42c1fa802e03dd3df19fc25c3e5546e453008c167955d336ca2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca218b91fbbe3033c021358958b45efb93f3c1875a2d18446d682ad80ff55122\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:10Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.434764 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.434830 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.434846 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.434863 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.434900 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:10Z","lastTransitionTime":"2026-02-16T22:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.444747 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5d9c6929ae494c038ff488c2fee47aad62d31a677069fab576505bc1c56c3ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd10e858
1ae8a5ec147d964363cf296c1ea48987e27edfaaf23a4f2cc3b8806c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:10Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.459650 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p2bwl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fc94743-41ce-4311-b0a7-d24aec69e9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afb2eef5907b2fb1f43b59f3b8afb68253d49f3a83f0cdc5cd10df0e45962156\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mntqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p2bwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:10Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.474350 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67de42bb03576bab461ec24db86f0fb38229d2b303d174c1892468bcfd20a1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T2
2:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:10Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.485838 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af5ee041-5763-4a28-9d12-7ba21bbb9dbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5377362d269a2fbd24e93ddf0ce2803daadff72dd5b6926424e2d818448eb907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c0be88c4425b343893860e323d0e9c1a661e3cd
0cadad6714da67ebadf57bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7sl6f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:10Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.499185 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4rgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0091174cc072ed0bc0bfa1a1501b4fa2b801585525dae159b591213c74ddbf5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240fca5a6169674d04d04924dbb1859d9cfad4ebaf3bcb15b685d1dcb52d6879\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://240fca5a6169674d04d04924dbb1859d9cfad4ebaf3bcb15b685d1dcb52d6879\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba2a6ff3b821f9bd20c15af6000c4b04b7382cca4bddba49c3cd3a95bf18661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ba2a6ff3b821f9bd20c15af6000c4b04b7382cca4bddba49c3cd3a95bf18661\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e050e591eac20bf9ccc57f3c4eac82e3a420e9e850ad4057ce69c236a51dd5ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e050e591eac20bf9ccc57f3c4eac82e3a420e9e850ad4057ce69c236a51dd5ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df6f
bcf30225717389e253f415a98dcb258103cf591c6802daeb657018386ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1df6fbcf30225717389e253f415a98dcb258103cf591c6802daeb657018386ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9e249b0c4127dfa42d0aeb4663cec7ecc73cec31a922fc17a0c772cce72c017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9e249b0c4127dfa42d0aeb4663cec7ecc73cec31a922fc17a0c772cce72c017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de26abfeb26481d43117471fa3512ef69ab782712371782639c24c8232786478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de26abfeb26481d43117471fa3512ef69ab782712371782639c24c8232786478\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4rgl\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:10Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.516840 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8eb70ba-80dc-4449-b629-8f2d3462ac25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9420ab4fc2cfb723f98690b12676378560f2ca512ea2f23ea4108561bcf0a83b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d5f3b449d7f497dd16213f7a376d725ed75adaeb27449fa2005893b85cd8f9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ddcd1ac2d814341fef78f1d3dcb8d8e04d5e8506ca793c023085f0e019fc44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1932a518fb9c7c514939c9efdb9f5ef9b52fa277bf27931a2fe29f04c486141\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e74a443b63735422bd0f5ad851213f393074c4bda570e41ff102c84aa9ac2929\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 22:46:20.218565 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 22:46:20.218864 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 22:46:20.220590 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3068580495/tls.crt::/tmp/serving-cert-3068580495/tls.key\\\\\\\"\\\\nI0216 22:46:20.677081 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 22:46:20.691270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 22:46:20.691373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 22:46:20.698439 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 22:46:20.698472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 22:46:20.713488 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 22:46:20.713510 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 22:46:20.713535 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 22:46:20.713543 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 22:46:20.713547 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 22:46:20.713551 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 22:46:20.713555 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 22:46:20.713558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 22:46:20.715166 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://083e95882332fbcae3b7e3c07a19c05645e560b7c5e92813464a5ca4e5e52a26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e53d3f26f921be607e616b7c8790e3be0605a2b94a3f7be2b62e1beaefb350\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3e53d3f26f921be607e616b7c8790e3be0605a2b94a3f7be2b62e1beaefb350\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:10Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.530298 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17957dad-fc4e-47f6-9008-142bf7ab12b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://416e6ac61265959d9c4895987503c390040a6d4ecdf12c0530ded6fc721bfaa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be54b2a9b6551641dce2e8e886f04d8a9cf76399a46d6ba0533bfcf766453631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1757b170055a16e65276f524bda421fa929e62866ebe19163d6558e551f6d7ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3488da6d63933ae020077a4fb277b53e4e85bd75cac43d6c64bb962472a832f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://a3488da6d63933ae020077a4fb277b53e4e85bd75cac43d6c64bb962472a832f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:10Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.543827 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:10Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.544151 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.544172 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.544180 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 
22:47:10.544196 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.544206 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:10Z","lastTransitionTime":"2026-02-16T22:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.560938 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rhf5k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec3dce5a-cc80-4a4a-afa1-1ce88585fe7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af23c5919f064a067e1f6ffd9bcfc669c95f789fecc58b6de768c838fff8eeb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888c
f2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42pjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rhf5k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:10Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.588705 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f763f86f21e0ae26d6da8694383bf65bf9d2ee2cc3f6331559455f18c1a5ea12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdf0a2e5286d172d17780b0ba829a7b1365b3fcb130069d7497b207041e54d67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1d5f61857de97cd302040f99ba0cf8f6aa30fa344ee9bb5d486222bb59280f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48168de44ee347280252a85da7abba068922ce7558bd01c8e1d11af13f305dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fbb61043059110013736d3d693e6d602b199a4c62907436a4eb88aae192cd51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://decf362b373d3e8cc82caf2afe0405a61ec72f794680345ee6bd4d7acee32cd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91a79105df66a02db4d12994894da106dc07acf1fd46d19a8bd6506533d047c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91a79105df66a02db4d12994894da106dc07acf1fd46d19a8bd6506533d047c1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T22:46:48Z\\\",\\\"message\\\":\\\"sip/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 22:46:48.478194 6528 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 22:46:48.478230 6528 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 22:46:48.478257 6528 handler.go:190] Sending *v1.EgressFirewall event handler 9 for 
removal\\\\nI0216 22:46:48.478372 6528 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 22:46:48.478404 6528 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 22:46:48.478405 6528 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 22:46:48.478434 6528 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 22:46:48.478457 6528 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 22:46:48.478467 6528 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 22:46:48.478500 6528 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 22:46:48.478537 6528 factory.go:656] Stopping watch factory\\\\nI0216 22:46:48.478556 6528 ovnkube.go:599] Stopped ovnkube\\\\nI0216 22:46:48.478584 6528 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 22:46:48.478599 6528 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0216 22:46:48.478625 6528 handler.go:208] Removed *v1.Node event ha\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-v9gjl_openshift-ovn-kubernetes(2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dee85f35d78a0b30018c179a53dd39cbb1b0edfa9897290cd13fdc9680461239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fffded71eee2859beb8f01894e183602492f5c41373ef0e23ee150159d01ae4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fffded71eee2859be
b8f01894e183602492f5c41373ef0e23ee150159d01ae4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v9gjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:10Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.600642 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://580bc5f5c63d040c31c80d1c34c70711c035b80d8bd34546ba77e2477a938304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:10Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.618543 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:10Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.635146 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:10Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.647178 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.647209 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.647220 4865 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.647237 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.647247 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:10Z","lastTransitionTime":"2026-02-16T22:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.653185 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tqmsq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"518e6107-6873-4bd2-86a6-e422763483ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74e1013d02cb01d60662d6619c25c7a40de7bd7bde7
f0239007c54dba55879cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c555b648518de93a1b9ecb1bb8b232aee9016dd148db53b2b4aaf96dd23951c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T22:47:08Z\\\",\\\"message\\\":\\\"2026-02-16T22:46:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_69f8d7f9-9d98-4d01-a8d5-29073da708ad\\\\n2026-02-16T22:46:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_69f8d7f9-9d98-4d01-a8d5-29073da708ad to /host/opt/cni/bin/\\\\n2026-02-16T22:46:23Z [verbose] multus-daemon started\\\\n2026-02-16T22:46:23Z [verbose] Readiness Indicator file check\\\\n2026-02-16T22:47:08Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwt79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tqmsq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:10Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.665101 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-shf2n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d979a250-e586-4f45-b78e-ce99dbdbe9a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c13351b5aadf20e7142d197d9ae8333bff29ca6c3f7668c93f0dd510b52eb51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf
09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df29cc453707021c661a9f53e979fcab4c05a00e4dcef9f85a727168f1e78ed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:34Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-shf2n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:10Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.677556 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ggbcr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e0ca52e-7cb6-4d90-8d0b-4124cce13447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt54\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt54\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ggbcr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:10Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:10 crc 
kubenswrapper[4865]: I0216 22:47:10.750067 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.750117 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.750134 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.750158 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.750175 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:10Z","lastTransitionTime":"2026-02-16T22:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.852761 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.852826 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.852848 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.852875 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.852899 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:10Z","lastTransitionTime":"2026-02-16T22:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.955738 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.956121 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.956259 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.956444 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:10 crc kubenswrapper[4865]: I0216 22:47:10.956600 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:10Z","lastTransitionTime":"2026-02-16T22:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:11 crc kubenswrapper[4865]: I0216 22:47:11.059204 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:11 crc kubenswrapper[4865]: I0216 22:47:11.059653 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:11 crc kubenswrapper[4865]: I0216 22:47:11.059796 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:11 crc kubenswrapper[4865]: I0216 22:47:11.059927 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:11 crc kubenswrapper[4865]: I0216 22:47:11.060075 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:11Z","lastTransitionTime":"2026-02-16T22:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:11 crc kubenswrapper[4865]: I0216 22:47:11.162628 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:11 crc kubenswrapper[4865]: I0216 22:47:11.162686 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:11 crc kubenswrapper[4865]: I0216 22:47:11.162703 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:11 crc kubenswrapper[4865]: I0216 22:47:11.162729 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:11 crc kubenswrapper[4865]: I0216 22:47:11.162747 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:11Z","lastTransitionTime":"2026-02-16T22:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:11 crc kubenswrapper[4865]: I0216 22:47:11.264703 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:11 crc kubenswrapper[4865]: I0216 22:47:11.265122 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:11 crc kubenswrapper[4865]: I0216 22:47:11.265269 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:11 crc kubenswrapper[4865]: I0216 22:47:11.265492 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:11 crc kubenswrapper[4865]: I0216 22:47:11.265631 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:11Z","lastTransitionTime":"2026-02-16T22:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:11 crc kubenswrapper[4865]: I0216 22:47:11.368270 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:11 crc kubenswrapper[4865]: I0216 22:47:11.368358 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:11 crc kubenswrapper[4865]: I0216 22:47:11.368379 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:11 crc kubenswrapper[4865]: I0216 22:47:11.368406 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:11 crc kubenswrapper[4865]: I0216 22:47:11.368422 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:11Z","lastTransitionTime":"2026-02-16T22:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:11 crc kubenswrapper[4865]: I0216 22:47:11.414309 4865 scope.go:117] "RemoveContainer" containerID="91a79105df66a02db4d12994894da106dc07acf1fd46d19a8bd6506533d047c1" Feb 16 22:47:11 crc kubenswrapper[4865]: I0216 22:47:11.428476 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 23:50:21.438839519 +0000 UTC Feb 16 22:47:11 crc kubenswrapper[4865]: I0216 22:47:11.470191 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:11 crc kubenswrapper[4865]: I0216 22:47:11.470235 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:11 crc kubenswrapper[4865]: I0216 22:47:11.470244 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:11 crc kubenswrapper[4865]: I0216 22:47:11.470262 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:11 crc kubenswrapper[4865]: I0216 22:47:11.470290 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:11Z","lastTransitionTime":"2026-02-16T22:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:11 crc kubenswrapper[4865]: I0216 22:47:11.573771 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:11 crc kubenswrapper[4865]: I0216 22:47:11.573794 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:11 crc kubenswrapper[4865]: I0216 22:47:11.573890 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:11 crc kubenswrapper[4865]: I0216 22:47:11.573907 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:11 crc kubenswrapper[4865]: I0216 22:47:11.573915 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:11Z","lastTransitionTime":"2026-02-16T22:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:11 crc kubenswrapper[4865]: I0216 22:47:11.675644 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:11 crc kubenswrapper[4865]: I0216 22:47:11.676001 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:11 crc kubenswrapper[4865]: I0216 22:47:11.676149 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:11 crc kubenswrapper[4865]: I0216 22:47:11.676326 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:11 crc kubenswrapper[4865]: I0216 22:47:11.676413 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:11Z","lastTransitionTime":"2026-02-16T22:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:11 crc kubenswrapper[4865]: I0216 22:47:11.779163 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:11 crc kubenswrapper[4865]: I0216 22:47:11.779231 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:11 crc kubenswrapper[4865]: I0216 22:47:11.779241 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:11 crc kubenswrapper[4865]: I0216 22:47:11.779256 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:11 crc kubenswrapper[4865]: I0216 22:47:11.779265 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:11Z","lastTransitionTime":"2026-02-16T22:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:11 crc kubenswrapper[4865]: I0216 22:47:11.881634 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:11 crc kubenswrapper[4865]: I0216 22:47:11.881672 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:11 crc kubenswrapper[4865]: I0216 22:47:11.881681 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:11 crc kubenswrapper[4865]: I0216 22:47:11.881696 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:11 crc kubenswrapper[4865]: I0216 22:47:11.881706 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:11Z","lastTransitionTime":"2026-02-16T22:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:11 crc kubenswrapper[4865]: I0216 22:47:11.979352 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v9gjl_2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9/ovnkube-controller/2.log" Feb 16 22:47:11 crc kubenswrapper[4865]: I0216 22:47:11.982171 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" event={"ID":"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9","Type":"ContainerStarted","Data":"be9325937ac87ec492489ef34007a1e234ff7037a0f05d40bd102a85a86cc3c4"} Feb 16 22:47:11 crc kubenswrapper[4865]: I0216 22:47:11.982887 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" Feb 16 22:47:11 crc kubenswrapper[4865]: I0216 22:47:11.983614 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:11 crc kubenswrapper[4865]: I0216 22:47:11.983656 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:11 crc kubenswrapper[4865]: I0216 22:47:11.983667 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:11 crc kubenswrapper[4865]: I0216 22:47:11.983678 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:11 crc kubenswrapper[4865]: I0216 22:47:11.983687 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:11Z","lastTransitionTime":"2026-02-16T22:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:11 crc kubenswrapper[4865]: I0216 22:47:11.999589 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5ed4f9f-4e35-4b99-a523-d9cd6f90b1fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c3c8e1511bfaab96c1e33d3e1d1dc227344cb92c64170dc40eef40341f9282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef4fbb630ee
354131544dade7633424757a1a1b885fec2e271da25984dbec3ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f74aeafd4eb42c1fa802e03dd3df19fc25c3e5546e453008c167955d336ca2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca218b91fbbe3033c021358958b45efb93f3c1875a2d18446d682ad80ff55122\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:11Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:12 crc kubenswrapper[4865]: I0216 22:47:12.023129 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5d9c6929ae494c038ff488c2fee47aad62d31a677069fab576505bc1c56c3ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd10e8581ae8a5ec147d964363cf296c1ea48987e27edfaaf23a4f2cc3b8806c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:12Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:12 crc kubenswrapper[4865]: I0216 22:47:12.036619 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p2bwl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fc94743-41ce-4311-b0a7-d24aec69e9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afb2eef5907b2fb1f43b59f3b8afb68253d49f3a83f0cdc5cd10df0e45962156\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mntqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p2bwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:12Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:12 crc kubenswrapper[4865]: I0216 22:47:12.054534 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67de42bb03576bab461ec24db86f0fb38229d2b303d174c1892468bcfd20a1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T2
2:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:12Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:12 crc kubenswrapper[4865]: I0216 22:47:12.071524 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af5ee041-5763-4a28-9d12-7ba21bbb9dbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5377362d269a2fbd24e93ddf0ce2803daadff72dd5b6926424e2d818448eb907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c0be88c4425b343893860e323d0e9c1a661e3cd
0cadad6714da67ebadf57bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7sl6f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:12Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:12 crc kubenswrapper[4865]: I0216 22:47:12.086637 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:12 crc kubenswrapper[4865]: I0216 22:47:12.086670 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:12 crc kubenswrapper[4865]: I0216 22:47:12.086679 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:12 crc 
kubenswrapper[4865]: I0216 22:47:12.086694 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:12 crc kubenswrapper[4865]: I0216 22:47:12.086704 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:12Z","lastTransitionTime":"2026-02-16T22:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:47:12 crc kubenswrapper[4865]: I0216 22:47:12.088890 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4rgl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0091174cc072ed0bc0bfa1a1501b4fa2b801585525dae159b591213c74ddbf5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240fca5a6169674d04d04924dbb1859d9cfad4ebaf3bcb15b685d1dcb52d6879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://240fca5a6169674d04d04924dbb1859d9cfad4ebaf3bcb15b685d1dcb52d6879\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba2a6ff3b821f9bd20c15af6000c4b04b7382cca4bddba49c3cd3a95bf18661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ba2a6ff3b821f9bd20c15af6000c4b04b7382cca4bddba49c3cd3a95bf18661\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e050e591eac20bf9ccc57f3c4eac82e3a420e9e850ad4057ce69c236a51dd5ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e050e591eac20bf9ccc57f3c4eac82e3a420e9e850ad4057ce69c236a51dd5ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df6fbcf30225717389e253f415a98dcb258103cf591c6802daeb657018386ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1df6fbcf30225717389e253f415a98dcb258103cf591c6802daeb657018386ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9e249b0c4127dfa42d0aeb4663cec7ecc73cec31a922fc17a0c772cce72c017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9e249b0c4127dfa42d0aeb4663cec7ecc73cec31a922fc17a0c772cce72c017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de26abfeb26481d43117471fa3512ef69ab782712371782639c24c8232786478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de
26abfeb26481d43117471fa3512ef69ab782712371782639c24c8232786478\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4rgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:12Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:12 crc kubenswrapper[4865]: I0216 22:47:12.106915 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8eb70ba-80dc-4449-b629-8f2d3462ac25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9420ab4fc2cfb723f98690b12676378560f2ca512ea2f23ea4108561bcf0a83b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d5f3b449d7f497dd16213f7a376d725ed75adaeb27449fa2005893b85cd8f9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ddcd1ac2d814341fef78f1d3dcb8d8e04d5e8506ca793c023085f0e019fc44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1932a518fb9c7c514939c9efdb9f5ef9b52fa277bf27931a2fe29f04c486141\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e74a443b63735422bd0f5ad851213f393074c4bda570e41ff102c84aa9ac2929\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T22:46:20Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0216 22:46:20.218565 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 22:46:20.218864 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 22:46:20.220590 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3068580495/tls.crt::/tmp/serving-cert-3068580495/tls.key\\\\\\\"\\\\nI0216 22:46:20.677081 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 22:46:20.691270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 22:46:20.691373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 22:46:20.698439 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 22:46:20.698472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 22:46:20.713488 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 22:46:20.713510 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 22:46:20.713535 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 22:46:20.713543 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 22:46:20.713547 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 22:46:20.713551 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 22:46:20.713555 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 22:46:20.713558 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 22:46:20.715166 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://083e95882332fbcae3b7e3c07a19c05645e560b7c5e92813464a5ca4e5e52a26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e53d3f26f921be607e616b7c8790e3be0605a2b94a3f7be2b62e1beaefb350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3e53d3f26f921be607e616b7c8790e3be0
605a2b94a3f7be2b62e1beaefb350\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:12Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:12 crc kubenswrapper[4865]: I0216 22:47:12.131871 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17957dad-fc4e-47f6-9008-142bf7ab12b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://416e6ac61265959d9c4895987503c390040a6d4ecdf12c0530ded6fc721bfaa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be54b2a9b6551641dce2e8e886f04d8a9cf76399a46d6ba0533bfcf766453631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1757b170055a16e65276f524bda421fa929e62866ebe19163d6558e551f6d7ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3488da6d63933ae020077a4fb277b53e4e85bd75cac43d6c64bb962472a832f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://a3488da6d63933ae020077a4fb277b53e4e85bd75cac43d6c64bb962472a832f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:12Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:12 crc kubenswrapper[4865]: I0216 22:47:12.146347 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:12Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:12 crc kubenswrapper[4865]: I0216 22:47:12.157231 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rhf5k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec3dce5a-cc80-4a4a-afa1-1ce88585fe7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af23c5919f064a067e1f6ffd9bcfc669c95f789fecc58b6de768c838fff8eeb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42pjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rhf5k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:12Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:12 crc kubenswrapper[4865]: I0216 22:47:12.206861 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f763f86f21e0ae26d6da8694383bf65bf9d2ee2cc3f6331559455f18c1a5ea12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdf0a2e5286d172d17780b0ba829a7b1365b3fcb130069d7497b207041e54d67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1d5f61857de97cd302040f99ba0cf8f6aa30fa344ee9bb5d486222bb59280f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48168de44ee347280252a85da7abba068922ce7558bd01c8e1d11af13f305dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fbb61043059110013736d3d693e6d602b199a4c62907436a4eb88aae192cd51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://decf362b373d3e8cc82caf2afe0405a61ec72f794680345ee6bd4d7acee32cd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be9325937ac87ec492489ef34007a1e234ff7037a0f05d40bd102a85a86cc3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91a79105df66a02db4d12994894da106dc07acf1fd46d19a8bd6506533d047c1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T22:46:48Z\\\",\\\"message\\\":\\\"sip/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 22:46:48.478194 6528 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 22:46:48.478230 6528 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 22:46:48.478257 6528 handler.go:190] Sending *v1.EgressFirewall event handler 9 for 
removal\\\\nI0216 22:46:48.478372 6528 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 22:46:48.478404 6528 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 22:46:48.478405 6528 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 22:46:48.478434 6528 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 22:46:48.478457 6528 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 22:46:48.478467 6528 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 22:46:48.478500 6528 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 22:46:48.478537 6528 factory.go:656] Stopping watch factory\\\\nI0216 22:46:48.478556 6528 ovnkube.go:599] Stopped ovnkube\\\\nI0216 22:46:48.478584 6528 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 22:46:48.478599 6528 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0216 22:46:48.478625 6528 handler.go:208] Removed *v1.Node event 
ha\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dee85f35d78a0b30018c179a53dd39cbb1b0edfa9897290cd13fdc9680461239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fffded71eee2859beb8f01894e183602492f5c41373ef0e23ee150159d01ae4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fffded71eee2859beb8f01894e183602492f5c41373ef0e23ee150159d01ae4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v9gjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:12Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:12 crc kubenswrapper[4865]: I0216 22:47:12.208569 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:12 crc kubenswrapper[4865]: I0216 22:47:12.208600 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:12 crc kubenswrapper[4865]: I0216 22:47:12.208610 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:12 crc kubenswrapper[4865]: I0216 22:47:12.208625 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:12 crc kubenswrapper[4865]: I0216 22:47:12.208633 4865 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:12Z","lastTransitionTime":"2026-02-16T22:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:47:12 crc kubenswrapper[4865]: I0216 22:47:12.226483 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://580bc5f5c63d040c31c80d1c34c70711c035b80d8bd34546ba77e2477a938304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:12Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:12 crc kubenswrapper[4865]: I0216 22:47:12.237888 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:12Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:12 crc kubenswrapper[4865]: I0216 22:47:12.250007 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:12Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:12 crc kubenswrapper[4865]: I0216 22:47:12.261025 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tqmsq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"518e6107-6873-4bd2-86a6-e422763483ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74e1013d02cb01d60662d6619c25c7a40de7bd7bde7f0239007c54dba55879cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c555b648518de93a1b9ecb1bb8b232aee9016dd148db53b2b4aaf96dd23951c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T22:47:08Z\\\",\\\"message\\\":\\\"2026-02-16T22:46:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_69f8d7f9-9d98-4d01-a8d5-29073da708ad\\\\n2026-02-16T22:46:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_69f8d7f9-9d98-4d01-a8d5-29073da708ad to /host/opt/cni/bin/\\\\n2026-02-16T22:46:23Z [verbose] multus-daemon started\\\\n2026-02-16T22:46:23Z [verbose] 
Readiness Indicator file check\\\\n2026-02-16T22:47:08Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwt79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tqmsq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:12Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:12 crc kubenswrapper[4865]: I0216 22:47:12.272156 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-shf2n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d979a250-e586-4f45-b78e-ce99dbdbe9a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c13351b5aadf20e7142d197d9ae8333bff29ca6c3f7668c93f0dd510b52eb51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df29cc453707021c661a9f53e979fcab4c05a
00e4dcef9f85a727168f1e78ed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-shf2n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:12Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:12 crc kubenswrapper[4865]: I0216 22:47:12.281475 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ggbcr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e0ca52e-7cb6-4d90-8d0b-4124cce13447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt54\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt54\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ggbcr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:12Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:12 crc 
kubenswrapper[4865]: I0216 22:47:12.311362 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:12 crc kubenswrapper[4865]: I0216 22:47:12.311403 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:12 crc kubenswrapper[4865]: I0216 22:47:12.311416 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:12 crc kubenswrapper[4865]: I0216 22:47:12.311436 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:12 crc kubenswrapper[4865]: I0216 22:47:12.311453 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:12Z","lastTransitionTime":"2026-02-16T22:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:47:12 crc kubenswrapper[4865]: I0216 22:47:12.414010 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 22:47:12 crc kubenswrapper[4865]: I0216 22:47:12.414021 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 22:47:12 crc kubenswrapper[4865]: E0216 22:47:12.414125 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 22:47:12 crc kubenswrapper[4865]: I0216 22:47:12.414205 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ggbcr" Feb 16 22:47:12 crc kubenswrapper[4865]: E0216 22:47:12.414390 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 22:47:12 crc kubenswrapper[4865]: E0216 22:47:12.414489 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ggbcr" podUID="0e0ca52e-7cb6-4d90-8d0b-4124cce13447" Feb 16 22:47:12 crc kubenswrapper[4865]: I0216 22:47:12.414972 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 22:47:12 crc kubenswrapper[4865]: E0216 22:47:12.415364 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 22:47:12 crc kubenswrapper[4865]: I0216 22:47:12.415457 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:12 crc kubenswrapper[4865]: I0216 22:47:12.415797 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:12 crc kubenswrapper[4865]: I0216 22:47:12.415998 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:12 crc kubenswrapper[4865]: I0216 22:47:12.416198 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:12 crc kubenswrapper[4865]: I0216 22:47:12.416399 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:12Z","lastTransitionTime":"2026-02-16T22:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:12 crc kubenswrapper[4865]: I0216 22:47:12.429048 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 04:03:47.276827482 +0000 UTC Feb 16 22:47:12 crc kubenswrapper[4865]: I0216 22:47:12.519029 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:12 crc kubenswrapper[4865]: I0216 22:47:12.519101 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:12 crc kubenswrapper[4865]: I0216 22:47:12.519119 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:12 crc kubenswrapper[4865]: I0216 22:47:12.519145 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:12 crc kubenswrapper[4865]: I0216 22:47:12.519162 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:12Z","lastTransitionTime":"2026-02-16T22:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:12 crc kubenswrapper[4865]: I0216 22:47:12.621904 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:12 crc kubenswrapper[4865]: I0216 22:47:12.622612 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:12 crc kubenswrapper[4865]: I0216 22:47:12.622776 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:12 crc kubenswrapper[4865]: I0216 22:47:12.622960 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:12 crc kubenswrapper[4865]: I0216 22:47:12.623160 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:12Z","lastTransitionTime":"2026-02-16T22:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:12 crc kubenswrapper[4865]: I0216 22:47:12.726077 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:12 crc kubenswrapper[4865]: I0216 22:47:12.726527 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:12 crc kubenswrapper[4865]: I0216 22:47:12.726668 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:12 crc kubenswrapper[4865]: I0216 22:47:12.726815 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:12 crc kubenswrapper[4865]: I0216 22:47:12.726955 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:12Z","lastTransitionTime":"2026-02-16T22:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:12 crc kubenswrapper[4865]: I0216 22:47:12.829528 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:12 crc kubenswrapper[4865]: I0216 22:47:12.829583 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:12 crc kubenswrapper[4865]: I0216 22:47:12.829601 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:12 crc kubenswrapper[4865]: I0216 22:47:12.829624 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:12 crc kubenswrapper[4865]: I0216 22:47:12.829641 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:12Z","lastTransitionTime":"2026-02-16T22:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:12 crc kubenswrapper[4865]: I0216 22:47:12.932580 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:12 crc kubenswrapper[4865]: I0216 22:47:12.932650 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:12 crc kubenswrapper[4865]: I0216 22:47:12.932667 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:12 crc kubenswrapper[4865]: I0216 22:47:12.932694 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:12 crc kubenswrapper[4865]: I0216 22:47:12.932711 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:12Z","lastTransitionTime":"2026-02-16T22:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:12 crc kubenswrapper[4865]: I0216 22:47:12.988068 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v9gjl_2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9/ovnkube-controller/3.log" Feb 16 22:47:12 crc kubenswrapper[4865]: I0216 22:47:12.988654 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v9gjl_2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9/ovnkube-controller/2.log" Feb 16 22:47:12 crc kubenswrapper[4865]: I0216 22:47:12.991774 4865 generic.go:334] "Generic (PLEG): container finished" podID="2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" containerID="be9325937ac87ec492489ef34007a1e234ff7037a0f05d40bd102a85a86cc3c4" exitCode=1 Feb 16 22:47:12 crc kubenswrapper[4865]: I0216 22:47:12.991808 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" event={"ID":"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9","Type":"ContainerDied","Data":"be9325937ac87ec492489ef34007a1e234ff7037a0f05d40bd102a85a86cc3c4"} Feb 16 22:47:12 crc kubenswrapper[4865]: I0216 22:47:12.991845 4865 scope.go:117] "RemoveContainer" containerID="91a79105df66a02db4d12994894da106dc07acf1fd46d19a8bd6506533d047c1" Feb 16 22:47:12 crc kubenswrapper[4865]: I0216 22:47:12.993032 4865 scope.go:117] "RemoveContainer" containerID="be9325937ac87ec492489ef34007a1e234ff7037a0f05d40bd102a85a86cc3c4" Feb 16 22:47:12 crc kubenswrapper[4865]: E0216 22:47:12.993506 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-v9gjl_openshift-ovn-kubernetes(2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9)\"" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" podUID="2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" Feb 16 22:47:13 crc kubenswrapper[4865]: I0216 22:47:13.008807 4865 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5ed4f9f-4e35-4b99-a523-d9cd6f90b1fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c3c8e1511bfaab96c1e33d3e1d1dc227344cb92c64170dc40eef40341f9282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef4fbb630ee354131544dade7633424757a1a1b885fec2e271da25984dbec3ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1
220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f74aeafd4eb42c1fa802e03dd3df19fc25c3e5546e453008c167955d336ca2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca218b91fbbe3033c021358958b45efb93f3c1875a2d18446d682ad80ff55122\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:13Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:13 crc kubenswrapper[4865]: I0216 22:47:13.023258 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5d9c6929ae494c038ff488c2fee47aad62d31a677069fab576505bc1c56c3ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd10e8581ae8a5ec147d964363cf296c1ea48987e27edfaaf23a4f2cc3b8806c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:13Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:13 crc kubenswrapper[4865]: I0216 22:47:13.033551 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p2bwl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fc94743-41ce-4311-b0a7-d24aec69e9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afb2eef5907b2fb1f43b59f3b8afb68253d49f3a83f0cdc5cd10df0e45962156\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mntqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p2bwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:13Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:13 crc kubenswrapper[4865]: I0216 22:47:13.034991 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:13 crc kubenswrapper[4865]: I0216 22:47:13.035068 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:13 crc kubenswrapper[4865]: I0216 22:47:13.035078 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:13 crc kubenswrapper[4865]: I0216 22:47:13.035111 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:13 crc kubenswrapper[4865]: I0216 22:47:13.035122 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:13Z","lastTransitionTime":"2026-02-16T22:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:13 crc kubenswrapper[4865]: I0216 22:47:13.047747 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67de42bb03576bab461ec24db86f0fb38229d2b303d174c1892468bcfd20a1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:13Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:13 crc kubenswrapper[4865]: I0216 22:47:13.061665 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af5ee041-5763-4a28-9d12-7ba21bbb9dbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5377362d269a2fbd24e93ddf0ce2803daadff72dd5b6926424e2d818448eb907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c0be88c4425b343893860e323d0e9c1a661e3cd0cadad6714da67ebadf57bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7sl6f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-16T22:47:13Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:13 crc kubenswrapper[4865]: I0216 22:47:13.078892 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4rgl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0091174cc072ed0bc0bfa1a1501b4fa2b801585525dae159b591213c74ddbf5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240fca5a6169674d04d04924dbb1859d9cfad4ebaf3bcb15b685d1dcb52d6879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://240fca5a6169674d04d04924dbb1859d9cfad4ebaf3bcb15b685d1dcb52d6879\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba2a6ff3b821f9bd20c15af6000c4b04b7382cca4bddba49c3cd3a95bf18661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://5ba2a6ff3b821f9bd20c15af6000c4b04b7382cca4bddba49c3cd3a95bf18661\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e050e591eac20bf9ccc57f3c4eac82e3a420e9e850ad4057ce69c236a51dd5ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e050e591eac20bf9ccc57f3c4eac82e3a420e9e850ad4057ce69c236a51dd5ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df6fbcf30225717389e253f415a98dcb258103cf591c6802daeb657018386ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1df6fbcf30225717389e253f415a98dcb258103cf591c6802daeb657018386ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9e249b0c4127dfa42d0aeb4663cec7ecc73cec31a922fc17a0c772cce72c017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9e249b0c4127dfa42d0aeb4663cec7ecc73cec31a922fc17a0c772cce72c017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de26abfeb26481d43117471fa3512ef69ab782712371782639c24c8232786478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de26abfeb26481d43117471fa3512ef69ab782712371782639c24c8232786478\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":
\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4rgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:13Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:13 crc kubenswrapper[4865]: I0216 22:47:13.090123 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rhf5k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec3dce5a-cc80-4a4a-afa1-1ce88585fe7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af23c5919f064a067e1f6ffd9bcfc669c95f789fecc58b6de768c838fff8eeb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42pjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rhf5k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:13Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:13 crc kubenswrapper[4865]: I0216 22:47:13.111368 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f763f86f21e0ae26d6da8694383bf65bf9d2ee2cc3f6331559455f18c1a5ea12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdf0a2e5286d172d17780b0ba829a7b1365b3fcb130069d7497b207041e54d67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1d5f61857de97cd302040f99ba0cf8f6aa30fa344ee9bb5d486222bb59280f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48168de44ee347280252a85da7abba068922ce7558bd01c8e1d11af13f305dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fbb61043059110013736d3d693e6d602b199a4c62907436a4eb88aae192cd51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://decf362b373d3e8cc82caf2afe0405a61ec72f794680345ee6bd4d7acee32cd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be9325937ac87ec492489ef34007a1e234ff7037a0f05d40bd102a85a86cc3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91a79105df66a02db4d12994894da106dc07acf1fd46d19a8bd6506533d047c1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T22:46:48Z\\\",\\\"message\\\":\\\"sip/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 22:46:48.478194 6528 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 22:46:48.478230 6528 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 22:46:48.478257 6528 handler.go:190] Sending *v1.EgressFirewall event handler 9 for 
removal\\\\nI0216 22:46:48.478372 6528 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 22:46:48.478404 6528 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 22:46:48.478405 6528 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 22:46:48.478434 6528 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 22:46:48.478457 6528 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 22:46:48.478467 6528 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 22:46:48.478500 6528 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 22:46:48.478537 6528 factory.go:656] Stopping watch factory\\\\nI0216 22:46:48.478556 6528 ovnkube.go:599] Stopped ovnkube\\\\nI0216 22:46:48.478584 6528 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 22:46:48.478599 6528 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0216 22:46:48.478625 6528 handler.go:208] Removed *v1.Node event ha\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be9325937ac87ec492489ef34007a1e234ff7037a0f05d40bd102a85a86cc3c4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T22:47:12Z\\\",\\\"message\\\":\\\"sip/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 22:47:12.285919 6904 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 22:47:12.285957 6904 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 22:47:12.285976 6904 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 22:47:12.286002 6904 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 22:47:12.286006 6904 handler.go:190] Sending *v1.Node event handler 7 for 
removal\\\\nI0216 22:47:12.286039 6904 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 22:47:12.286050 6904 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 22:47:12.286058 6904 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 22:47:12.286048 6904 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 22:47:12.286081 6904 factory.go:656] Stopping watch factory\\\\nI0216 22:47:12.286112 6904 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 22:47:12.286147 6904 ovnkube.go:599] Stopped ovnkube\\\\nI0216 22:47:12.286106 6904 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 22:47:12.286077 6904 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 22:47:12.286096 6904 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI02\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T22:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-net
d\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dee85f35d78a0b30018c179a53dd39cbb1b0edfa9897290cd13fdc9680461239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\
":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fffded71eee2859beb8f01894e183602492f5c41373ef0e23ee150159d01ae4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fffded71eee2859beb8f01894e183602492f5c41373ef0e23ee150159d01ae4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v9gjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:13Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:13 crc kubenswrapper[4865]: I0216 22:47:13.130041 4865 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8eb70ba-80dc-4449-b629-8f2d3462ac25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9420ab4fc2cfb723f98690b12676378560f2ca512ea2f23ea4108561bcf0a83b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d5f3b449d7f497dd16213f7a376d725ed75adaeb27449fa2005893b85cd8f9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-clu
ster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ddcd1ac2d814341fef78f1d3dcb8d8e04d5e8506ca793c023085f0e019fc44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1932a518fb9c7c514939c9efdb9f5ef9b52fa277bf27931a2fe29f04c486141\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e74a443b63735422bd0f5ad851213f393074c4bda570e41ff102c84aa9a
c2929\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 22:46:20.218565 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 22:46:20.218864 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 22:46:20.220590 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3068580495/tls.crt::/tmp/serving-cert-3068580495/tls.key\\\\\\\"\\\\nI0216 22:46:20.677081 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 22:46:20.691270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 22:46:20.691373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 22:46:20.698439 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 22:46:20.698472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 22:46:20.713488 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 22:46:20.713510 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 22:46:20.713535 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 22:46:20.713543 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 22:46:20.713547 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 22:46:20.713551 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 22:46:20.713555 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 22:46:20.713558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 22:46:20.715166 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://083e95882332fbcae3b7e3c07a19c05645e560b7c5e92813464a5ca4e5e52a26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e53d3f26f921be607e616b7c8790e3be0605a2b94a3f7be2b62e1beaefb350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"
state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3e53d3f26f921be607e616b7c8790e3be0605a2b94a3f7be2b62e1beaefb350\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:13Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:13 crc kubenswrapper[4865]: I0216 22:47:13.139120 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:13 crc kubenswrapper[4865]: I0216 22:47:13.139537 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:13 crc kubenswrapper[4865]: I0216 22:47:13.139940 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:13 crc kubenswrapper[4865]: I0216 22:47:13.140642 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:13 crc kubenswrapper[4865]: I0216 22:47:13.141034 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:13Z","lastTransitionTime":"2026-02-16T22:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:47:13 crc kubenswrapper[4865]: I0216 22:47:13.146136 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17957dad-fc4e-47f6-9008-142bf7ab12b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://416e6ac61265959d9c4895987503c390040a6d4ecdf12c0530ded6fc721bfaa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\
\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be54b2a9b6551641dce2e8e886f04d8a9cf76399a46d6ba0533bfcf766453631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1757b170055a16e65276f524bda421fa929e62866ebe19163d6558e551f6d7ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3488da6d63933ae020077a4fb277b53e4e85bd75cac43d6c64bb962472a832f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8
a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3488da6d63933ae020077a4fb277b53e4e85bd75cac43d6c64bb962472a832f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:13Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:13 crc kubenswrapper[4865]: I0216 22:47:13.159734 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:13Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:13 crc kubenswrapper[4865]: I0216 22:47:13.173593 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tqmsq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"518e6107-6873-4bd2-86a6-e422763483ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74e1013d02cb01d60662d6619c25c7a40de7bd7bde7f0239007c54dba55879cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c555b648518de93a1b9ecb1bb8b232aee9016dd148db53b2b4aaf96dd23951c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T22:47:08Z\\\",\\\"message\\\":\\\"2026-02-16T22:46:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_69f8d7f9-9d98-4d01-a8d5-29073da708ad\\\\n2026-02-16T22:46:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_69f8d7f9-9d98-4d01-a8d5-29073da708ad to /host/opt/cni/bin/\\\\n2026-02-16T22:46:23Z [verbose] multus-daemon started\\\\n2026-02-16T22:46:23Z [verbose] 
Readiness Indicator file check\\\\n2026-02-16T22:47:08Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwt79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tqmsq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:13Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:13 crc kubenswrapper[4865]: I0216 22:47:13.185209 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-shf2n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d979a250-e586-4f45-b78e-ce99dbdbe9a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c13351b5aadf20e7142d197d9ae8333bff29ca6c3f7668c93f0dd510b52eb51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df29cc453707021c661a9f53e979fcab4c05a
00e4dcef9f85a727168f1e78ed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-shf2n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:13Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:13 crc kubenswrapper[4865]: I0216 22:47:13.197385 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ggbcr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e0ca52e-7cb6-4d90-8d0b-4124cce13447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt54\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt54\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ggbcr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:13Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:13 crc 
kubenswrapper[4865]: I0216 22:47:13.215841 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://580bc5f5c63d040c31c80d1c34c70711c035b80d8bd34546ba77e2477a938304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:13Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:13 crc kubenswrapper[4865]: I0216 22:47:13.232461 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:13Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:13 crc kubenswrapper[4865]: I0216 22:47:13.245391 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:13 crc kubenswrapper[4865]: I0216 22:47:13.245388 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:13Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:13 crc kubenswrapper[4865]: I0216 22:47:13.245450 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:13 crc kubenswrapper[4865]: I0216 
22:47:13.245465 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:13 crc kubenswrapper[4865]: I0216 22:47:13.245488 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:13 crc kubenswrapper[4865]: I0216 22:47:13.245503 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:13Z","lastTransitionTime":"2026-02-16T22:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:47:13 crc kubenswrapper[4865]: I0216 22:47:13.349437 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:13 crc kubenswrapper[4865]: I0216 22:47:13.349501 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:13 crc kubenswrapper[4865]: I0216 22:47:13.349521 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:13 crc kubenswrapper[4865]: I0216 22:47:13.349548 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:13 crc kubenswrapper[4865]: I0216 22:47:13.349568 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:13Z","lastTransitionTime":"2026-02-16T22:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:13 crc kubenswrapper[4865]: I0216 22:47:13.429438 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 20:15:20.073875422 +0000 UTC Feb 16 22:47:13 crc kubenswrapper[4865]: I0216 22:47:13.453442 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:13 crc kubenswrapper[4865]: I0216 22:47:13.453501 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:13 crc kubenswrapper[4865]: I0216 22:47:13.453511 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:13 crc kubenswrapper[4865]: I0216 22:47:13.453531 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:13 crc kubenswrapper[4865]: I0216 22:47:13.453549 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:13Z","lastTransitionTime":"2026-02-16T22:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:13 crc kubenswrapper[4865]: I0216 22:47:13.556535 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:13 crc kubenswrapper[4865]: I0216 22:47:13.556588 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:13 crc kubenswrapper[4865]: I0216 22:47:13.556599 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:13 crc kubenswrapper[4865]: I0216 22:47:13.556618 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:13 crc kubenswrapper[4865]: I0216 22:47:13.556628 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:13Z","lastTransitionTime":"2026-02-16T22:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:13 crc kubenswrapper[4865]: I0216 22:47:13.659494 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:13 crc kubenswrapper[4865]: I0216 22:47:13.659544 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:13 crc kubenswrapper[4865]: I0216 22:47:13.659559 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:13 crc kubenswrapper[4865]: I0216 22:47:13.659579 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:13 crc kubenswrapper[4865]: I0216 22:47:13.659591 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:13Z","lastTransitionTime":"2026-02-16T22:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:13 crc kubenswrapper[4865]: I0216 22:47:13.762504 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:13 crc kubenswrapper[4865]: I0216 22:47:13.762609 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:13 crc kubenswrapper[4865]: I0216 22:47:13.762636 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:13 crc kubenswrapper[4865]: I0216 22:47:13.762673 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:13 crc kubenswrapper[4865]: I0216 22:47:13.762695 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:13Z","lastTransitionTime":"2026-02-16T22:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:13 crc kubenswrapper[4865]: I0216 22:47:13.865408 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:13 crc kubenswrapper[4865]: I0216 22:47:13.865468 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:13 crc kubenswrapper[4865]: I0216 22:47:13.865485 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:13 crc kubenswrapper[4865]: I0216 22:47:13.865517 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:13 crc kubenswrapper[4865]: I0216 22:47:13.865536 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:13Z","lastTransitionTime":"2026-02-16T22:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:13 crc kubenswrapper[4865]: I0216 22:47:13.968087 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:13 crc kubenswrapper[4865]: I0216 22:47:13.968133 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:13 crc kubenswrapper[4865]: I0216 22:47:13.968146 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:13 crc kubenswrapper[4865]: I0216 22:47:13.968162 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:13 crc kubenswrapper[4865]: I0216 22:47:13.968174 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:13Z","lastTransitionTime":"2026-02-16T22:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:13 crc kubenswrapper[4865]: I0216 22:47:13.997220 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v9gjl_2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9/ovnkube-controller/3.log" Feb 16 22:47:14 crc kubenswrapper[4865]: I0216 22:47:14.000503 4865 scope.go:117] "RemoveContainer" containerID="be9325937ac87ec492489ef34007a1e234ff7037a0f05d40bd102a85a86cc3c4" Feb 16 22:47:14 crc kubenswrapper[4865]: E0216 22:47:14.000658 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-v9gjl_openshift-ovn-kubernetes(2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9)\"" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" podUID="2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" Feb 16 22:47:14 crc kubenswrapper[4865]: I0216 22:47:14.012665 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-shf2n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d979a250-e586-4f45-b78e-ce99dbdbe9a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c13351b5aadf20e7142d197d9ae8333bff29ca6c3f7668c93f0dd510b52eb51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df29cc453707021c661a9f53e979fcab4c05a
00e4dcef9f85a727168f1e78ed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-shf2n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:14Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:14 crc kubenswrapper[4865]: I0216 22:47:14.023344 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ggbcr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e0ca52e-7cb6-4d90-8d0b-4124cce13447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt54\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt54\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ggbcr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:14Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:14 crc 
kubenswrapper[4865]: I0216 22:47:14.036808 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://580bc5f5c63d040c31c80d1c34c70711c035b80d8bd34546ba77e2477a938304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:14Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:14 crc kubenswrapper[4865]: I0216 22:47:14.050039 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:14Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:14 crc kubenswrapper[4865]: I0216 22:47:14.064881 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:14Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:14 crc kubenswrapper[4865]: I0216 22:47:14.071136 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:14 crc kubenswrapper[4865]: I0216 22:47:14.071179 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:14 crc kubenswrapper[4865]: I0216 22:47:14.071188 4865 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:14 crc kubenswrapper[4865]: I0216 22:47:14.071217 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:14 crc kubenswrapper[4865]: I0216 22:47:14.071228 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:14Z","lastTransitionTime":"2026-02-16T22:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:47:14 crc kubenswrapper[4865]: I0216 22:47:14.078984 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tqmsq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"518e6107-6873-4bd2-86a6-e422763483ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74e1013d02cb01d60662d6619c25c7a40de7bd7bde7
f0239007c54dba55879cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c555b648518de93a1b9ecb1bb8b232aee9016dd148db53b2b4aaf96dd23951c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T22:47:08Z\\\",\\\"message\\\":\\\"2026-02-16T22:46:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_69f8d7f9-9d98-4d01-a8d5-29073da708ad\\\\n2026-02-16T22:46:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_69f8d7f9-9d98-4d01-a8d5-29073da708ad to /host/opt/cni/bin/\\\\n2026-02-16T22:46:23Z [verbose] multus-daemon started\\\\n2026-02-16T22:46:23Z [verbose] Readiness Indicator file check\\\\n2026-02-16T22:47:08Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwt79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tqmsq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:14Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:14 crc kubenswrapper[4865]: I0216 22:47:14.093186 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5ed4f9f-4e35-4b99-a523-d9cd6f90b1fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c3c8e1511bfaab96c1e33d3e1d1dc227344cb92c64170dc40eef40341f9282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef4fbb630ee354131544dade7633424757a1a1b885fec2e271da25984dbec3ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f74aeafd4eb42c1fa802e03dd3df19fc25c3e5546e453008c167955d336ca2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":
\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca218b91fbbe3033c021358958b45efb93f3c1875a2d18446d682ad80ff55122\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:14Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:14 crc kubenswrapper[4865]: I0216 22:47:14.109259 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5d9c6929ae494c038ff488c2fee47aad62d31a677069fab576505bc1c56c3ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd10e8581ae8a5ec147d964363cf296c1ea48987e27edfaaf23a4f2cc3b8806c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:14Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:14 crc kubenswrapper[4865]: I0216 22:47:14.122619 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p2bwl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fc94743-41ce-4311-b0a7-d24aec69e9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afb2eef5907b2fb1f43b59f3b8afb68253d49f3a83f0cdc5cd10df0e45962156\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mntqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p2bwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:14Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:14 crc kubenswrapper[4865]: I0216 22:47:14.140777 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67de42bb03576bab461ec24db86f0fb38229d2b303d174c1892468bcfd20a1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T2
2:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:14Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:14 crc kubenswrapper[4865]: I0216 22:47:14.153723 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af5ee041-5763-4a28-9d12-7ba21bbb9dbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5377362d269a2fbd24e93ddf0ce2803daadff72dd5b6926424e2d818448eb907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c0be88c4425b343893860e323d0e9c1a661e3cd
0cadad6714da67ebadf57bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7sl6f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:14Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:14 crc kubenswrapper[4865]: I0216 22:47:14.173245 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:14 crc kubenswrapper[4865]: I0216 22:47:14.173303 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:14 crc kubenswrapper[4865]: I0216 22:47:14.173314 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:14 crc 
kubenswrapper[4865]: I0216 22:47:14.173338 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:14 crc kubenswrapper[4865]: I0216 22:47:14.173354 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:14Z","lastTransitionTime":"2026-02-16T22:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:47:14 crc kubenswrapper[4865]: I0216 22:47:14.173719 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4rgl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0091174cc072ed0bc0bfa1a1501b4fa2b801585525dae159b591213c74ddbf5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240fca5a6169674d04d04924dbb1859d9cfad4ebaf3bcb15b685d1dcb52d6879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://240fca5a6169674d04d04924dbb1859d9cfad4ebaf3bcb15b685d1dcb52d6879\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba2a6ff3b821f9bd20c15af6000c4b04b7382cca4bddba49c3cd3a95bf18661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ba2a6ff3b821f9bd20c15af6000c4b04b7382cca4bddba49c3cd3a95bf18661\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e050e591eac20bf9ccc57f3c4eac82e3a420e9e850ad4057ce69c236a51dd5ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e050e591eac20bf9ccc57f3c4eac82e3a420e9e850ad4057ce69c236a51dd5ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df6fbcf30225717389e253f415a98dcb258103cf591c6802daeb657018386ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1df6fbcf30225717389e253f415a98dcb258103cf591c6802daeb657018386ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9e249b0c4127dfa42d0aeb4663cec7ecc73cec31a922fc17a0c772cce72c017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9e249b0c4127dfa42d0aeb4663cec7ecc73cec31a922fc17a0c772cce72c017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de26abfeb26481d43117471fa3512ef69ab782712371782639c24c8232786478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de
26abfeb26481d43117471fa3512ef69ab782712371782639c24c8232786478\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4rgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:14Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:14 crc kubenswrapper[4865]: I0216 22:47:14.192335 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f763f86f21e0ae26d6da8694383bf65bf9d2ee2cc3f6331559455f18c1a5ea12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdf0a2e5286d172d17780b0ba829a7b1365b3fcb130069d7497b207041e54d67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1d5f61857de97cd302040f99ba0cf8f6aa30fa344ee9bb5d486222bb59280f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48168de44ee347280252a85da7abba068922ce7558bd01c8e1d11af13f305dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fbb61043059110013736d3d693e6d602b199a4c62907436a4eb88aae192cd51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://decf362b373d3e8cc82caf2afe0405a61ec72f794680345ee6bd4d7acee32cd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be9325937ac87ec492489ef34007a1e234ff7037a0f05d40bd102a85a86cc3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be9325937ac87ec492489ef34007a1e234ff7037a0f05d40bd102a85a86cc3c4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T22:47:12Z\\\",\\\"message\\\":\\\"sip/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 22:47:12.285919 6904 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 22:47:12.285957 6904 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 22:47:12.285976 6904 handler.go:190] Sending *v1.EgressIP event handler 8 for 
removal\\\\nI0216 22:47:12.286002 6904 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 22:47:12.286006 6904 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 22:47:12.286039 6904 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 22:47:12.286050 6904 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 22:47:12.286058 6904 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 22:47:12.286048 6904 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 22:47:12.286081 6904 factory.go:656] Stopping watch factory\\\\nI0216 22:47:12.286112 6904 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 22:47:12.286147 6904 ovnkube.go:599] Stopped ovnkube\\\\nI0216 22:47:12.286106 6904 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 22:47:12.286077 6904 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 22:47:12.286096 6904 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI02\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T22:47:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-v9gjl_openshift-ovn-kubernetes(2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dee85f35d78a0b30018c179a53dd39cbb1b0edfa9897290cd13fdc9680461239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fffded71eee2859beb8f01894e183602492f5c41373ef0e23ee150159d01ae4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fffded71eee2859be
b8f01894e183602492f5c41373ef0e23ee150159d01ae4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v9gjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:14Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:14 crc kubenswrapper[4865]: I0216 22:47:14.205578 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8eb70ba-80dc-4449-b629-8f2d3462ac25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9420ab4fc2cfb723f98690b12676378560f2ca512ea2f23ea4108561bcf0a83b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d5f3b449d7f497dd16213f7a376d725ed75adaeb27449fa2005893b85cd8f9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ddcd1ac2d814341fef78f1d3dcb8d8e04d5e8506ca793c023085f0e019fc44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1932a518fb9c7c514939c9efdb9f5ef9b52fa277bf27931a2fe29f04c486141\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e74a443b63735422bd0f5ad851213f393074c4bda570e41ff102c84aa9ac2929\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T22:46:20Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0216 22:46:20.218565 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 22:46:20.218864 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 22:46:20.220590 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3068580495/tls.crt::/tmp/serving-cert-3068580495/tls.key\\\\\\\"\\\\nI0216 22:46:20.677081 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 22:46:20.691270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 22:46:20.691373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 22:46:20.698439 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 22:46:20.698472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 22:46:20.713488 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 22:46:20.713510 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 22:46:20.713535 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 22:46:20.713543 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 22:46:20.713547 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 22:46:20.713551 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 22:46:20.713555 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 22:46:20.713558 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 22:46:20.715166 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://083e95882332fbcae3b7e3c07a19c05645e560b7c5e92813464a5ca4e5e52a26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e53d3f26f921be607e616b7c8790e3be0605a2b94a3f7be2b62e1beaefb350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3e53d3f26f921be607e616b7c8790e3be0
605a2b94a3f7be2b62e1beaefb350\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:14Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:14 crc kubenswrapper[4865]: I0216 22:47:14.218147 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17957dad-fc4e-47f6-9008-142bf7ab12b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://416e6ac61265959d9c4895987503c390040a6d4ecdf12c0530ded6fc721bfaa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be54b2a9b6551641dce2e8e886f04d8a9cf76399a46d6ba0533bfcf766453631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1757b170055a16e65276f524bda421fa929e62866ebe19163d6558e551f6d7ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3488da6d63933ae020077a4fb277b53e4e85bd75cac43d6c64bb962472a832f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://a3488da6d63933ae020077a4fb277b53e4e85bd75cac43d6c64bb962472a832f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:14Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:14 crc kubenswrapper[4865]: I0216 22:47:14.231120 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:14Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:14 crc kubenswrapper[4865]: I0216 22:47:14.244416 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rhf5k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec3dce5a-cc80-4a4a-afa1-1ce88585fe7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af23c5919f064a067e1f6ffd9bcfc669c95f789fecc58b6de768c838fff8eeb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42pjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rhf5k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:14Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:14 crc kubenswrapper[4865]: I0216 22:47:14.275608 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:14 crc kubenswrapper[4865]: I0216 22:47:14.275647 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:14 crc kubenswrapper[4865]: I0216 22:47:14.275656 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:14 crc kubenswrapper[4865]: I0216 22:47:14.275674 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:14 crc kubenswrapper[4865]: I0216 22:47:14.275684 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:14Z","lastTransitionTime":"2026-02-16T22:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:14 crc kubenswrapper[4865]: I0216 22:47:14.377864 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:14 crc kubenswrapper[4865]: I0216 22:47:14.377904 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:14 crc kubenswrapper[4865]: I0216 22:47:14.377929 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:14 crc kubenswrapper[4865]: I0216 22:47:14.377949 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:14 crc kubenswrapper[4865]: I0216 22:47:14.377960 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:14Z","lastTransitionTime":"2026-02-16T22:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:47:14 crc kubenswrapper[4865]: I0216 22:47:14.414216 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 22:47:14 crc kubenswrapper[4865]: I0216 22:47:14.414421 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 22:47:14 crc kubenswrapper[4865]: I0216 22:47:14.414524 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-ggbcr" Feb 16 22:47:14 crc kubenswrapper[4865]: E0216 22:47:14.414516 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 22:47:14 crc kubenswrapper[4865]: I0216 22:47:14.414737 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 22:47:14 crc kubenswrapper[4865]: E0216 22:47:14.414727 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 22:47:14 crc kubenswrapper[4865]: E0216 22:47:14.415007 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ggbcr" podUID="0e0ca52e-7cb6-4d90-8d0b-4124cce13447" Feb 16 22:47:14 crc kubenswrapper[4865]: E0216 22:47:14.415095 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 22:47:14 crc kubenswrapper[4865]: I0216 22:47:14.430584 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 22:15:52.5315928 +0000 UTC Feb 16 22:47:14 crc kubenswrapper[4865]: I0216 22:47:14.480393 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:14 crc kubenswrapper[4865]: I0216 22:47:14.480435 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:14 crc kubenswrapper[4865]: I0216 22:47:14.480452 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:14 crc kubenswrapper[4865]: I0216 22:47:14.480472 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:14 crc kubenswrapper[4865]: I0216 22:47:14.480489 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:14Z","lastTransitionTime":"2026-02-16T22:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:14 crc kubenswrapper[4865]: I0216 22:47:14.584502 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:14 crc kubenswrapper[4865]: I0216 22:47:14.584556 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:14 crc kubenswrapper[4865]: I0216 22:47:14.584579 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:14 crc kubenswrapper[4865]: I0216 22:47:14.584611 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:14 crc kubenswrapper[4865]: I0216 22:47:14.584635 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:14Z","lastTransitionTime":"2026-02-16T22:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:14 crc kubenswrapper[4865]: I0216 22:47:14.687786 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:14 crc kubenswrapper[4865]: I0216 22:47:14.687864 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:14 crc kubenswrapper[4865]: I0216 22:47:14.687890 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:14 crc kubenswrapper[4865]: I0216 22:47:14.688006 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:14 crc kubenswrapper[4865]: I0216 22:47:14.688034 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:14Z","lastTransitionTime":"2026-02-16T22:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:14 crc kubenswrapper[4865]: I0216 22:47:14.790816 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:14 crc kubenswrapper[4865]: I0216 22:47:14.790876 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:14 crc kubenswrapper[4865]: I0216 22:47:14.790895 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:14 crc kubenswrapper[4865]: I0216 22:47:14.790921 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:14 crc kubenswrapper[4865]: I0216 22:47:14.790939 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:14Z","lastTransitionTime":"2026-02-16T22:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:14 crc kubenswrapper[4865]: I0216 22:47:14.894660 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:14 crc kubenswrapper[4865]: I0216 22:47:14.894693 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:14 crc kubenswrapper[4865]: I0216 22:47:14.894702 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:14 crc kubenswrapper[4865]: I0216 22:47:14.894718 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:14 crc kubenswrapper[4865]: I0216 22:47:14.894730 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:14Z","lastTransitionTime":"2026-02-16T22:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:14 crc kubenswrapper[4865]: I0216 22:47:14.998511 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:14 crc kubenswrapper[4865]: I0216 22:47:14.998573 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:14 crc kubenswrapper[4865]: I0216 22:47:14.998590 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:14 crc kubenswrapper[4865]: I0216 22:47:14.998617 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:14 crc kubenswrapper[4865]: I0216 22:47:14.998636 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:14Z","lastTransitionTime":"2026-02-16T22:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:15 crc kubenswrapper[4865]: I0216 22:47:15.101692 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:15 crc kubenswrapper[4865]: I0216 22:47:15.101738 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:15 crc kubenswrapper[4865]: I0216 22:47:15.101755 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:15 crc kubenswrapper[4865]: I0216 22:47:15.101779 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:15 crc kubenswrapper[4865]: I0216 22:47:15.101796 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:15Z","lastTransitionTime":"2026-02-16T22:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:15 crc kubenswrapper[4865]: I0216 22:47:15.205068 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:15 crc kubenswrapper[4865]: I0216 22:47:15.205115 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:15 crc kubenswrapper[4865]: I0216 22:47:15.205132 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:15 crc kubenswrapper[4865]: I0216 22:47:15.205157 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:15 crc kubenswrapper[4865]: I0216 22:47:15.205174 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:15Z","lastTransitionTime":"2026-02-16T22:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:15 crc kubenswrapper[4865]: I0216 22:47:15.307842 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:15 crc kubenswrapper[4865]: I0216 22:47:15.307896 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:15 crc kubenswrapper[4865]: I0216 22:47:15.307916 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:15 crc kubenswrapper[4865]: I0216 22:47:15.307940 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:15 crc kubenswrapper[4865]: I0216 22:47:15.307957 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:15Z","lastTransitionTime":"2026-02-16T22:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:15 crc kubenswrapper[4865]: I0216 22:47:15.411192 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:15 crc kubenswrapper[4865]: I0216 22:47:15.411372 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:15 crc kubenswrapper[4865]: I0216 22:47:15.411458 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:15 crc kubenswrapper[4865]: I0216 22:47:15.411551 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:15 crc kubenswrapper[4865]: I0216 22:47:15.411580 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:15Z","lastTransitionTime":"2026-02-16T22:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:15 crc kubenswrapper[4865]: I0216 22:47:15.431214 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 16:45:02.294928613 +0000 UTC Feb 16 22:47:15 crc kubenswrapper[4865]: I0216 22:47:15.515637 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:15 crc kubenswrapper[4865]: I0216 22:47:15.515724 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:15 crc kubenswrapper[4865]: I0216 22:47:15.515747 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:15 crc kubenswrapper[4865]: I0216 22:47:15.515774 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:15 crc kubenswrapper[4865]: I0216 22:47:15.515793 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:15Z","lastTransitionTime":"2026-02-16T22:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:15 crc kubenswrapper[4865]: I0216 22:47:15.619477 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:15 crc kubenswrapper[4865]: I0216 22:47:15.619561 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:15 crc kubenswrapper[4865]: I0216 22:47:15.619580 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:15 crc kubenswrapper[4865]: I0216 22:47:15.619612 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:15 crc kubenswrapper[4865]: I0216 22:47:15.619630 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:15Z","lastTransitionTime":"2026-02-16T22:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:15 crc kubenswrapper[4865]: I0216 22:47:15.723186 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:15 crc kubenswrapper[4865]: I0216 22:47:15.723245 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:15 crc kubenswrapper[4865]: I0216 22:47:15.723264 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:15 crc kubenswrapper[4865]: I0216 22:47:15.723317 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:15 crc kubenswrapper[4865]: I0216 22:47:15.723340 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:15Z","lastTransitionTime":"2026-02-16T22:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:15 crc kubenswrapper[4865]: I0216 22:47:15.826667 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:15 crc kubenswrapper[4865]: I0216 22:47:15.826773 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:15 crc kubenswrapper[4865]: I0216 22:47:15.826790 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:15 crc kubenswrapper[4865]: I0216 22:47:15.826816 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:15 crc kubenswrapper[4865]: I0216 22:47:15.826833 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:15Z","lastTransitionTime":"2026-02-16T22:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:15 crc kubenswrapper[4865]: I0216 22:47:15.930177 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:15 crc kubenswrapper[4865]: I0216 22:47:15.930250 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:15 crc kubenswrapper[4865]: I0216 22:47:15.930269 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:15 crc kubenswrapper[4865]: I0216 22:47:15.930326 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:15 crc kubenswrapper[4865]: I0216 22:47:15.930347 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:15Z","lastTransitionTime":"2026-02-16T22:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:16 crc kubenswrapper[4865]: I0216 22:47:16.034270 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:16 crc kubenswrapper[4865]: I0216 22:47:16.034370 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:16 crc kubenswrapper[4865]: I0216 22:47:16.034390 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:16 crc kubenswrapper[4865]: I0216 22:47:16.034417 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:16 crc kubenswrapper[4865]: I0216 22:47:16.034434 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:16Z","lastTransitionTime":"2026-02-16T22:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:16 crc kubenswrapper[4865]: I0216 22:47:16.138158 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:16 crc kubenswrapper[4865]: I0216 22:47:16.138213 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:16 crc kubenswrapper[4865]: I0216 22:47:16.138232 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:16 crc kubenswrapper[4865]: I0216 22:47:16.138260 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:16 crc kubenswrapper[4865]: I0216 22:47:16.138314 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:16Z","lastTransitionTime":"2026-02-16T22:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:16 crc kubenswrapper[4865]: I0216 22:47:16.241306 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:16 crc kubenswrapper[4865]: I0216 22:47:16.241384 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:16 crc kubenswrapper[4865]: I0216 22:47:16.241414 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:16 crc kubenswrapper[4865]: I0216 22:47:16.241448 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:16 crc kubenswrapper[4865]: I0216 22:47:16.241474 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:16Z","lastTransitionTime":"2026-02-16T22:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:16 crc kubenswrapper[4865]: I0216 22:47:16.344713 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:16 crc kubenswrapper[4865]: I0216 22:47:16.344800 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:16 crc kubenswrapper[4865]: I0216 22:47:16.344827 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:16 crc kubenswrapper[4865]: I0216 22:47:16.344859 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:16 crc kubenswrapper[4865]: I0216 22:47:16.344888 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:16Z","lastTransitionTime":"2026-02-16T22:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:47:16 crc kubenswrapper[4865]: I0216 22:47:16.413968 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 22:47:16 crc kubenswrapper[4865]: I0216 22:47:16.414041 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 22:47:16 crc kubenswrapper[4865]: I0216 22:47:16.414041 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 22:47:16 crc kubenswrapper[4865]: E0216 22:47:16.414202 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 22:47:16 crc kubenswrapper[4865]: I0216 22:47:16.414272 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ggbcr" Feb 16 22:47:16 crc kubenswrapper[4865]: E0216 22:47:16.414364 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 22:47:16 crc kubenswrapper[4865]: E0216 22:47:16.414557 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ggbcr" podUID="0e0ca52e-7cb6-4d90-8d0b-4124cce13447" Feb 16 22:47:16 crc kubenswrapper[4865]: E0216 22:47:16.414732 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 22:47:16 crc kubenswrapper[4865]: I0216 22:47:16.431747 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 21:23:21.628695343 +0000 UTC Feb 16 22:47:16 crc kubenswrapper[4865]: I0216 22:47:16.447336 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:16 crc kubenswrapper[4865]: I0216 22:47:16.447408 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:16 crc kubenswrapper[4865]: I0216 22:47:16.447428 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:16 crc kubenswrapper[4865]: I0216 22:47:16.447453 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:16 crc kubenswrapper[4865]: I0216 22:47:16.447472 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:16Z","lastTransitionTime":"2026-02-16T22:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:16 crc kubenswrapper[4865]: I0216 22:47:16.551749 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:16 crc kubenswrapper[4865]: I0216 22:47:16.551807 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:16 crc kubenswrapper[4865]: I0216 22:47:16.551825 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:16 crc kubenswrapper[4865]: I0216 22:47:16.551849 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:16 crc kubenswrapper[4865]: I0216 22:47:16.551870 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:16Z","lastTransitionTime":"2026-02-16T22:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:16 crc kubenswrapper[4865]: I0216 22:47:16.655965 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:16 crc kubenswrapper[4865]: I0216 22:47:16.656051 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:16 crc kubenswrapper[4865]: I0216 22:47:16.656092 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:16 crc kubenswrapper[4865]: I0216 22:47:16.656119 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:16 crc kubenswrapper[4865]: I0216 22:47:16.656135 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:16Z","lastTransitionTime":"2026-02-16T22:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:16 crc kubenswrapper[4865]: I0216 22:47:16.760370 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:16 crc kubenswrapper[4865]: I0216 22:47:16.760440 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:16 crc kubenswrapper[4865]: I0216 22:47:16.760458 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:16 crc kubenswrapper[4865]: I0216 22:47:16.760485 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:16 crc kubenswrapper[4865]: I0216 22:47:16.760510 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:16Z","lastTransitionTime":"2026-02-16T22:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:16 crc kubenswrapper[4865]: I0216 22:47:16.864349 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:16 crc kubenswrapper[4865]: I0216 22:47:16.864419 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:16 crc kubenswrapper[4865]: I0216 22:47:16.864436 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:16 crc kubenswrapper[4865]: I0216 22:47:16.864462 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:16 crc kubenswrapper[4865]: I0216 22:47:16.864480 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:16Z","lastTransitionTime":"2026-02-16T22:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:16 crc kubenswrapper[4865]: I0216 22:47:16.967383 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:16 crc kubenswrapper[4865]: I0216 22:47:16.967454 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:16 crc kubenswrapper[4865]: I0216 22:47:16.967477 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:16 crc kubenswrapper[4865]: I0216 22:47:16.967506 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:16 crc kubenswrapper[4865]: I0216 22:47:16.967530 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:16Z","lastTransitionTime":"2026-02-16T22:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:17 crc kubenswrapper[4865]: I0216 22:47:17.070800 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:17 crc kubenswrapper[4865]: I0216 22:47:17.070904 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:17 crc kubenswrapper[4865]: I0216 22:47:17.070928 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:17 crc kubenswrapper[4865]: I0216 22:47:17.070961 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:17 crc kubenswrapper[4865]: I0216 22:47:17.070984 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:17Z","lastTransitionTime":"2026-02-16T22:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:17 crc kubenswrapper[4865]: I0216 22:47:17.174794 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:17 crc kubenswrapper[4865]: I0216 22:47:17.174878 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:17 crc kubenswrapper[4865]: I0216 22:47:17.174897 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:17 crc kubenswrapper[4865]: I0216 22:47:17.174925 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:17 crc kubenswrapper[4865]: I0216 22:47:17.174955 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:17Z","lastTransitionTime":"2026-02-16T22:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:17 crc kubenswrapper[4865]: I0216 22:47:17.277714 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:17 crc kubenswrapper[4865]: I0216 22:47:17.277775 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:17 crc kubenswrapper[4865]: I0216 22:47:17.277797 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:17 crc kubenswrapper[4865]: I0216 22:47:17.277826 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:17 crc kubenswrapper[4865]: I0216 22:47:17.277848 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:17Z","lastTransitionTime":"2026-02-16T22:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:17 crc kubenswrapper[4865]: I0216 22:47:17.381045 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:17 crc kubenswrapper[4865]: I0216 22:47:17.381118 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:17 crc kubenswrapper[4865]: I0216 22:47:17.381142 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:17 crc kubenswrapper[4865]: I0216 22:47:17.381173 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:17 crc kubenswrapper[4865]: I0216 22:47:17.381195 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:17Z","lastTransitionTime":"2026-02-16T22:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:17 crc kubenswrapper[4865]: I0216 22:47:17.432954 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 23:00:17.190668422 +0000 UTC Feb 16 22:47:17 crc kubenswrapper[4865]: I0216 22:47:17.484628 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:17 crc kubenswrapper[4865]: I0216 22:47:17.484684 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:17 crc kubenswrapper[4865]: I0216 22:47:17.484698 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:17 crc kubenswrapper[4865]: I0216 22:47:17.484716 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:17 crc kubenswrapper[4865]: I0216 22:47:17.484727 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:17Z","lastTransitionTime":"2026-02-16T22:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:17 crc kubenswrapper[4865]: I0216 22:47:17.588805 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:17 crc kubenswrapper[4865]: I0216 22:47:17.588913 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:17 crc kubenswrapper[4865]: I0216 22:47:17.588936 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:17 crc kubenswrapper[4865]: I0216 22:47:17.589005 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:17 crc kubenswrapper[4865]: I0216 22:47:17.589030 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:17Z","lastTransitionTime":"2026-02-16T22:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:17 crc kubenswrapper[4865]: I0216 22:47:17.694075 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:17 crc kubenswrapper[4865]: I0216 22:47:17.694133 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:17 crc kubenswrapper[4865]: I0216 22:47:17.694157 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:17 crc kubenswrapper[4865]: I0216 22:47:17.694191 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:17 crc kubenswrapper[4865]: I0216 22:47:17.694217 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:17Z","lastTransitionTime":"2026-02-16T22:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:17 crc kubenswrapper[4865]: I0216 22:47:17.740323 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:17 crc kubenswrapper[4865]: I0216 22:47:17.740401 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:17 crc kubenswrapper[4865]: I0216 22:47:17.740425 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:17 crc kubenswrapper[4865]: I0216 22:47:17.740461 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:17 crc kubenswrapper[4865]: I0216 22:47:17.740489 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:17Z","lastTransitionTime":"2026-02-16T22:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:17 crc kubenswrapper[4865]: E0216 22:47:17.764946 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:47:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:47:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:47:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:47:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8882e2e1-623c-454f-a6ef-195b25a9cb95\\\",\\\"systemUUID\\\":\\\"e2a2196d-095f-444c-b467-b5377cee59c0\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:17Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:17 crc kubenswrapper[4865]: I0216 22:47:17.769568 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:17 crc kubenswrapper[4865]: I0216 22:47:17.769606 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:17 crc kubenswrapper[4865]: I0216 22:47:17.769631 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:17 crc kubenswrapper[4865]: I0216 22:47:17.769657 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:17 crc kubenswrapper[4865]: I0216 22:47:17.769674 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:17Z","lastTransitionTime":"2026-02-16T22:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:17 crc kubenswrapper[4865]: E0216 22:47:17.784608 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:47:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:47:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:47:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:47:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8882e2e1-623c-454f-a6ef-195b25a9cb95\\\",\\\"systemUUID\\\":\\\"e2a2196d-095f-444c-b467-b5377cee59c0\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:17Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:17 crc kubenswrapper[4865]: I0216 22:47:17.790042 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:17 crc kubenswrapper[4865]: I0216 22:47:17.790136 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:17 crc kubenswrapper[4865]: I0216 22:47:17.790152 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:17 crc kubenswrapper[4865]: I0216 22:47:17.790172 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:17 crc kubenswrapper[4865]: I0216 22:47:17.790185 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:17Z","lastTransitionTime":"2026-02-16T22:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:17 crc kubenswrapper[4865]: E0216 22:47:17.807219 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:47:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:47:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:47:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:47:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8882e2e1-623c-454f-a6ef-195b25a9cb95\\\",\\\"systemUUID\\\":\\\"e2a2196d-095f-444c-b467-b5377cee59c0\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:17Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:17 crc kubenswrapper[4865]: I0216 22:47:17.813108 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:17 crc kubenswrapper[4865]: I0216 22:47:17.813185 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:17 crc kubenswrapper[4865]: I0216 22:47:17.813333 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:17 crc kubenswrapper[4865]: I0216 22:47:17.813367 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:17 crc kubenswrapper[4865]: I0216 22:47:17.813409 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:17Z","lastTransitionTime":"2026-02-16T22:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:17 crc kubenswrapper[4865]: E0216 22:47:17.828746 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:47:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:47:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:47:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:47:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8882e2e1-623c-454f-a6ef-195b25a9cb95\\\",\\\"systemUUID\\\":\\\"e2a2196d-095f-444c-b467-b5377cee59c0\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:17Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:17 crc kubenswrapper[4865]: I0216 22:47:17.833174 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:17 crc kubenswrapper[4865]: I0216 22:47:17.833219 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:17 crc kubenswrapper[4865]: I0216 22:47:17.833233 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:17 crc kubenswrapper[4865]: I0216 22:47:17.833252 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:17 crc kubenswrapper[4865]: I0216 22:47:17.833266 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:17Z","lastTransitionTime":"2026-02-16T22:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:17 crc kubenswrapper[4865]: E0216 22:47:17.848943 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:47:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:47:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:47:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:47:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8882e2e1-623c-454f-a6ef-195b25a9cb95\\\",\\\"systemUUID\\\":\\\"e2a2196d-095f-444c-b467-b5377cee59c0\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:17Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:17 crc kubenswrapper[4865]: E0216 22:47:17.849076 4865 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 22:47:17 crc kubenswrapper[4865]: I0216 22:47:17.851554 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:17 crc kubenswrapper[4865]: I0216 22:47:17.851619 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:17 crc kubenswrapper[4865]: I0216 22:47:17.851642 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:17 crc kubenswrapper[4865]: I0216 22:47:17.851672 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:17 crc kubenswrapper[4865]: I0216 22:47:17.851725 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:17Z","lastTransitionTime":"2026-02-16T22:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:47:17 crc kubenswrapper[4865]: I0216 22:47:17.954459 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:17 crc kubenswrapper[4865]: I0216 22:47:17.954959 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:17 crc kubenswrapper[4865]: I0216 22:47:17.954976 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:17 crc kubenswrapper[4865]: I0216 22:47:17.955004 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:17 crc kubenswrapper[4865]: I0216 22:47:17.955022 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:17Z","lastTransitionTime":"2026-02-16T22:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:18 crc kubenswrapper[4865]: I0216 22:47:18.058007 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:18 crc kubenswrapper[4865]: I0216 22:47:18.058041 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:18 crc kubenswrapper[4865]: I0216 22:47:18.058049 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:18 crc kubenswrapper[4865]: I0216 22:47:18.058061 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:18 crc kubenswrapper[4865]: I0216 22:47:18.058070 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:18Z","lastTransitionTime":"2026-02-16T22:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:18 crc kubenswrapper[4865]: I0216 22:47:18.161338 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:18 crc kubenswrapper[4865]: I0216 22:47:18.161431 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:18 crc kubenswrapper[4865]: I0216 22:47:18.161445 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:18 crc kubenswrapper[4865]: I0216 22:47:18.161467 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:18 crc kubenswrapper[4865]: I0216 22:47:18.161802 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:18Z","lastTransitionTime":"2026-02-16T22:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:18 crc kubenswrapper[4865]: I0216 22:47:18.265173 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:18 crc kubenswrapper[4865]: I0216 22:47:18.265240 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:18 crc kubenswrapper[4865]: I0216 22:47:18.265258 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:18 crc kubenswrapper[4865]: I0216 22:47:18.265320 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:18 crc kubenswrapper[4865]: I0216 22:47:18.265344 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:18Z","lastTransitionTime":"2026-02-16T22:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:18 crc kubenswrapper[4865]: I0216 22:47:18.368982 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:18 crc kubenswrapper[4865]: I0216 22:47:18.369051 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:18 crc kubenswrapper[4865]: I0216 22:47:18.369069 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:18 crc kubenswrapper[4865]: I0216 22:47:18.369158 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:18 crc kubenswrapper[4865]: I0216 22:47:18.369191 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:18Z","lastTransitionTime":"2026-02-16T22:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:47:18 crc kubenswrapper[4865]: I0216 22:47:18.414667 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 22:47:18 crc kubenswrapper[4865]: I0216 22:47:18.414756 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ggbcr" Feb 16 22:47:18 crc kubenswrapper[4865]: I0216 22:47:18.414859 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 22:47:18 crc kubenswrapper[4865]: I0216 22:47:18.414667 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 22:47:18 crc kubenswrapper[4865]: E0216 22:47:18.414935 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 22:47:18 crc kubenswrapper[4865]: E0216 22:47:18.415150 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 22:47:18 crc kubenswrapper[4865]: E0216 22:47:18.415318 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ggbcr" podUID="0e0ca52e-7cb6-4d90-8d0b-4124cce13447" Feb 16 22:47:18 crc kubenswrapper[4865]: E0216 22:47:18.415480 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 22:47:18 crc kubenswrapper[4865]: I0216 22:47:18.434049 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 05:51:21.289532527 +0000 UTC Feb 16 22:47:18 crc kubenswrapper[4865]: I0216 22:47:18.471994 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:18 crc kubenswrapper[4865]: I0216 22:47:18.472046 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:18 crc kubenswrapper[4865]: I0216 22:47:18.472057 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:18 crc kubenswrapper[4865]: I0216 22:47:18.472087 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:18 crc kubenswrapper[4865]: I0216 22:47:18.472101 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:18Z","lastTransitionTime":"2026-02-16T22:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:18 crc kubenswrapper[4865]: I0216 22:47:18.575335 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:18 crc kubenswrapper[4865]: I0216 22:47:18.575440 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:18 crc kubenswrapper[4865]: I0216 22:47:18.575462 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:18 crc kubenswrapper[4865]: I0216 22:47:18.575490 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:18 crc kubenswrapper[4865]: I0216 22:47:18.575509 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:18Z","lastTransitionTime":"2026-02-16T22:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:18 crc kubenswrapper[4865]: I0216 22:47:18.678702 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:18 crc kubenswrapper[4865]: I0216 22:47:18.678742 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:18 crc kubenswrapper[4865]: I0216 22:47:18.678753 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:18 crc kubenswrapper[4865]: I0216 22:47:18.678771 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:18 crc kubenswrapper[4865]: I0216 22:47:18.678781 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:18Z","lastTransitionTime":"2026-02-16T22:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:18 crc kubenswrapper[4865]: I0216 22:47:18.781266 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:18 crc kubenswrapper[4865]: I0216 22:47:18.781398 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:18 crc kubenswrapper[4865]: I0216 22:47:18.781422 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:18 crc kubenswrapper[4865]: I0216 22:47:18.781455 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:18 crc kubenswrapper[4865]: I0216 22:47:18.781476 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:18Z","lastTransitionTime":"2026-02-16T22:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:18 crc kubenswrapper[4865]: I0216 22:47:18.884239 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:18 crc kubenswrapper[4865]: I0216 22:47:18.884326 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:18 crc kubenswrapper[4865]: I0216 22:47:18.884344 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:18 crc kubenswrapper[4865]: I0216 22:47:18.884372 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:18 crc kubenswrapper[4865]: I0216 22:47:18.884391 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:18Z","lastTransitionTime":"2026-02-16T22:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:18 crc kubenswrapper[4865]: I0216 22:47:18.988314 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:18 crc kubenswrapper[4865]: I0216 22:47:18.988361 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:18 crc kubenswrapper[4865]: I0216 22:47:18.988379 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:18 crc kubenswrapper[4865]: I0216 22:47:18.988405 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:18 crc kubenswrapper[4865]: I0216 22:47:18.988423 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:18Z","lastTransitionTime":"2026-02-16T22:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:19 crc kubenswrapper[4865]: I0216 22:47:19.090930 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:19 crc kubenswrapper[4865]: I0216 22:47:19.090961 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:19 crc kubenswrapper[4865]: I0216 22:47:19.090971 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:19 crc kubenswrapper[4865]: I0216 22:47:19.090985 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:19 crc kubenswrapper[4865]: I0216 22:47:19.090995 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:19Z","lastTransitionTime":"2026-02-16T22:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:19 crc kubenswrapper[4865]: I0216 22:47:19.193618 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:19 crc kubenswrapper[4865]: I0216 22:47:19.193680 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:19 crc kubenswrapper[4865]: I0216 22:47:19.193697 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:19 crc kubenswrapper[4865]: I0216 22:47:19.193723 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:19 crc kubenswrapper[4865]: I0216 22:47:19.193740 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:19Z","lastTransitionTime":"2026-02-16T22:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:19 crc kubenswrapper[4865]: I0216 22:47:19.296461 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:19 crc kubenswrapper[4865]: I0216 22:47:19.296526 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:19 crc kubenswrapper[4865]: I0216 22:47:19.296548 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:19 crc kubenswrapper[4865]: I0216 22:47:19.296581 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:19 crc kubenswrapper[4865]: I0216 22:47:19.296601 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:19Z","lastTransitionTime":"2026-02-16T22:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:19 crc kubenswrapper[4865]: I0216 22:47:19.399976 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:19 crc kubenswrapper[4865]: I0216 22:47:19.400033 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:19 crc kubenswrapper[4865]: I0216 22:47:19.400078 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:19 crc kubenswrapper[4865]: I0216 22:47:19.400098 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:19 crc kubenswrapper[4865]: I0216 22:47:19.400110 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:19Z","lastTransitionTime":"2026-02-16T22:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:19 crc kubenswrapper[4865]: I0216 22:47:19.435149 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 22:41:05.841090896 +0000 UTC Feb 16 22:47:19 crc kubenswrapper[4865]: I0216 22:47:19.502839 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:19 crc kubenswrapper[4865]: I0216 22:47:19.502888 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:19 crc kubenswrapper[4865]: I0216 22:47:19.502939 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:19 crc kubenswrapper[4865]: I0216 22:47:19.502965 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:19 crc kubenswrapper[4865]: I0216 22:47:19.502983 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:19Z","lastTransitionTime":"2026-02-16T22:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:19 crc kubenswrapper[4865]: I0216 22:47:19.606631 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:19 crc kubenswrapper[4865]: I0216 22:47:19.606675 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:19 crc kubenswrapper[4865]: I0216 22:47:19.606689 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:19 crc kubenswrapper[4865]: I0216 22:47:19.606714 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:19 crc kubenswrapper[4865]: I0216 22:47:19.606730 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:19Z","lastTransitionTime":"2026-02-16T22:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:19 crc kubenswrapper[4865]: I0216 22:47:19.709156 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:19 crc kubenswrapper[4865]: I0216 22:47:19.709223 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:19 crc kubenswrapper[4865]: I0216 22:47:19.709246 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:19 crc kubenswrapper[4865]: I0216 22:47:19.709305 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:19 crc kubenswrapper[4865]: I0216 22:47:19.709329 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:19Z","lastTransitionTime":"2026-02-16T22:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:19 crc kubenswrapper[4865]: I0216 22:47:19.811830 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:19 crc kubenswrapper[4865]: I0216 22:47:19.811865 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:19 crc kubenswrapper[4865]: I0216 22:47:19.811877 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:19 crc kubenswrapper[4865]: I0216 22:47:19.811894 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:19 crc kubenswrapper[4865]: I0216 22:47:19.811905 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:19Z","lastTransitionTime":"2026-02-16T22:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:19 crc kubenswrapper[4865]: I0216 22:47:19.914669 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:19 crc kubenswrapper[4865]: I0216 22:47:19.914778 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:19 crc kubenswrapper[4865]: I0216 22:47:19.914802 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:19 crc kubenswrapper[4865]: I0216 22:47:19.914833 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:19 crc kubenswrapper[4865]: I0216 22:47:19.914852 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:19Z","lastTransitionTime":"2026-02-16T22:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:20 crc kubenswrapper[4865]: I0216 22:47:20.017868 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:20 crc kubenswrapper[4865]: I0216 22:47:20.017957 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:20 crc kubenswrapper[4865]: I0216 22:47:20.017985 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:20 crc kubenswrapper[4865]: I0216 22:47:20.018014 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:20 crc kubenswrapper[4865]: I0216 22:47:20.018036 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:20Z","lastTransitionTime":"2026-02-16T22:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:20 crc kubenswrapper[4865]: I0216 22:47:20.122216 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:20 crc kubenswrapper[4865]: I0216 22:47:20.122302 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:20 crc kubenswrapper[4865]: I0216 22:47:20.122320 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:20 crc kubenswrapper[4865]: I0216 22:47:20.122352 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:20 crc kubenswrapper[4865]: I0216 22:47:20.122370 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:20Z","lastTransitionTime":"2026-02-16T22:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:20 crc kubenswrapper[4865]: I0216 22:47:20.225086 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:20 crc kubenswrapper[4865]: I0216 22:47:20.225147 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:20 crc kubenswrapper[4865]: I0216 22:47:20.225165 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:20 crc kubenswrapper[4865]: I0216 22:47:20.225197 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:20 crc kubenswrapper[4865]: I0216 22:47:20.225214 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:20Z","lastTransitionTime":"2026-02-16T22:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:20 crc kubenswrapper[4865]: I0216 22:47:20.328957 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:20 crc kubenswrapper[4865]: I0216 22:47:20.329030 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:20 crc kubenswrapper[4865]: I0216 22:47:20.329048 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:20 crc kubenswrapper[4865]: I0216 22:47:20.329075 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:20 crc kubenswrapper[4865]: I0216 22:47:20.329094 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:20Z","lastTransitionTime":"2026-02-16T22:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:47:20 crc kubenswrapper[4865]: I0216 22:47:20.413794 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 22:47:20 crc kubenswrapper[4865]: I0216 22:47:20.414545 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 22:47:20 crc kubenswrapper[4865]: E0216 22:47:20.414869 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 22:47:20 crc kubenswrapper[4865]: I0216 22:47:20.414992 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ggbcr" Feb 16 22:47:20 crc kubenswrapper[4865]: I0216 22:47:20.415040 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 22:47:20 crc kubenswrapper[4865]: E0216 22:47:20.415247 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 22:47:20 crc kubenswrapper[4865]: E0216 22:47:20.416728 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 22:47:20 crc kubenswrapper[4865]: E0216 22:47:20.416881 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ggbcr" podUID="0e0ca52e-7cb6-4d90-8d0b-4124cce13447" Feb 16 22:47:20 crc kubenswrapper[4865]: I0216 22:47:20.434923 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af5ee041-5763-4a28-9d12-7ba21bbb9dbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5377362d269a2fbd24e93ddf0ce2803daadff72dd5b6926424e2d818448eb907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/ku
be-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c0be88c4425b343893860e323d0e9c1a661e3cd0cadad6714da67ebadf57bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7sl6f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:20Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:20 crc kubenswrapper[4865]: I0216 22:47:20.435256 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 16 22:47:20 crc kubenswrapper[4865]: I0216 
22:47:20.435371 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 10:40:09.445732289 +0000 UTC Feb 16 22:47:20 crc kubenswrapper[4865]: I0216 22:47:20.436232 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:20 crc kubenswrapper[4865]: I0216 22:47:20.436805 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:20 crc kubenswrapper[4865]: I0216 22:47:20.437401 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:20 crc kubenswrapper[4865]: I0216 22:47:20.437786 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:20 crc kubenswrapper[4865]: I0216 22:47:20.437946 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:20Z","lastTransitionTime":"2026-02-16T22:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:20 crc kubenswrapper[4865]: I0216 22:47:20.461169 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4rgl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0091174cc072ed0bc0bfa1a1501b4fa2b801585525dae159b591213c74ddbf5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240fca5a6169674d04d04924dbb1859d9cfad4ebaf3bcb15b685d1dcb52d6879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://240fca5a6169674d04d04924dbb1859d9cfad4ebaf3bcb15b685d1dcb52d6879\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba2a6ff3b821f9bd20c15af6000c4b04b7382cca4bddba49c3cd3a95bf18661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://5ba2a6ff3b821f9bd20c15af6000c4b04b7382cca4bddba49c3cd3a95bf18661\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e050e591eac20bf9ccc57f3c4eac82e3a420e9e850ad4057ce69c236a51dd5ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e050e591eac20bf9ccc57f3c4eac82e3a420e9e850ad4057ce69c236a51dd5ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df6fbcf30225717389e253f415a98dcb258103cf591c6802daeb657018386ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1df6fbcf30225717389e253f415a98dcb258103cf591c6802daeb657018386ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9e249b0c4127dfa42d0aeb4663cec7ecc73cec31a922fc17a0c772cce72c017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9e249b0c4127dfa42d0aeb4663cec7ecc73cec31a922fc17a0c772cce72c017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de26abfeb26481d43117471fa3512ef69ab782712371782639c24c8232786478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de26abfeb26481d43117471fa3512ef69ab782712371782639c24c8232786478\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4rgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:20Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:20 crc kubenswrapper[4865]: I0216 22:47:20.481118 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67de42bb03576bab461ec24db86f0fb38229d2b303d174c1892468bcfd20a1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:20Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:20 crc kubenswrapper[4865]: I0216 22:47:20.503436 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8eb70ba-80dc-4449-b629-8f2d3462ac25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9420ab4fc2cfb723f98690b12676378560f2ca512ea2f23ea4108561bcf0a83b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d5f3b449d7f497dd16213f7a376d725ed75adaeb27449fa2005893b85cd8f9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ddcd1ac2d814341fef78f1d3dcb8d8e04d5e8506ca793c023085f0e019fc44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1932a518fb9c7c514939c9efdb9f5ef9b52fa277bf27931a2fe29f04c486141\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e74a443b63735422bd0f5ad851213f393074c4bda570e41ff102c84aa9ac2929\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T22:46:20Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0216 22:46:20.218565 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 22:46:20.218864 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 22:46:20.220590 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3068580495/tls.crt::/tmp/serving-cert-3068580495/tls.key\\\\\\\"\\\\nI0216 22:46:20.677081 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 22:46:20.691270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 22:46:20.691373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 22:46:20.698439 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 22:46:20.698472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 22:46:20.713488 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 22:46:20.713510 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 22:46:20.713535 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 22:46:20.713543 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 22:46:20.713547 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 22:46:20.713551 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 22:46:20.713555 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 22:46:20.713558 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 22:46:20.715166 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://083e95882332fbcae3b7e3c07a19c05645e560b7c5e92813464a5ca4e5e52a26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e53d3f26f921be607e616b7c8790e3be0605a2b94a3f7be2b62e1beaefb350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3e53d3f26f921be607e616b7c8790e3be0
605a2b94a3f7be2b62e1beaefb350\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:20Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:20 crc kubenswrapper[4865]: I0216 22:47:20.524722 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17957dad-fc4e-47f6-9008-142bf7ab12b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://416e6ac61265959d9c4895987503c390040a6d4ecdf12c0530ded6fc721bfaa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be54b2a9b6551641dce2e8e886f04d8a9cf76399a46d6ba0533bfcf766453631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1757b170055a16e65276f524bda421fa929e62866ebe19163d6558e551f6d7ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3488da6d63933ae020077a4fb277b53e4e85bd75cac43d6c64bb962472a832f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://a3488da6d63933ae020077a4fb277b53e4e85bd75cac43d6c64bb962472a832f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:20Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:20 crc kubenswrapper[4865]: I0216 22:47:20.541185 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:20 crc kubenswrapper[4865]: I0216 22:47:20.541257 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:20 crc kubenswrapper[4865]: I0216 22:47:20.541314 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:20 crc kubenswrapper[4865]: I0216 22:47:20.541352 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:20 crc kubenswrapper[4865]: I0216 22:47:20.541377 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:20Z","lastTransitionTime":"2026-02-16T22:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:20 crc kubenswrapper[4865]: I0216 22:47:20.546274 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:20Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:20 crc kubenswrapper[4865]: I0216 22:47:20.561876 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rhf5k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec3dce5a-cc80-4a4a-afa1-1ce88585fe7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af23c5919f064a067e1f6ffd9bcfc669c95f789fecc58b6de768c838fff8eeb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42pjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rhf5k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:20Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:20 crc kubenswrapper[4865]: I0216 22:47:20.592238 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f763f86f21e0ae26d6da8694383bf65bf9d2ee2cc3f6331559455f18c1a5ea12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdf0a2e5286d172d17780b0ba829a7b1365b3fcb130069d7497b207041e54d67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1d5f61857de97cd302040f99ba0cf8f6aa30fa344ee9bb5d486222bb59280f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48168de44ee347280252a85da7abba068922ce7558bd01c8e1d11af13f305dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fbb61043059110013736d3d693e6d602b199a4c62907436a4eb88aae192cd51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://decf362b373d3e8cc82caf2afe0405a61ec72f794680345ee6bd4d7acee32cd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be9325937ac87ec492489ef34007a1e234ff7037a0f05d40bd102a85a86cc3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be9325937ac87ec492489ef34007a1e234ff7037a0f05d40bd102a85a86cc3c4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T22:47:12Z\\\",\\\"message\\\":\\\"sip/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 22:47:12.285919 6904 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 22:47:12.285957 6904 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 22:47:12.285976 6904 handler.go:190] Sending *v1.EgressIP event handler 8 for 
removal\\\\nI0216 22:47:12.286002 6904 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 22:47:12.286006 6904 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 22:47:12.286039 6904 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 22:47:12.286050 6904 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 22:47:12.286058 6904 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 22:47:12.286048 6904 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 22:47:12.286081 6904 factory.go:656] Stopping watch factory\\\\nI0216 22:47:12.286112 6904 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 22:47:12.286147 6904 ovnkube.go:599] Stopped ovnkube\\\\nI0216 22:47:12.286106 6904 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 22:47:12.286077 6904 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 22:47:12.286096 6904 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI02\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T22:47:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-v9gjl_openshift-ovn-kubernetes(2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dee85f35d78a0b30018c179a53dd39cbb1b0edfa9897290cd13fdc9680461239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fffded71eee2859beb8f01894e183602492f5c41373ef0e23ee150159d01ae4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fffded71eee2859be
b8f01894e183602492f5c41373ef0e23ee150159d01ae4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v9gjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:20Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:20 crc kubenswrapper[4865]: I0216 22:47:20.612897 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://580bc5f5c63d040c31c80d1c34c70711c035b80d8bd34546ba77e2477a938304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:20Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:20 crc kubenswrapper[4865]: I0216 22:47:20.630714 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:20Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:20 crc kubenswrapper[4865]: I0216 22:47:20.649324 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:20 crc kubenswrapper[4865]: I0216 22:47:20.649366 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:20 crc kubenswrapper[4865]: I0216 22:47:20.649385 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:20 crc kubenswrapper[4865]: I0216 22:47:20.649407 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:20 crc kubenswrapper[4865]: I0216 22:47:20.649420 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:20Z","lastTransitionTime":"2026-02-16T22:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:47:20 crc kubenswrapper[4865]: I0216 22:47:20.651507 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:20Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:20 crc kubenswrapper[4865]: I0216 22:47:20.669733 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tqmsq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"518e6107-6873-4bd2-86a6-e422763483ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74e1013d02cb01d60662d6619c25c7a40de7bd7bde7f0239007c54dba55879cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c555b648518de93a1b9ecb1bb8b232aee9016dd148db53b2b4aaf96dd23951c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T22:47:08Z\\\",\\\"message\\\":\\\"2026-02-16T22:46:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_69f8d7f9-9d98-4d01-a8d5-29073da708ad\\\\n2026-02-16T22:46:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_69f8d7f9-9d98-4d01-a8d5-29073da708ad to /host/opt/cni/bin/\\\\n2026-02-16T22:46:23Z [verbose] multus-daemon started\\\\n2026-02-16T22:46:23Z [verbose] 
Readiness Indicator file check\\\\n2026-02-16T22:47:08Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwt79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tqmsq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:20Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:20 crc kubenswrapper[4865]: I0216 22:47:20.687600 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-shf2n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d979a250-e586-4f45-b78e-ce99dbdbe9a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c13351b5aadf20e7142d197d9ae8333bff29ca6c3f7668c93f0dd510b52eb51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df29cc453707021c661a9f53e979fcab4c05a
00e4dcef9f85a727168f1e78ed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-shf2n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:20Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:20 crc kubenswrapper[4865]: I0216 22:47:20.702499 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ggbcr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e0ca52e-7cb6-4d90-8d0b-4124cce13447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt54\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt54\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ggbcr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:20Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:20 crc 
kubenswrapper[4865]: I0216 22:47:20.720224 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5d9c6929ae494c038ff488c2fee47aad62d31a677069fab576505bc1c56c3ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd10e8581ae8a5ec147d964363cf296c1ea48987e27edfaaf23a4f2cc3b8806c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:20Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:20 crc kubenswrapper[4865]: I0216 22:47:20.739906 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p2bwl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fc94743-41ce-4311-b0a7-d24aec69e9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afb2eef5907b2fb1f43b59f3b8afb68253d49f3a83f0cdc5cd10df0e45962156\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mntqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p2bwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:20Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:20 crc kubenswrapper[4865]: I0216 22:47:20.752246 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:20 crc kubenswrapper[4865]: I0216 22:47:20.752291 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:20 crc kubenswrapper[4865]: I0216 22:47:20.752305 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:20 crc kubenswrapper[4865]: I0216 22:47:20.752321 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:20 crc kubenswrapper[4865]: I0216 22:47:20.752331 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:20Z","lastTransitionTime":"2026-02-16T22:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:20 crc kubenswrapper[4865]: I0216 22:47:20.756476 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5ed4f9f-4e35-4b99-a523-d9cd6f90b1fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c3c8e1511bfaab96c1e33d3e1d1dc227344cb92c64170dc40eef40341f9282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef4fbb630ee
354131544dade7633424757a1a1b885fec2e271da25984dbec3ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f74aeafd4eb42c1fa802e03dd3df19fc25c3e5546e453008c167955d336ca2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca218b91fbbe3033c021358958b45efb93f3c1875a2d18446d682ad80ff55122\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:20Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:20 crc kubenswrapper[4865]: I0216 22:47:20.856473 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:20 crc kubenswrapper[4865]: I0216 22:47:20.856542 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:20 crc kubenswrapper[4865]: I0216 22:47:20.856560 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:20 crc kubenswrapper[4865]: I0216 22:47:20.856589 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:20 crc kubenswrapper[4865]: I0216 22:47:20.856611 4865 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:20Z","lastTransitionTime":"2026-02-16T22:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:47:20 crc kubenswrapper[4865]: I0216 22:47:20.959688 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:20 crc kubenswrapper[4865]: I0216 22:47:20.959746 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:20 crc kubenswrapper[4865]: I0216 22:47:20.959765 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:20 crc kubenswrapper[4865]: I0216 22:47:20.959793 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:20 crc kubenswrapper[4865]: I0216 22:47:20.959813 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:20Z","lastTransitionTime":"2026-02-16T22:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:21 crc kubenswrapper[4865]: I0216 22:47:21.062410 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:21 crc kubenswrapper[4865]: I0216 22:47:21.062466 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:21 crc kubenswrapper[4865]: I0216 22:47:21.062483 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:21 crc kubenswrapper[4865]: I0216 22:47:21.062507 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:21 crc kubenswrapper[4865]: I0216 22:47:21.062524 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:21Z","lastTransitionTime":"2026-02-16T22:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:21 crc kubenswrapper[4865]: I0216 22:47:21.165976 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:21 crc kubenswrapper[4865]: I0216 22:47:21.166046 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:21 crc kubenswrapper[4865]: I0216 22:47:21.166070 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:21 crc kubenswrapper[4865]: I0216 22:47:21.166101 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:21 crc kubenswrapper[4865]: I0216 22:47:21.166155 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:21Z","lastTransitionTime":"2026-02-16T22:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:21 crc kubenswrapper[4865]: I0216 22:47:21.269364 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:21 crc kubenswrapper[4865]: I0216 22:47:21.269436 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:21 crc kubenswrapper[4865]: I0216 22:47:21.269454 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:21 crc kubenswrapper[4865]: I0216 22:47:21.269481 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:21 crc kubenswrapper[4865]: I0216 22:47:21.269499 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:21Z","lastTransitionTime":"2026-02-16T22:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:21 crc kubenswrapper[4865]: I0216 22:47:21.372878 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:21 crc kubenswrapper[4865]: I0216 22:47:21.372951 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:21 crc kubenswrapper[4865]: I0216 22:47:21.372967 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:21 crc kubenswrapper[4865]: I0216 22:47:21.372995 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:21 crc kubenswrapper[4865]: I0216 22:47:21.373017 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:21Z","lastTransitionTime":"2026-02-16T22:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:21 crc kubenswrapper[4865]: I0216 22:47:21.435903 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 10:19:12.921801783 +0000 UTC Feb 16 22:47:21 crc kubenswrapper[4865]: I0216 22:47:21.476347 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:21 crc kubenswrapper[4865]: I0216 22:47:21.476412 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:21 crc kubenswrapper[4865]: I0216 22:47:21.476430 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:21 crc kubenswrapper[4865]: I0216 22:47:21.476457 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:21 crc kubenswrapper[4865]: I0216 22:47:21.476479 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:21Z","lastTransitionTime":"2026-02-16T22:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:21 crc kubenswrapper[4865]: I0216 22:47:21.579984 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:21 crc kubenswrapper[4865]: I0216 22:47:21.580046 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:21 crc kubenswrapper[4865]: I0216 22:47:21.580065 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:21 crc kubenswrapper[4865]: I0216 22:47:21.580093 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:21 crc kubenswrapper[4865]: I0216 22:47:21.580111 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:21Z","lastTransitionTime":"2026-02-16T22:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:21 crc kubenswrapper[4865]: I0216 22:47:21.683187 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:21 crc kubenswrapper[4865]: I0216 22:47:21.683243 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:21 crc kubenswrapper[4865]: I0216 22:47:21.683259 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:21 crc kubenswrapper[4865]: I0216 22:47:21.683321 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:21 crc kubenswrapper[4865]: I0216 22:47:21.683360 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:21Z","lastTransitionTime":"2026-02-16T22:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:21 crc kubenswrapper[4865]: I0216 22:47:21.786499 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:21 crc kubenswrapper[4865]: I0216 22:47:21.786553 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:21 crc kubenswrapper[4865]: I0216 22:47:21.786572 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:21 crc kubenswrapper[4865]: I0216 22:47:21.786597 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:21 crc kubenswrapper[4865]: I0216 22:47:21.786620 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:21Z","lastTransitionTime":"2026-02-16T22:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:21 crc kubenswrapper[4865]: I0216 22:47:21.889747 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:21 crc kubenswrapper[4865]: I0216 22:47:21.889830 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:21 crc kubenswrapper[4865]: I0216 22:47:21.889850 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:21 crc kubenswrapper[4865]: I0216 22:47:21.889879 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:21 crc kubenswrapper[4865]: I0216 22:47:21.889900 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:21Z","lastTransitionTime":"2026-02-16T22:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:21 crc kubenswrapper[4865]: I0216 22:47:21.992641 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:21 crc kubenswrapper[4865]: I0216 22:47:21.992715 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:21 crc kubenswrapper[4865]: I0216 22:47:21.992738 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:21 crc kubenswrapper[4865]: I0216 22:47:21.992769 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:21 crc kubenswrapper[4865]: I0216 22:47:21.992792 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:21Z","lastTransitionTime":"2026-02-16T22:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:22 crc kubenswrapper[4865]: I0216 22:47:22.096011 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:22 crc kubenswrapper[4865]: I0216 22:47:22.096097 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:22 crc kubenswrapper[4865]: I0216 22:47:22.096121 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:22 crc kubenswrapper[4865]: I0216 22:47:22.096150 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:22 crc kubenswrapper[4865]: I0216 22:47:22.096170 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:22Z","lastTransitionTime":"2026-02-16T22:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:22 crc kubenswrapper[4865]: I0216 22:47:22.199205 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:22 crc kubenswrapper[4865]: I0216 22:47:22.199317 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:22 crc kubenswrapper[4865]: I0216 22:47:22.199347 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:22 crc kubenswrapper[4865]: I0216 22:47:22.199380 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:22 crc kubenswrapper[4865]: I0216 22:47:22.199402 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:22Z","lastTransitionTime":"2026-02-16T22:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:22 crc kubenswrapper[4865]: I0216 22:47:22.303105 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:22 crc kubenswrapper[4865]: I0216 22:47:22.303170 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:22 crc kubenswrapper[4865]: I0216 22:47:22.303188 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:22 crc kubenswrapper[4865]: I0216 22:47:22.303215 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:22 crc kubenswrapper[4865]: I0216 22:47:22.303232 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:22Z","lastTransitionTime":"2026-02-16T22:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:22 crc kubenswrapper[4865]: I0216 22:47:22.406851 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:22 crc kubenswrapper[4865]: I0216 22:47:22.406927 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:22 crc kubenswrapper[4865]: I0216 22:47:22.406951 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:22 crc kubenswrapper[4865]: I0216 22:47:22.406982 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:22 crc kubenswrapper[4865]: I0216 22:47:22.407005 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:22Z","lastTransitionTime":"2026-02-16T22:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:47:22 crc kubenswrapper[4865]: I0216 22:47:22.414251 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 22:47:22 crc kubenswrapper[4865]: I0216 22:47:22.414273 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 22:47:22 crc kubenswrapper[4865]: E0216 22:47:22.414488 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 22:47:22 crc kubenswrapper[4865]: I0216 22:47:22.414537 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 22:47:22 crc kubenswrapper[4865]: I0216 22:47:22.414504 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ggbcr" Feb 16 22:47:22 crc kubenswrapper[4865]: E0216 22:47:22.414550 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 22:47:22 crc kubenswrapper[4865]: E0216 22:47:22.414760 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 22:47:22 crc kubenswrapper[4865]: E0216 22:47:22.414886 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ggbcr" podUID="0e0ca52e-7cb6-4d90-8d0b-4124cce13447" Feb 16 22:47:22 crc kubenswrapper[4865]: I0216 22:47:22.437144 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 05:25:12.289057364 +0000 UTC Feb 16 22:47:22 crc kubenswrapper[4865]: I0216 22:47:22.512183 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:22 crc kubenswrapper[4865]: I0216 22:47:22.512246 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:22 crc kubenswrapper[4865]: I0216 22:47:22.512267 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:22 crc kubenswrapper[4865]: I0216 22:47:22.512336 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:22 crc kubenswrapper[4865]: I0216 22:47:22.512358 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:22Z","lastTransitionTime":"2026-02-16T22:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:22 crc kubenswrapper[4865]: I0216 22:47:22.616078 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:22 crc kubenswrapper[4865]: I0216 22:47:22.616141 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:22 crc kubenswrapper[4865]: I0216 22:47:22.616159 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:22 crc kubenswrapper[4865]: I0216 22:47:22.616186 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:22 crc kubenswrapper[4865]: I0216 22:47:22.616216 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:22Z","lastTransitionTime":"2026-02-16T22:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:22 crc kubenswrapper[4865]: I0216 22:47:22.721140 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:22 crc kubenswrapper[4865]: I0216 22:47:22.721208 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:22 crc kubenswrapper[4865]: I0216 22:47:22.721227 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:22 crc kubenswrapper[4865]: I0216 22:47:22.721255 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:22 crc kubenswrapper[4865]: I0216 22:47:22.721300 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:22Z","lastTransitionTime":"2026-02-16T22:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:22 crc kubenswrapper[4865]: I0216 22:47:22.824846 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:22 crc kubenswrapper[4865]: I0216 22:47:22.824907 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:22 crc kubenswrapper[4865]: I0216 22:47:22.824923 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:22 crc kubenswrapper[4865]: I0216 22:47:22.824947 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:22 crc kubenswrapper[4865]: I0216 22:47:22.824965 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:22Z","lastTransitionTime":"2026-02-16T22:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:22 crc kubenswrapper[4865]: I0216 22:47:22.927299 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:22 crc kubenswrapper[4865]: I0216 22:47:22.927369 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:22 crc kubenswrapper[4865]: I0216 22:47:22.927390 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:22 crc kubenswrapper[4865]: I0216 22:47:22.927412 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:22 crc kubenswrapper[4865]: I0216 22:47:22.927424 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:22Z","lastTransitionTime":"2026-02-16T22:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:23 crc kubenswrapper[4865]: I0216 22:47:23.030891 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:23 crc kubenswrapper[4865]: I0216 22:47:23.030965 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:23 crc kubenswrapper[4865]: I0216 22:47:23.030976 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:23 crc kubenswrapper[4865]: I0216 22:47:23.031017 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:23 crc kubenswrapper[4865]: I0216 22:47:23.031035 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:23Z","lastTransitionTime":"2026-02-16T22:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:23 crc kubenswrapper[4865]: I0216 22:47:23.133726 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:23 crc kubenswrapper[4865]: I0216 22:47:23.133811 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:23 crc kubenswrapper[4865]: I0216 22:47:23.133836 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:23 crc kubenswrapper[4865]: I0216 22:47:23.133869 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:23 crc kubenswrapper[4865]: I0216 22:47:23.133894 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:23Z","lastTransitionTime":"2026-02-16T22:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:23 crc kubenswrapper[4865]: I0216 22:47:23.236360 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:23 crc kubenswrapper[4865]: I0216 22:47:23.236403 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:23 crc kubenswrapper[4865]: I0216 22:47:23.236416 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:23 crc kubenswrapper[4865]: I0216 22:47:23.236434 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:23 crc kubenswrapper[4865]: I0216 22:47:23.236446 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:23Z","lastTransitionTime":"2026-02-16T22:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:23 crc kubenswrapper[4865]: I0216 22:47:23.339582 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:23 crc kubenswrapper[4865]: I0216 22:47:23.339631 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:23 crc kubenswrapper[4865]: I0216 22:47:23.339640 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:23 crc kubenswrapper[4865]: I0216 22:47:23.339657 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:23 crc kubenswrapper[4865]: I0216 22:47:23.339667 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:23Z","lastTransitionTime":"2026-02-16T22:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:23 crc kubenswrapper[4865]: I0216 22:47:23.437802 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 12:45:48.660835038 +0000 UTC Feb 16 22:47:23 crc kubenswrapper[4865]: I0216 22:47:23.442565 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:23 crc kubenswrapper[4865]: I0216 22:47:23.442633 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:23 crc kubenswrapper[4865]: I0216 22:47:23.442654 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:23 crc kubenswrapper[4865]: I0216 22:47:23.442681 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:23 crc kubenswrapper[4865]: I0216 22:47:23.442699 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:23Z","lastTransitionTime":"2026-02-16T22:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 16 22:47:24 crc kubenswrapper[4865]: I0216 22:47:24.282832 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 22:47:24 crc kubenswrapper[4865]: I0216 22:47:24.282974 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 22:47:24 crc kubenswrapper[4865]: I0216 22:47:24.283005 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 22:47:24 crc kubenswrapper[4865]: E0216 22:47:24.283044 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 22:48:28.283011839 +0000 UTC m=+148.606718820 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 22:47:24 crc kubenswrapper[4865]: E0216 22:47:24.283097 4865 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Feb 16 22:47:24 crc kubenswrapper[4865]: E0216 22:47:24.283161 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 22:48:28.283144313 +0000 UTC m=+148.606851374 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Feb 16 22:47:24 crc kubenswrapper[4865]: E0216 22:47:24.283185 4865 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 16 22:47:24 crc kubenswrapper[4865]: E0216 22:47:24.283249 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 22:48:28.283236286 +0000 UTC m=+148.606943367 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 16 22:47:24 crc kubenswrapper[4865]: I0216 22:47:24.370136 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 22:47:24 crc kubenswrapper[4865]: I0216 22:47:24.370175 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 22:47:24 crc kubenswrapper[4865]: I0216 22:47:24.370184 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 22:47:24 crc kubenswrapper[4865]: I0216 22:47:24.370199 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 22:47:24 crc kubenswrapper[4865]: I0216 22:47:24.370210 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:24Z","lastTransitionTime":"2026-02-16T22:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 22:47:24 crc kubenswrapper[4865]: I0216 22:47:24.383899 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 22:47:24 crc kubenswrapper[4865]: I0216 22:47:24.383949 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 22:47:24 crc kubenswrapper[4865]: E0216 22:47:24.384082 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 16 22:47:24 crc kubenswrapper[4865]: E0216 22:47:24.384082 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 16 22:47:24 crc kubenswrapper[4865]: E0216 22:47:24.384098 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 16 22:47:24 crc kubenswrapper[4865]: E0216 22:47:24.384111 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 16 22:47:24 crc kubenswrapper[4865]: E0216 22:47:24.384112 4865 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 16 22:47:24 crc kubenswrapper[4865]: E0216 22:47:24.384122 4865 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 16 22:47:24 crc kubenswrapper[4865]: E0216 22:47:24.384165 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 22:48:28.384152002 +0000 UTC m=+148.707858963 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 16 22:47:24 crc kubenswrapper[4865]: E0216 22:47:24.384178 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 22:48:28.384173623 +0000 UTC m=+148.707880584 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 16 22:47:24 crc kubenswrapper[4865]: I0216 22:47:24.414202 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 22:47:24 crc kubenswrapper[4865]: I0216 22:47:24.414236 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 22:47:24 crc kubenswrapper[4865]: I0216 22:47:24.414202 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 22:47:24 crc kubenswrapper[4865]: E0216 22:47:24.414402 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 22:47:24 crc kubenswrapper[4865]: I0216 22:47:24.414367 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ggbcr"
Feb 16 22:47:24 crc kubenswrapper[4865]: E0216 22:47:24.414559 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 22:47:24 crc kubenswrapper[4865]: E0216 22:47:24.414696 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 22:47:24 crc kubenswrapper[4865]: E0216 22:47:24.414826 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ggbcr" podUID="0e0ca52e-7cb6-4d90-8d0b-4124cce13447"
Feb 16 22:47:24 crc kubenswrapper[4865]: I0216 22:47:24.438536 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 04:16:37.000236814 +0000 UTC
Feb 16 22:47:24 crc kubenswrapper[4865]: I0216 22:47:24.473097 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 22:47:24 crc kubenswrapper[4865]: I0216 22:47:24.473171 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 22:47:24 crc kubenswrapper[4865]: I0216 22:47:24.473183 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 22:47:24 crc kubenswrapper[4865]: I0216 22:47:24.473200 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 22:47:24 crc kubenswrapper[4865]: I0216 22:47:24.473213 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:24Z","lastTransitionTime":"2026-02-16T22:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Feb 16 22:47:25 crc kubenswrapper[4865]: I0216 22:47:25.438775 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 11:37:55.413068676 +0000 UTC Feb 16 22:47:25 crc kubenswrapper[4865]: I0216 22:47:25.507883 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:25 crc kubenswrapper[4865]: I0216 22:47:25.507942 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:25 crc kubenswrapper[4865]: I0216 22:47:25.507958 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:25 crc kubenswrapper[4865]: I0216 22:47:25.507984 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:25 crc kubenswrapper[4865]: I0216 22:47:25.508001 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:25Z","lastTransitionTime":"2026-02-16T22:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:25 crc kubenswrapper[4865]: I0216 22:47:25.611045 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:25 crc kubenswrapper[4865]: I0216 22:47:25.611092 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:25 crc kubenswrapper[4865]: I0216 22:47:25.611106 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:25 crc kubenswrapper[4865]: I0216 22:47:25.611123 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:25 crc kubenswrapper[4865]: I0216 22:47:25.611135 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:25Z","lastTransitionTime":"2026-02-16T22:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:25 crc kubenswrapper[4865]: I0216 22:47:25.715310 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:25 crc kubenswrapper[4865]: I0216 22:47:25.715384 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:25 crc kubenswrapper[4865]: I0216 22:47:25.715401 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:25 crc kubenswrapper[4865]: I0216 22:47:25.715431 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:25 crc kubenswrapper[4865]: I0216 22:47:25.715451 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:25Z","lastTransitionTime":"2026-02-16T22:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:25 crc kubenswrapper[4865]: I0216 22:47:25.818375 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:25 crc kubenswrapper[4865]: I0216 22:47:25.818458 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:25 crc kubenswrapper[4865]: I0216 22:47:25.818482 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:25 crc kubenswrapper[4865]: I0216 22:47:25.818512 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:25 crc kubenswrapper[4865]: I0216 22:47:25.818534 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:25Z","lastTransitionTime":"2026-02-16T22:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:25 crc kubenswrapper[4865]: I0216 22:47:25.922113 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:25 crc kubenswrapper[4865]: I0216 22:47:25.922197 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:25 crc kubenswrapper[4865]: I0216 22:47:25.922216 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:25 crc kubenswrapper[4865]: I0216 22:47:25.922247 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:25 crc kubenswrapper[4865]: I0216 22:47:25.922270 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:25Z","lastTransitionTime":"2026-02-16T22:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:26 crc kubenswrapper[4865]: I0216 22:47:26.026640 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:26 crc kubenswrapper[4865]: I0216 22:47:26.026723 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:26 crc kubenswrapper[4865]: I0216 22:47:26.026747 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:26 crc kubenswrapper[4865]: I0216 22:47:26.026779 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:26 crc kubenswrapper[4865]: I0216 22:47:26.026804 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:26Z","lastTransitionTime":"2026-02-16T22:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:26 crc kubenswrapper[4865]: I0216 22:47:26.130684 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:26 crc kubenswrapper[4865]: I0216 22:47:26.130763 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:26 crc kubenswrapper[4865]: I0216 22:47:26.130786 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:26 crc kubenswrapper[4865]: I0216 22:47:26.130818 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:26 crc kubenswrapper[4865]: I0216 22:47:26.130840 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:26Z","lastTransitionTime":"2026-02-16T22:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:26 crc kubenswrapper[4865]: I0216 22:47:26.234598 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:26 crc kubenswrapper[4865]: I0216 22:47:26.234669 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:26 crc kubenswrapper[4865]: I0216 22:47:26.234687 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:26 crc kubenswrapper[4865]: I0216 22:47:26.234712 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:26 crc kubenswrapper[4865]: I0216 22:47:26.234731 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:26Z","lastTransitionTime":"2026-02-16T22:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:26 crc kubenswrapper[4865]: I0216 22:47:26.337712 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:26 crc kubenswrapper[4865]: I0216 22:47:26.337779 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:26 crc kubenswrapper[4865]: I0216 22:47:26.337797 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:26 crc kubenswrapper[4865]: I0216 22:47:26.337829 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:26 crc kubenswrapper[4865]: I0216 22:47:26.337849 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:26Z","lastTransitionTime":"2026-02-16T22:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:47:26 crc kubenswrapper[4865]: I0216 22:47:26.414030 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 22:47:26 crc kubenswrapper[4865]: E0216 22:47:26.415171 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 22:47:26 crc kubenswrapper[4865]: I0216 22:47:26.414116 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 22:47:26 crc kubenswrapper[4865]: E0216 22:47:26.415330 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 22:47:26 crc kubenswrapper[4865]: I0216 22:47:26.414191 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ggbcr" Feb 16 22:47:26 crc kubenswrapper[4865]: I0216 22:47:26.414100 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 22:47:26 crc kubenswrapper[4865]: I0216 22:47:26.415703 4865 scope.go:117] "RemoveContainer" containerID="be9325937ac87ec492489ef34007a1e234ff7037a0f05d40bd102a85a86cc3c4" Feb 16 22:47:26 crc kubenswrapper[4865]: E0216 22:47:26.415941 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ggbcr" podUID="0e0ca52e-7cb6-4d90-8d0b-4124cce13447" Feb 16 22:47:26 crc kubenswrapper[4865]: E0216 22:47:26.415969 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-v9gjl_openshift-ovn-kubernetes(2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9)\"" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" podUID="2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" Feb 16 22:47:26 crc kubenswrapper[4865]: E0216 22:47:26.416067 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 22:47:26 crc kubenswrapper[4865]: I0216 22:47:26.439007 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 21:49:36.914669913 +0000 UTC Feb 16 22:47:26 crc kubenswrapper[4865]: I0216 22:47:26.441045 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:26 crc kubenswrapper[4865]: I0216 22:47:26.441099 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:26 crc kubenswrapper[4865]: I0216 22:47:26.441121 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:26 crc kubenswrapper[4865]: I0216 22:47:26.441151 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:26 crc kubenswrapper[4865]: 
I0216 22:47:26.441173 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:26Z","lastTransitionTime":"2026-02-16T22:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:47:26 crc kubenswrapper[4865]: I0216 22:47:26.544659 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:26 crc kubenswrapper[4865]: I0216 22:47:26.544736 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:26 crc kubenswrapper[4865]: I0216 22:47:26.544760 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:26 crc kubenswrapper[4865]: I0216 22:47:26.544791 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:26 crc kubenswrapper[4865]: I0216 22:47:26.544812 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:26Z","lastTransitionTime":"2026-02-16T22:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:26 crc kubenswrapper[4865]: I0216 22:47:26.648073 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:26 crc kubenswrapper[4865]: I0216 22:47:26.648141 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:26 crc kubenswrapper[4865]: I0216 22:47:26.648161 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:26 crc kubenswrapper[4865]: I0216 22:47:26.648186 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:26 crc kubenswrapper[4865]: I0216 22:47:26.648203 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:26Z","lastTransitionTime":"2026-02-16T22:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:26 crc kubenswrapper[4865]: I0216 22:47:26.751856 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:26 crc kubenswrapper[4865]: I0216 22:47:26.751959 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:26 crc kubenswrapper[4865]: I0216 22:47:26.751982 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:26 crc kubenswrapper[4865]: I0216 22:47:26.752047 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:26 crc kubenswrapper[4865]: I0216 22:47:26.752069 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:26Z","lastTransitionTime":"2026-02-16T22:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:26 crc kubenswrapper[4865]: I0216 22:47:26.855544 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:26 crc kubenswrapper[4865]: I0216 22:47:26.855605 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:26 crc kubenswrapper[4865]: I0216 22:47:26.855621 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:26 crc kubenswrapper[4865]: I0216 22:47:26.855647 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:26 crc kubenswrapper[4865]: I0216 22:47:26.855665 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:26Z","lastTransitionTime":"2026-02-16T22:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:26 crc kubenswrapper[4865]: I0216 22:47:26.959928 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:26 crc kubenswrapper[4865]: I0216 22:47:26.960012 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:26 crc kubenswrapper[4865]: I0216 22:47:26.960047 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:26 crc kubenswrapper[4865]: I0216 22:47:26.960084 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:26 crc kubenswrapper[4865]: I0216 22:47:26.960107 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:26Z","lastTransitionTime":"2026-02-16T22:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:27 crc kubenswrapper[4865]: I0216 22:47:27.062868 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:27 crc kubenswrapper[4865]: I0216 22:47:27.062930 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:27 crc kubenswrapper[4865]: I0216 22:47:27.062947 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:27 crc kubenswrapper[4865]: I0216 22:47:27.062979 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:27 crc kubenswrapper[4865]: I0216 22:47:27.063002 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:27Z","lastTransitionTime":"2026-02-16T22:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:27 crc kubenswrapper[4865]: I0216 22:47:27.177832 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:27 crc kubenswrapper[4865]: I0216 22:47:27.177949 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:27 crc kubenswrapper[4865]: I0216 22:47:27.177970 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:27 crc kubenswrapper[4865]: I0216 22:47:27.177997 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:27 crc kubenswrapper[4865]: I0216 22:47:27.178016 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:27Z","lastTransitionTime":"2026-02-16T22:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:27 crc kubenswrapper[4865]: I0216 22:47:27.281356 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:27 crc kubenswrapper[4865]: I0216 22:47:27.281445 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:27 crc kubenswrapper[4865]: I0216 22:47:27.281462 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:27 crc kubenswrapper[4865]: I0216 22:47:27.281486 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:27 crc kubenswrapper[4865]: I0216 22:47:27.281502 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:27Z","lastTransitionTime":"2026-02-16T22:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:27 crc kubenswrapper[4865]: I0216 22:47:27.385570 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:27 crc kubenswrapper[4865]: I0216 22:47:27.385632 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:27 crc kubenswrapper[4865]: I0216 22:47:27.385648 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:27 crc kubenswrapper[4865]: I0216 22:47:27.385674 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:27 crc kubenswrapper[4865]: I0216 22:47:27.385692 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:27Z","lastTransitionTime":"2026-02-16T22:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:27 crc kubenswrapper[4865]: I0216 22:47:27.439579 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 07:51:22.882799542 +0000 UTC Feb 16 22:47:27 crc kubenswrapper[4865]: I0216 22:47:27.488598 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:27 crc kubenswrapper[4865]: I0216 22:47:27.488648 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:27 crc kubenswrapper[4865]: I0216 22:47:27.488667 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:27 crc kubenswrapper[4865]: I0216 22:47:27.488692 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:27 crc kubenswrapper[4865]: I0216 22:47:27.488710 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:27Z","lastTransitionTime":"2026-02-16T22:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:27 crc kubenswrapper[4865]: I0216 22:47:27.592477 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:27 crc kubenswrapper[4865]: I0216 22:47:27.592550 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:27 crc kubenswrapper[4865]: I0216 22:47:27.592567 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:27 crc kubenswrapper[4865]: I0216 22:47:27.592595 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:27 crc kubenswrapper[4865]: I0216 22:47:27.592613 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:27Z","lastTransitionTime":"2026-02-16T22:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:27 crc kubenswrapper[4865]: I0216 22:47:27.696423 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:27 crc kubenswrapper[4865]: I0216 22:47:27.696498 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:27 crc kubenswrapper[4865]: I0216 22:47:27.696515 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:27 crc kubenswrapper[4865]: I0216 22:47:27.696545 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:27 crc kubenswrapper[4865]: I0216 22:47:27.696569 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:27Z","lastTransitionTime":"2026-02-16T22:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:27 crc kubenswrapper[4865]: I0216 22:47:27.800953 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:27 crc kubenswrapper[4865]: I0216 22:47:27.801042 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:27 crc kubenswrapper[4865]: I0216 22:47:27.801066 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:27 crc kubenswrapper[4865]: I0216 22:47:27.801097 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:27 crc kubenswrapper[4865]: I0216 22:47:27.801121 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:27Z","lastTransitionTime":"2026-02-16T22:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:27 crc kubenswrapper[4865]: I0216 22:47:27.904674 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:27 crc kubenswrapper[4865]: I0216 22:47:27.904783 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:27 crc kubenswrapper[4865]: I0216 22:47:27.904811 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:27 crc kubenswrapper[4865]: I0216 22:47:27.904845 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:27 crc kubenswrapper[4865]: I0216 22:47:27.904870 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:27Z","lastTransitionTime":"2026-02-16T22:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:28 crc kubenswrapper[4865]: I0216 22:47:28.009018 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:28 crc kubenswrapper[4865]: I0216 22:47:28.009142 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:28 crc kubenswrapper[4865]: I0216 22:47:28.009230 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:28 crc kubenswrapper[4865]: I0216 22:47:28.009260 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:28 crc kubenswrapper[4865]: I0216 22:47:28.009310 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:28Z","lastTransitionTime":"2026-02-16T22:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:28 crc kubenswrapper[4865]: I0216 22:47:28.112356 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:28 crc kubenswrapper[4865]: I0216 22:47:28.112417 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:28 crc kubenswrapper[4865]: I0216 22:47:28.112436 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:28 crc kubenswrapper[4865]: I0216 22:47:28.112469 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:28 crc kubenswrapper[4865]: I0216 22:47:28.112488 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:28Z","lastTransitionTime":"2026-02-16T22:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:28 crc kubenswrapper[4865]: I0216 22:47:28.216214 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:28 crc kubenswrapper[4865]: I0216 22:47:28.216326 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:28 crc kubenswrapper[4865]: I0216 22:47:28.216347 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:28 crc kubenswrapper[4865]: I0216 22:47:28.216374 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:28 crc kubenswrapper[4865]: I0216 22:47:28.216392 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:28Z","lastTransitionTime":"2026-02-16T22:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:28 crc kubenswrapper[4865]: I0216 22:47:28.226911 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:28 crc kubenswrapper[4865]: I0216 22:47:28.226977 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:28 crc kubenswrapper[4865]: I0216 22:47:28.226994 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:28 crc kubenswrapper[4865]: I0216 22:47:28.227020 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:28 crc kubenswrapper[4865]: I0216 22:47:28.227037 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:28Z","lastTransitionTime":"2026-02-16T22:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:28 crc kubenswrapper[4865]: E0216 22:47:28.248470 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:47:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:47:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:47:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:47:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8882e2e1-623c-454f-a6ef-195b25a9cb95\\\",\\\"systemUUID\\\":\\\"e2a2196d-095f-444c-b467-b5377cee59c0\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:28Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:28 crc kubenswrapper[4865]: I0216 22:47:28.254061 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:28 crc kubenswrapper[4865]: I0216 22:47:28.254118 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:28 crc kubenswrapper[4865]: I0216 22:47:28.254164 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:28 crc kubenswrapper[4865]: I0216 22:47:28.254200 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:28 crc kubenswrapper[4865]: I0216 22:47:28.254223 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:28Z","lastTransitionTime":"2026-02-16T22:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:28 crc kubenswrapper[4865]: E0216 22:47:28.277123 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:47:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:47:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:47:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:47:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8882e2e1-623c-454f-a6ef-195b25a9cb95\\\",\\\"systemUUID\\\":\\\"e2a2196d-095f-444c-b467-b5377cee59c0\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:28Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:28 crc kubenswrapper[4865]: I0216 22:47:28.282500 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:28 crc kubenswrapper[4865]: I0216 22:47:28.282554 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:28 crc kubenswrapper[4865]: I0216 22:47:28.282574 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:28 crc kubenswrapper[4865]: I0216 22:47:28.282604 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:28 crc kubenswrapper[4865]: I0216 22:47:28.282623 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:28Z","lastTransitionTime":"2026-02-16T22:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:28 crc kubenswrapper[4865]: E0216 22:47:28.302446 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:47:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:47:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:47:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:47:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8882e2e1-623c-454f-a6ef-195b25a9cb95\\\",\\\"systemUUID\\\":\\\"e2a2196d-095f-444c-b467-b5377cee59c0\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:28Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:28 crc kubenswrapper[4865]: I0216 22:47:28.308364 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:28 crc kubenswrapper[4865]: I0216 22:47:28.308441 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:28 crc kubenswrapper[4865]: I0216 22:47:28.308465 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:28 crc kubenswrapper[4865]: I0216 22:47:28.308497 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:28 crc kubenswrapper[4865]: I0216 22:47:28.308520 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:28Z","lastTransitionTime":"2026-02-16T22:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8882e2e1-623c-454f-a6ef-195b25a9cb95\\\",\\\"systemUUID\\\":\\\"e2a2196d-095f-444c-b467-b5377cee59c0\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:28Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:28 crc kubenswrapper[4865]: E0216 22:47:28.353652 4865 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 22:47:28 crc kubenswrapper[4865]: I0216 22:47:28.355297 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:28 crc kubenswrapper[4865]: I0216 22:47:28.355366 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:28 crc kubenswrapper[4865]: I0216 22:47:28.355383 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:28 crc kubenswrapper[4865]: I0216 22:47:28.355436 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:28 crc kubenswrapper[4865]: I0216 22:47:28.355453 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:28Z","lastTransitionTime":"2026-02-16T22:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:47:28 crc kubenswrapper[4865]: I0216 22:47:28.414368 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ggbcr" Feb 16 22:47:28 crc kubenswrapper[4865]: I0216 22:47:28.414479 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 22:47:28 crc kubenswrapper[4865]: I0216 22:47:28.414497 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 22:47:28 crc kubenswrapper[4865]: E0216 22:47:28.414577 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ggbcr" podUID="0e0ca52e-7cb6-4d90-8d0b-4124cce13447" Feb 16 22:47:28 crc kubenswrapper[4865]: I0216 22:47:28.414688 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 22:47:28 crc kubenswrapper[4865]: E0216 22:47:28.414698 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 22:47:28 crc kubenswrapper[4865]: E0216 22:47:28.414835 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 22:47:28 crc kubenswrapper[4865]: E0216 22:47:28.415012 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 22:47:28 crc kubenswrapper[4865]: I0216 22:47:28.440083 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 17:13:43.41609222 +0000 UTC Feb 16 22:47:28 crc kubenswrapper[4865]: I0216 22:47:28.458719 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:28 crc kubenswrapper[4865]: I0216 22:47:28.458762 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:28 crc kubenswrapper[4865]: I0216 22:47:28.458779 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:28 crc kubenswrapper[4865]: I0216 22:47:28.458804 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" 
Feb 16 22:47:28 crc kubenswrapper[4865]: I0216 22:47:28.458822 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:28Z","lastTransitionTime":"2026-02-16T22:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:47:28 crc kubenswrapper[4865]: I0216 22:47:28.562209 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:28 crc kubenswrapper[4865]: I0216 22:47:28.562271 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:28 crc kubenswrapper[4865]: I0216 22:47:28.562308 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:28 crc kubenswrapper[4865]: I0216 22:47:28.562332 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:28 crc kubenswrapper[4865]: I0216 22:47:28.562346 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:28Z","lastTransitionTime":"2026-02-16T22:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:28 crc kubenswrapper[4865]: I0216 22:47:28.665672 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:28 crc kubenswrapper[4865]: I0216 22:47:28.665752 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:28 crc kubenswrapper[4865]: I0216 22:47:28.665770 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:28 crc kubenswrapper[4865]: I0216 22:47:28.665800 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:28 crc kubenswrapper[4865]: I0216 22:47:28.665821 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:28Z","lastTransitionTime":"2026-02-16T22:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:28 crc kubenswrapper[4865]: I0216 22:47:28.768779 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:28 crc kubenswrapper[4865]: I0216 22:47:28.768849 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:28 crc kubenswrapper[4865]: I0216 22:47:28.768867 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:28 crc kubenswrapper[4865]: I0216 22:47:28.768898 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:28 crc kubenswrapper[4865]: I0216 22:47:28.768921 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:28Z","lastTransitionTime":"2026-02-16T22:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:28 crc kubenswrapper[4865]: I0216 22:47:28.871924 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:28 crc kubenswrapper[4865]: I0216 22:47:28.871991 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:28 crc kubenswrapper[4865]: I0216 22:47:28.872014 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:28 crc kubenswrapper[4865]: I0216 22:47:28.872046 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:28 crc kubenswrapper[4865]: I0216 22:47:28.872129 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:28Z","lastTransitionTime":"2026-02-16T22:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:28 crc kubenswrapper[4865]: I0216 22:47:28.974964 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:28 crc kubenswrapper[4865]: I0216 22:47:28.975030 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:28 crc kubenswrapper[4865]: I0216 22:47:28.975047 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:28 crc kubenswrapper[4865]: I0216 22:47:28.975075 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:28 crc kubenswrapper[4865]: I0216 22:47:28.975095 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:28Z","lastTransitionTime":"2026-02-16T22:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:29 crc kubenswrapper[4865]: I0216 22:47:29.078461 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:29 crc kubenswrapper[4865]: I0216 22:47:29.078538 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:29 crc kubenswrapper[4865]: I0216 22:47:29.078561 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:29 crc kubenswrapper[4865]: I0216 22:47:29.078592 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:29 crc kubenswrapper[4865]: I0216 22:47:29.078610 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:29Z","lastTransitionTime":"2026-02-16T22:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:29 crc kubenswrapper[4865]: I0216 22:47:29.181894 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:29 crc kubenswrapper[4865]: I0216 22:47:29.181953 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:29 crc kubenswrapper[4865]: I0216 22:47:29.181972 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:29 crc kubenswrapper[4865]: I0216 22:47:29.181996 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:29 crc kubenswrapper[4865]: I0216 22:47:29.182014 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:29Z","lastTransitionTime":"2026-02-16T22:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:29 crc kubenswrapper[4865]: I0216 22:47:29.285628 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:29 crc kubenswrapper[4865]: I0216 22:47:29.285700 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:29 crc kubenswrapper[4865]: I0216 22:47:29.285720 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:29 crc kubenswrapper[4865]: I0216 22:47:29.285741 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:29 crc kubenswrapper[4865]: I0216 22:47:29.285889 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:29Z","lastTransitionTime":"2026-02-16T22:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:29 crc kubenswrapper[4865]: I0216 22:47:29.390439 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:29 crc kubenswrapper[4865]: I0216 22:47:29.390553 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:29 crc kubenswrapper[4865]: I0216 22:47:29.390576 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:29 crc kubenswrapper[4865]: I0216 22:47:29.390640 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:29 crc kubenswrapper[4865]: I0216 22:47:29.390663 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:29Z","lastTransitionTime":"2026-02-16T22:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:29 crc kubenswrapper[4865]: I0216 22:47:29.441095 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 12:08:24.421903723 +0000 UTC Feb 16 22:47:29 crc kubenswrapper[4865]: I0216 22:47:29.493451 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:29 crc kubenswrapper[4865]: I0216 22:47:29.493566 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:29 crc kubenswrapper[4865]: I0216 22:47:29.493590 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:29 crc kubenswrapper[4865]: I0216 22:47:29.493625 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:29 crc kubenswrapper[4865]: I0216 22:47:29.493649 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:29Z","lastTransitionTime":"2026-02-16T22:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:29 crc kubenswrapper[4865]: I0216 22:47:29.596872 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:29 crc kubenswrapper[4865]: I0216 22:47:29.596942 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:29 crc kubenswrapper[4865]: I0216 22:47:29.596966 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:29 crc kubenswrapper[4865]: I0216 22:47:29.596994 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:29 crc kubenswrapper[4865]: I0216 22:47:29.597014 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:29Z","lastTransitionTime":"2026-02-16T22:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:29 crc kubenswrapper[4865]: I0216 22:47:29.700779 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:29 crc kubenswrapper[4865]: I0216 22:47:29.700843 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:29 crc kubenswrapper[4865]: I0216 22:47:29.700860 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:29 crc kubenswrapper[4865]: I0216 22:47:29.700886 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:29 crc kubenswrapper[4865]: I0216 22:47:29.700903 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:29Z","lastTransitionTime":"2026-02-16T22:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:29 crc kubenswrapper[4865]: I0216 22:47:29.804640 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:29 crc kubenswrapper[4865]: I0216 22:47:29.804708 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:29 crc kubenswrapper[4865]: I0216 22:47:29.804728 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:29 crc kubenswrapper[4865]: I0216 22:47:29.804755 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:29 crc kubenswrapper[4865]: I0216 22:47:29.804775 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:29Z","lastTransitionTime":"2026-02-16T22:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:29 crc kubenswrapper[4865]: I0216 22:47:29.908478 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:29 crc kubenswrapper[4865]: I0216 22:47:29.908551 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:29 crc kubenswrapper[4865]: I0216 22:47:29.908569 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:29 crc kubenswrapper[4865]: I0216 22:47:29.908597 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:29 crc kubenswrapper[4865]: I0216 22:47:29.908617 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:29Z","lastTransitionTime":"2026-02-16T22:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:30 crc kubenswrapper[4865]: I0216 22:47:30.019127 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:30 crc kubenswrapper[4865]: I0216 22:47:30.019202 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:30 crc kubenswrapper[4865]: I0216 22:47:30.019226 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:30 crc kubenswrapper[4865]: I0216 22:47:30.019258 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:30 crc kubenswrapper[4865]: I0216 22:47:30.019285 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:30Z","lastTransitionTime":"2026-02-16T22:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:30 crc kubenswrapper[4865]: I0216 22:47:30.122755 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:30 crc kubenswrapper[4865]: I0216 22:47:30.122813 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:30 crc kubenswrapper[4865]: I0216 22:47:30.122834 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:30 crc kubenswrapper[4865]: I0216 22:47:30.122862 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:30 crc kubenswrapper[4865]: I0216 22:47:30.122885 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:30Z","lastTransitionTime":"2026-02-16T22:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:30 crc kubenswrapper[4865]: I0216 22:47:30.225540 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:30 crc kubenswrapper[4865]: I0216 22:47:30.225605 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:30 crc kubenswrapper[4865]: I0216 22:47:30.225630 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:30 crc kubenswrapper[4865]: I0216 22:47:30.225661 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:30 crc kubenswrapper[4865]: I0216 22:47:30.225685 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:30Z","lastTransitionTime":"2026-02-16T22:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:30 crc kubenswrapper[4865]: I0216 22:47:30.328886 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:30 crc kubenswrapper[4865]: I0216 22:47:30.328969 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:30 crc kubenswrapper[4865]: I0216 22:47:30.328996 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:30 crc kubenswrapper[4865]: I0216 22:47:30.329022 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:30 crc kubenswrapper[4865]: I0216 22:47:30.329039 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:30Z","lastTransitionTime":"2026-02-16T22:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:47:30 crc kubenswrapper[4865]: I0216 22:47:30.414398 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 22:47:30 crc kubenswrapper[4865]: I0216 22:47:30.414406 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 22:47:30 crc kubenswrapper[4865]: I0216 22:47:30.414410 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 22:47:30 crc kubenswrapper[4865]: I0216 22:47:30.414439 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-ggbcr" Feb 16 22:47:30 crc kubenswrapper[4865]: E0216 22:47:30.414711 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 22:47:30 crc kubenswrapper[4865]: E0216 22:47:30.414835 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 22:47:30 crc kubenswrapper[4865]: E0216 22:47:30.414939 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 22:47:30 crc kubenswrapper[4865]: E0216 22:47:30.415153 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ggbcr" podUID="0e0ca52e-7cb6-4d90-8d0b-4124cce13447" Feb 16 22:47:30 crc kubenswrapper[4865]: I0216 22:47:30.433123 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:30 crc kubenswrapper[4865]: I0216 22:47:30.433187 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:30 crc kubenswrapper[4865]: I0216 22:47:30.433207 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:30 crc kubenswrapper[4865]: I0216 22:47:30.433237 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:30 crc kubenswrapper[4865]: I0216 22:47:30.433261 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:30Z","lastTransitionTime":"2026-02-16T22:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:30 crc kubenswrapper[4865]: I0216 22:47:30.436078 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67de42bb03576bab461ec24db86f0fb38229d2b303d174c1892468bcfd20a1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:30Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:30 crc kubenswrapper[4865]: I0216 22:47:30.441880 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 00:29:43.713898482 +0000 UTC Feb 16 22:47:30 crc kubenswrapper[4865]: I0216 22:47:30.452354 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af5ee041-5763-4a28-9d12-7ba21bbb9dbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5377362d269a2fbd24e93ddf0ce2803daadff72dd5b6926424e2d818448eb907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4e
f318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c0be88c4425b343893860e323d0e9c1a661e3cd0cadad6714da67ebadf57bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7sl6f\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:30Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:30 crc kubenswrapper[4865]: I0216 22:47:30.472435 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4rgl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0091174cc072ed0bc0bfa1a1501b4fa2b801585525dae159b591213c74ddbf5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240fca5a6169674d04d04924dbb1859d9cfad4ebaf3bcb15b685d1dcb52d6879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://240fca5a6169674d04d04924dbb1859d9cfad4ebaf3bcb15b685d1dcb52d6879\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba2a6ff3b821f9bd20c15af6000c4b04b7382cca4bddba49c3cd3a95bf18661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ba2a6ff3b821f9bd20c15af6000c4b04b7382cca4bddba49c3cd3a95bf18661\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e050e591eac20bf9ccc57f3c4eac82e3a420e9e850ad4057ce69c236a51dd5ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e050e591eac20bf9ccc57f3c4eac82e3a420e9e850ad4057ce69c236a51dd5ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df6fbcf30225717389e253f415a98dcb258103cf591c6802daeb657018386ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1df6fbcf30225717389e253f415a98dcb258103cf591c6802daeb657018386ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9e249b0c4127dfa42d0aeb4663cec7ecc73cec31a922fc17a0c772cce72c017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc
4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9e249b0c4127dfa42d0aeb4663cec7ecc73cec31a922fc17a0c772cce72c017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de26abfeb26481d43117471fa3512ef69ab782712371782639c24c8232786478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de26abfeb26481d43117471fa3512ef69ab782712371782639c24c8232786478\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\
\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4rgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:30Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:30 crc kubenswrapper[4865]: I0216 22:47:30.487937 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rhf5k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec3dce5a-cc80-4a4a-afa1-1ce88585fe7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af23c5919f064a067e1f6ffd9bc
fc669c95f789fecc58b6de768c838fff8eeb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42pjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rhf5k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:30Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:30 crc kubenswrapper[4865]: I0216 22:47:30.521136 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f763f86f21e0ae26d6da8694383bf65bf9d2ee2cc3f6331559455f18c1a5ea12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdf0a2e5286d172d17780b0ba829a7b1365b3fcb130069d7497b207041e54d67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1d5f61857de97cd302040f99ba0cf8f6aa30fa344ee9bb5d486222bb59280f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48168de44ee347280252a85da7abba068922ce7558bd01c8e1d11af13f305dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fbb61043059110013736d3d693e6d602b199a4c62907436a4eb88aae192cd51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://decf362b373d3e8cc82caf2afe0405a61ec72f794680345ee6bd4d7acee32cd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be9325937ac87ec492489ef34007a1e234ff7037a0f05d40bd102a85a86cc3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be9325937ac87ec492489ef34007a1e234ff7037a0f05d40bd102a85a86cc3c4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T22:47:12Z\\\",\\\"message\\\":\\\"sip/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 22:47:12.285919 6904 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 22:47:12.285957 6904 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 22:47:12.285976 6904 handler.go:190] Sending *v1.EgressIP event handler 8 for 
removal\\\\nI0216 22:47:12.286002 6904 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 22:47:12.286006 6904 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 22:47:12.286039 6904 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 22:47:12.286050 6904 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 22:47:12.286058 6904 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 22:47:12.286048 6904 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 22:47:12.286081 6904 factory.go:656] Stopping watch factory\\\\nI0216 22:47:12.286112 6904 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 22:47:12.286147 6904 ovnkube.go:599] Stopped ovnkube\\\\nI0216 22:47:12.286106 6904 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 22:47:12.286077 6904 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 22:47:12.286096 6904 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI02\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T22:47:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-v9gjl_openshift-ovn-kubernetes(2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dee85f35d78a0b30018c179a53dd39cbb1b0edfa9897290cd13fdc9680461239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fffded71eee2859beb8f01894e183602492f5c41373ef0e23ee150159d01ae4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fffded71eee2859be
b8f01894e183602492f5c41373ef0e23ee150159d01ae4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v9gjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:30Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:30 crc kubenswrapper[4865]: I0216 22:47:30.536920 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:30 crc kubenswrapper[4865]: I0216 22:47:30.536991 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:30 crc kubenswrapper[4865]: I0216 22:47:30.537009 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:30 crc kubenswrapper[4865]: I0216 22:47:30.537038 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:30 crc kubenswrapper[4865]: I0216 22:47:30.537096 4865 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:30Z","lastTransitionTime":"2026-02-16T22:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:47:30 crc kubenswrapper[4865]: I0216 22:47:30.538637 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba5e20b2-c6a8-4490-bdd1-b03a9d14eee8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c16206ebdb465c684bdb96390345673597700b4166d0a59289dd67034aea952b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce861ebc96a126990d75e7c0f5da5034e3581ba979c0cda0b0ec809303e1de34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce861ebc96a126990d75e7c0f5da5034e3581ba979c0cda0b0ec809303e1de34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:30Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:30 crc kubenswrapper[4865]: I0216 22:47:30.560982 4865 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8eb70ba-80dc-4449-b629-8f2d3462ac25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9420ab4fc2cfb723f98690b12676378560f2ca512ea2f23ea4108561bcf0a83b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d5f3b449d7f497dd16213f7a376d725ed75adaeb27449fa2005893b85cd8f9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-clu
ster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ddcd1ac2d814341fef78f1d3dcb8d8e04d5e8506ca793c023085f0e019fc44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1932a518fb9c7c514939c9efdb9f5ef9b52fa277bf27931a2fe29f04c486141\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e74a443b63735422bd0f5ad851213f393074c4bda570e41ff102c84aa9a
c2929\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 22:46:20.218565 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 22:46:20.218864 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 22:46:20.220590 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3068580495/tls.crt::/tmp/serving-cert-3068580495/tls.key\\\\\\\"\\\\nI0216 22:46:20.677081 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 22:46:20.691270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 22:46:20.691373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 22:46:20.698439 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 22:46:20.698472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 22:46:20.713488 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 22:46:20.713510 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 22:46:20.713535 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 22:46:20.713543 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 22:46:20.713547 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 22:46:20.713551 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 22:46:20.713555 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 22:46:20.713558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 22:46:20.715166 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://083e95882332fbcae3b7e3c07a19c05645e560b7c5e92813464a5ca4e5e52a26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e53d3f26f921be607e616b7c8790e3be0605a2b94a3f7be2b62e1beaefb350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"
state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3e53d3f26f921be607e616b7c8790e3be0605a2b94a3f7be2b62e1beaefb350\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:30Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:30 crc kubenswrapper[4865]: I0216 22:47:30.579457 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17957dad-fc4e-47f6-9008-142bf7ab12b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://416e6ac61265959d9c4895987503c390040a6d4ecdf12c0530ded6fc721bfaa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be54b2a9b6551641dce2e8e886f04d8a9cf76399a46d6ba0533bfcf766453631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1757b170055a16e65276f524bda421fa929e62866ebe19163d6558e551f6d7ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3488da6d63933ae020077a4fb277b53e4e85bd75cac43d6c64bb962472a832f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://a3488da6d63933ae020077a4fb277b53e4e85bd75cac43d6c64bb962472a832f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:30Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:30 crc kubenswrapper[4865]: I0216 22:47:30.598412 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:30Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:30 crc kubenswrapper[4865]: I0216 22:47:30.618971 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tqmsq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"518e6107-6873-4bd2-86a6-e422763483ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74e1013d02cb01d60662d6619c25c7a40de7bd7bde7f0239007c54dba55879cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c555b648518de93a1b9ecb1bb8b232aee9016dd148db53b2b4aaf96dd23951c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T22:47:08Z\\\",\\\"message\\\":\\\"2026-02-16T22:46:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_69f8d7f9-9d98-4d01-a8d5-29073da708ad\\\\n2026-02-16T22:46:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_69f8d7f9-9d98-4d01-a8d5-29073da708ad to /host/opt/cni/bin/\\\\n2026-02-16T22:46:23Z [verbose] multus-daemon started\\\\n2026-02-16T22:46:23Z [verbose] 
Readiness Indicator file check\\\\n2026-02-16T22:47:08Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwt79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tqmsq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:30Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:30 crc kubenswrapper[4865]: I0216 22:47:30.637904 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-shf2n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d979a250-e586-4f45-b78e-ce99dbdbe9a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c13351b5aadf20e7142d197d9ae8333bff29ca6c3f7668c93f0dd510b52eb51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df29cc453707021c661a9f53e979fcab4c05a
00e4dcef9f85a727168f1e78ed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-shf2n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:30Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:30 crc kubenswrapper[4865]: I0216 22:47:30.643669 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:30 crc kubenswrapper[4865]: I0216 22:47:30.643760 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:30 crc kubenswrapper[4865]: I0216 22:47:30.643777 4865 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:30 crc kubenswrapper[4865]: I0216 22:47:30.643804 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:30 crc kubenswrapper[4865]: I0216 22:47:30.643826 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:30Z","lastTransitionTime":"2026-02-16T22:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:47:30 crc kubenswrapper[4865]: I0216 22:47:30.656404 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ggbcr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e0ca52e-7cb6-4d90-8d0b-4124cce13447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt54\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt54\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ggbcr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:30Z is after 2025-08-24T17:21:41Z" Feb 
16 22:47:30 crc kubenswrapper[4865]: I0216 22:47:30.678327 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://580bc5f5c63d040c31c80d1c34c70711c035b80d8bd34546ba77e2477a938304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:30Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:30 crc kubenswrapper[4865]: I0216 22:47:30.697684 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:30Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:30 crc kubenswrapper[4865]: I0216 22:47:30.717538 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:30Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:30 crc kubenswrapper[4865]: I0216 22:47:30.737881 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5ed4f9f-4e35-4b99-a523-d9cd6f90b1fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c3c8e1511bfaab96c1e33d3e1d1dc227344cb92c64170dc40eef40341f9282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef4fbb630ee354131544dade7633424757a1a1b885fec2e271da25984dbec3ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f74aeafd4eb42c1fa802e03dd3df19fc25c3e5546e453008c167955d336ca2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca218b91fbbe3033c021358958b45efb93f3c1875a2d18446d682ad80ff55122\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:30Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:30 crc kubenswrapper[4865]: I0216 22:47:30.746503 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:30 crc kubenswrapper[4865]: I0216 22:47:30.746564 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:30 crc kubenswrapper[4865]: I0216 22:47:30.746582 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:30 crc kubenswrapper[4865]: I0216 22:47:30.746613 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:30 crc kubenswrapper[4865]: I0216 22:47:30.746633 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:30Z","lastTransitionTime":"2026-02-16T22:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:47:30 crc kubenswrapper[4865]: I0216 22:47:30.758358 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5d9c6929ae494c038ff488c2fee47aad62d31a677069fab576505bc1c56c3ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd10e858
1ae8a5ec147d964363cf296c1ea48987e27edfaaf23a4f2cc3b8806c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:30Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:30 crc kubenswrapper[4865]: I0216 22:47:30.774417 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p2bwl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fc94743-41ce-4311-b0a7-d24aec69e9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afb2eef5907b2fb1f43b59f3b8afb68253d49f3a83f0cdc5cd10df0e45962156\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mntqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p2bwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:30Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:30 crc kubenswrapper[4865]: I0216 22:47:30.849904 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:30 crc kubenswrapper[4865]: I0216 22:47:30.849959 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:30 crc kubenswrapper[4865]: I0216 22:47:30.849975 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:30 crc kubenswrapper[4865]: I0216 22:47:30.850000 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:30 crc kubenswrapper[4865]: I0216 22:47:30.850015 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:30Z","lastTransitionTime":"2026-02-16T22:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:30 crc kubenswrapper[4865]: I0216 22:47:30.952995 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:30 crc kubenswrapper[4865]: I0216 22:47:30.953035 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:30 crc kubenswrapper[4865]: I0216 22:47:30.953052 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:30 crc kubenswrapper[4865]: I0216 22:47:30.953074 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:30 crc kubenswrapper[4865]: I0216 22:47:30.953091 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:30Z","lastTransitionTime":"2026-02-16T22:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:31 crc kubenswrapper[4865]: I0216 22:47:31.055578 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:31 crc kubenswrapper[4865]: I0216 22:47:31.055641 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:31 crc kubenswrapper[4865]: I0216 22:47:31.055660 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:31 crc kubenswrapper[4865]: I0216 22:47:31.055693 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:31 crc kubenswrapper[4865]: I0216 22:47:31.055712 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:31Z","lastTransitionTime":"2026-02-16T22:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:31 crc kubenswrapper[4865]: I0216 22:47:31.159154 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:31 crc kubenswrapper[4865]: I0216 22:47:31.159207 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:31 crc kubenswrapper[4865]: I0216 22:47:31.159223 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:31 crc kubenswrapper[4865]: I0216 22:47:31.159246 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:31 crc kubenswrapper[4865]: I0216 22:47:31.159265 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:31Z","lastTransitionTime":"2026-02-16T22:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:31 crc kubenswrapper[4865]: I0216 22:47:31.262762 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:31 crc kubenswrapper[4865]: I0216 22:47:31.262825 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:31 crc kubenswrapper[4865]: I0216 22:47:31.262850 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:31 crc kubenswrapper[4865]: I0216 22:47:31.262881 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:31 crc kubenswrapper[4865]: I0216 22:47:31.262903 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:31Z","lastTransitionTime":"2026-02-16T22:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:31 crc kubenswrapper[4865]: I0216 22:47:31.365196 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:31 crc kubenswrapper[4865]: I0216 22:47:31.365242 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:31 crc kubenswrapper[4865]: I0216 22:47:31.365257 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:31 crc kubenswrapper[4865]: I0216 22:47:31.365321 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:31 crc kubenswrapper[4865]: I0216 22:47:31.365344 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:31Z","lastTransitionTime":"2026-02-16T22:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:31 crc kubenswrapper[4865]: I0216 22:47:31.442353 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 08:58:20.631092922 +0000 UTC Feb 16 22:47:31 crc kubenswrapper[4865]: I0216 22:47:31.468515 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:31 crc kubenswrapper[4865]: I0216 22:47:31.468577 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:31 crc kubenswrapper[4865]: I0216 22:47:31.468600 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:31 crc kubenswrapper[4865]: I0216 22:47:31.468628 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:31 crc kubenswrapper[4865]: I0216 22:47:31.468652 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:31Z","lastTransitionTime":"2026-02-16T22:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:31 crc kubenswrapper[4865]: I0216 22:47:31.571454 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:31 crc kubenswrapper[4865]: I0216 22:47:31.571520 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:31 crc kubenswrapper[4865]: I0216 22:47:31.571542 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:31 crc kubenswrapper[4865]: I0216 22:47:31.571572 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:31 crc kubenswrapper[4865]: I0216 22:47:31.571592 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:31Z","lastTransitionTime":"2026-02-16T22:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:31 crc kubenswrapper[4865]: I0216 22:47:31.674956 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:31 crc kubenswrapper[4865]: I0216 22:47:31.675015 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:31 crc kubenswrapper[4865]: I0216 22:47:31.675034 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:31 crc kubenswrapper[4865]: I0216 22:47:31.675093 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:31 crc kubenswrapper[4865]: I0216 22:47:31.675112 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:31Z","lastTransitionTime":"2026-02-16T22:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:31 crc kubenswrapper[4865]: I0216 22:47:31.777839 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:31 crc kubenswrapper[4865]: I0216 22:47:31.777918 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:31 crc kubenswrapper[4865]: I0216 22:47:31.777973 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:31 crc kubenswrapper[4865]: I0216 22:47:31.778004 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:31 crc kubenswrapper[4865]: I0216 22:47:31.778025 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:31Z","lastTransitionTime":"2026-02-16T22:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:31 crc kubenswrapper[4865]: I0216 22:47:31.881386 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:31 crc kubenswrapper[4865]: I0216 22:47:31.881433 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:31 crc kubenswrapper[4865]: I0216 22:47:31.881449 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:31 crc kubenswrapper[4865]: I0216 22:47:31.881472 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:31 crc kubenswrapper[4865]: I0216 22:47:31.881492 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:31Z","lastTransitionTime":"2026-02-16T22:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:31 crc kubenswrapper[4865]: I0216 22:47:31.984721 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:31 crc kubenswrapper[4865]: I0216 22:47:31.984790 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:31 crc kubenswrapper[4865]: I0216 22:47:31.984812 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:31 crc kubenswrapper[4865]: I0216 22:47:31.984839 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:31 crc kubenswrapper[4865]: I0216 22:47:31.984857 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:31Z","lastTransitionTime":"2026-02-16T22:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:32 crc kubenswrapper[4865]: I0216 22:47:32.088541 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:32 crc kubenswrapper[4865]: I0216 22:47:32.088619 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:32 crc kubenswrapper[4865]: I0216 22:47:32.088638 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:32 crc kubenswrapper[4865]: I0216 22:47:32.088667 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:32 crc kubenswrapper[4865]: I0216 22:47:32.088687 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:32Z","lastTransitionTime":"2026-02-16T22:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:32 crc kubenswrapper[4865]: I0216 22:47:32.191393 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:32 crc kubenswrapper[4865]: I0216 22:47:32.191483 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:32 crc kubenswrapper[4865]: I0216 22:47:32.191514 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:32 crc kubenswrapper[4865]: I0216 22:47:32.191545 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:32 crc kubenswrapper[4865]: I0216 22:47:32.191569 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:32Z","lastTransitionTime":"2026-02-16T22:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:32 crc kubenswrapper[4865]: I0216 22:47:32.295121 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:32 crc kubenswrapper[4865]: I0216 22:47:32.295249 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:32 crc kubenswrapper[4865]: I0216 22:47:32.295271 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:32 crc kubenswrapper[4865]: I0216 22:47:32.295322 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:32 crc kubenswrapper[4865]: I0216 22:47:32.295347 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:32Z","lastTransitionTime":"2026-02-16T22:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:32 crc kubenswrapper[4865]: I0216 22:47:32.398182 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:32 crc kubenswrapper[4865]: I0216 22:47:32.398399 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:32 crc kubenswrapper[4865]: I0216 22:47:32.398423 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:32 crc kubenswrapper[4865]: I0216 22:47:32.398451 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:32 crc kubenswrapper[4865]: I0216 22:47:32.398471 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:32Z","lastTransitionTime":"2026-02-16T22:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:47:32 crc kubenswrapper[4865]: I0216 22:47:32.413914 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 22:47:32 crc kubenswrapper[4865]: I0216 22:47:32.414046 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ggbcr" Feb 16 22:47:32 crc kubenswrapper[4865]: E0216 22:47:32.414129 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 22:47:32 crc kubenswrapper[4865]: I0216 22:47:32.414158 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 22:47:32 crc kubenswrapper[4865]: I0216 22:47:32.414146 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 22:47:32 crc kubenswrapper[4865]: E0216 22:47:32.414372 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ggbcr" podUID="0e0ca52e-7cb6-4d90-8d0b-4124cce13447" Feb 16 22:47:32 crc kubenswrapper[4865]: E0216 22:47:32.414533 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 22:47:32 crc kubenswrapper[4865]: E0216 22:47:32.414661 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 22:47:32 crc kubenswrapper[4865]: I0216 22:47:32.443126 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 17:18:18.174780946 +0000 UTC Feb 16 22:47:32 crc kubenswrapper[4865]: I0216 22:47:32.501347 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:32 crc kubenswrapper[4865]: I0216 22:47:32.501426 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:32 crc kubenswrapper[4865]: I0216 22:47:32.501448 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:32 crc kubenswrapper[4865]: I0216 22:47:32.501483 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:32 crc kubenswrapper[4865]: I0216 22:47:32.501507 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:32Z","lastTransitionTime":"2026-02-16T22:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:32 crc kubenswrapper[4865]: I0216 22:47:32.605666 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:32 crc kubenswrapper[4865]: I0216 22:47:32.605754 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:32 crc kubenswrapper[4865]: I0216 22:47:32.605865 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:32 crc kubenswrapper[4865]: I0216 22:47:32.605956 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:32 crc kubenswrapper[4865]: I0216 22:47:32.605988 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:32Z","lastTransitionTime":"2026-02-16T22:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:32 crc kubenswrapper[4865]: I0216 22:47:32.709666 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:32 crc kubenswrapper[4865]: I0216 22:47:32.709740 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:32 crc kubenswrapper[4865]: I0216 22:47:32.709759 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:32 crc kubenswrapper[4865]: I0216 22:47:32.709787 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:32 crc kubenswrapper[4865]: I0216 22:47:32.709808 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:32Z","lastTransitionTime":"2026-02-16T22:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:32 crc kubenswrapper[4865]: I0216 22:47:32.812790 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:32 crc kubenswrapper[4865]: I0216 22:47:32.812843 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:32 crc kubenswrapper[4865]: I0216 22:47:32.812861 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:32 crc kubenswrapper[4865]: I0216 22:47:32.812884 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:32 crc kubenswrapper[4865]: I0216 22:47:32.812902 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:32Z","lastTransitionTime":"2026-02-16T22:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:32 crc kubenswrapper[4865]: I0216 22:47:32.915284 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:32 crc kubenswrapper[4865]: I0216 22:47:32.915358 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:32 crc kubenswrapper[4865]: I0216 22:47:32.915369 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:32 crc kubenswrapper[4865]: I0216 22:47:32.915388 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:32 crc kubenswrapper[4865]: I0216 22:47:32.915401 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:32Z","lastTransitionTime":"2026-02-16T22:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:33 crc kubenswrapper[4865]: I0216 22:47:33.018474 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:33 crc kubenswrapper[4865]: I0216 22:47:33.018521 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:33 crc kubenswrapper[4865]: I0216 22:47:33.018532 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:33 crc kubenswrapper[4865]: I0216 22:47:33.018552 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:33 crc kubenswrapper[4865]: I0216 22:47:33.018565 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:33Z","lastTransitionTime":"2026-02-16T22:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:33 crc kubenswrapper[4865]: I0216 22:47:33.121720 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:33 crc kubenswrapper[4865]: I0216 22:47:33.121794 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:33 crc kubenswrapper[4865]: I0216 22:47:33.121812 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:33 crc kubenswrapper[4865]: I0216 22:47:33.121838 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:33 crc kubenswrapper[4865]: I0216 22:47:33.121861 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:33Z","lastTransitionTime":"2026-02-16T22:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:33 crc kubenswrapper[4865]: I0216 22:47:33.224831 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:33 crc kubenswrapper[4865]: I0216 22:47:33.224912 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:33 crc kubenswrapper[4865]: I0216 22:47:33.224941 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:33 crc kubenswrapper[4865]: I0216 22:47:33.224969 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:33 crc kubenswrapper[4865]: I0216 22:47:33.224993 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:33Z","lastTransitionTime":"2026-02-16T22:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:33 crc kubenswrapper[4865]: I0216 22:47:33.328809 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:33 crc kubenswrapper[4865]: I0216 22:47:33.328873 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:33 crc kubenswrapper[4865]: I0216 22:47:33.328891 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:33 crc kubenswrapper[4865]: I0216 22:47:33.328916 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:33 crc kubenswrapper[4865]: I0216 22:47:33.328937 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:33Z","lastTransitionTime":"2026-02-16T22:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:33 crc kubenswrapper[4865]: I0216 22:47:33.432887 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:33 crc kubenswrapper[4865]: I0216 22:47:33.432979 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:33 crc kubenswrapper[4865]: I0216 22:47:33.433007 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:33 crc kubenswrapper[4865]: I0216 22:47:33.433040 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:33 crc kubenswrapper[4865]: I0216 22:47:33.433063 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:33Z","lastTransitionTime":"2026-02-16T22:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:33 crc kubenswrapper[4865]: I0216 22:47:33.444129 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 15:23:52.353198955 +0000 UTC Feb 16 22:47:33 crc kubenswrapper[4865]: I0216 22:47:33.537065 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:33 crc kubenswrapper[4865]: I0216 22:47:33.537160 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:33 crc kubenswrapper[4865]: I0216 22:47:33.537187 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:33 crc kubenswrapper[4865]: I0216 22:47:33.537233 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:33 crc kubenswrapper[4865]: I0216 22:47:33.537253 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:33Z","lastTransitionTime":"2026-02-16T22:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:33 crc kubenswrapper[4865]: I0216 22:47:33.639872 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:33 crc kubenswrapper[4865]: I0216 22:47:33.639949 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:33 crc kubenswrapper[4865]: I0216 22:47:33.640000 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:33 crc kubenswrapper[4865]: I0216 22:47:33.640029 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:33 crc kubenswrapper[4865]: I0216 22:47:33.640047 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:33Z","lastTransitionTime":"2026-02-16T22:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:33 crc kubenswrapper[4865]: I0216 22:47:33.752792 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:33 crc kubenswrapper[4865]: I0216 22:47:33.752864 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:33 crc kubenswrapper[4865]: I0216 22:47:33.752883 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:33 crc kubenswrapper[4865]: I0216 22:47:33.752911 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:33 crc kubenswrapper[4865]: I0216 22:47:33.752930 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:33Z","lastTransitionTime":"2026-02-16T22:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:33 crc kubenswrapper[4865]: I0216 22:47:33.856758 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:33 crc kubenswrapper[4865]: I0216 22:47:33.856825 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:33 crc kubenswrapper[4865]: I0216 22:47:33.856843 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:33 crc kubenswrapper[4865]: I0216 22:47:33.856870 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:33 crc kubenswrapper[4865]: I0216 22:47:33.856889 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:33Z","lastTransitionTime":"2026-02-16T22:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:33 crc kubenswrapper[4865]: I0216 22:47:33.959913 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:33 crc kubenswrapper[4865]: I0216 22:47:33.959997 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:33 crc kubenswrapper[4865]: I0216 22:47:33.960014 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:33 crc kubenswrapper[4865]: I0216 22:47:33.960044 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:33 crc kubenswrapper[4865]: I0216 22:47:33.960065 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:33Z","lastTransitionTime":"2026-02-16T22:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:34 crc kubenswrapper[4865]: I0216 22:47:34.063543 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:34 crc kubenswrapper[4865]: I0216 22:47:34.063596 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:34 crc kubenswrapper[4865]: I0216 22:47:34.063612 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:34 crc kubenswrapper[4865]: I0216 22:47:34.063637 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:34 crc kubenswrapper[4865]: I0216 22:47:34.063653 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:34Z","lastTransitionTime":"2026-02-16T22:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:34 crc kubenswrapper[4865]: I0216 22:47:34.167032 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:34 crc kubenswrapper[4865]: I0216 22:47:34.167157 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:34 crc kubenswrapper[4865]: I0216 22:47:34.167176 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:34 crc kubenswrapper[4865]: I0216 22:47:34.167204 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:34 crc kubenswrapper[4865]: I0216 22:47:34.167223 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:34Z","lastTransitionTime":"2026-02-16T22:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:34 crc kubenswrapper[4865]: I0216 22:47:34.270644 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:34 crc kubenswrapper[4865]: I0216 22:47:34.270717 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:34 crc kubenswrapper[4865]: I0216 22:47:34.270735 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:34 crc kubenswrapper[4865]: I0216 22:47:34.270764 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:34 crc kubenswrapper[4865]: I0216 22:47:34.270782 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:34Z","lastTransitionTime":"2026-02-16T22:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:34 crc kubenswrapper[4865]: I0216 22:47:34.373886 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:34 crc kubenswrapper[4865]: I0216 22:47:34.374000 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:34 crc kubenswrapper[4865]: I0216 22:47:34.374018 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:34 crc kubenswrapper[4865]: I0216 22:47:34.374042 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:34 crc kubenswrapper[4865]: I0216 22:47:34.374060 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:34Z","lastTransitionTime":"2026-02-16T22:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:47:34 crc kubenswrapper[4865]: I0216 22:47:34.413728 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ggbcr" Feb 16 22:47:34 crc kubenswrapper[4865]: I0216 22:47:34.413802 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 22:47:34 crc kubenswrapper[4865]: I0216 22:47:34.413807 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 22:47:34 crc kubenswrapper[4865]: I0216 22:47:34.413728 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 22:47:34 crc kubenswrapper[4865]: E0216 22:47:34.413994 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ggbcr" podUID="0e0ca52e-7cb6-4d90-8d0b-4124cce13447" Feb 16 22:47:34 crc kubenswrapper[4865]: E0216 22:47:34.414113 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 22:47:34 crc kubenswrapper[4865]: E0216 22:47:34.414235 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 22:47:34 crc kubenswrapper[4865]: E0216 22:47:34.414382 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 22:47:34 crc kubenswrapper[4865]: I0216 22:47:34.444428 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 06:30:52.74735555 +0000 UTC Feb 16 22:47:34 crc kubenswrapper[4865]: I0216 22:47:34.477114 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:34 crc kubenswrapper[4865]: I0216 22:47:34.477175 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:34 crc kubenswrapper[4865]: I0216 22:47:34.477197 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:34 crc kubenswrapper[4865]: I0216 22:47:34.477223 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:34 crc kubenswrapper[4865]: I0216 22:47:34.477242 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:34Z","lastTransitionTime":"2026-02-16T22:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:34 crc kubenswrapper[4865]: I0216 22:47:34.580322 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:34 crc kubenswrapper[4865]: I0216 22:47:34.580387 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:34 crc kubenswrapper[4865]: I0216 22:47:34.580410 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:34 crc kubenswrapper[4865]: I0216 22:47:34.580436 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:34 crc kubenswrapper[4865]: I0216 22:47:34.580456 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:34Z","lastTransitionTime":"2026-02-16T22:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:34 crc kubenswrapper[4865]: I0216 22:47:34.683561 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:34 crc kubenswrapper[4865]: I0216 22:47:34.683629 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:34 crc kubenswrapper[4865]: I0216 22:47:34.683647 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:34 crc kubenswrapper[4865]: I0216 22:47:34.683679 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:34 crc kubenswrapper[4865]: I0216 22:47:34.683696 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:34Z","lastTransitionTime":"2026-02-16T22:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:34 crc kubenswrapper[4865]: I0216 22:47:34.786794 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:34 crc kubenswrapper[4865]: I0216 22:47:34.786847 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:34 crc kubenswrapper[4865]: I0216 22:47:34.786864 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:34 crc kubenswrapper[4865]: I0216 22:47:34.786888 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:34 crc kubenswrapper[4865]: I0216 22:47:34.786906 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:34Z","lastTransitionTime":"2026-02-16T22:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:34 crc kubenswrapper[4865]: I0216 22:47:34.890578 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:34 crc kubenswrapper[4865]: I0216 22:47:34.890689 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:34 crc kubenswrapper[4865]: I0216 22:47:34.890713 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:34 crc kubenswrapper[4865]: I0216 22:47:34.890743 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:34 crc kubenswrapper[4865]: I0216 22:47:34.890766 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:34Z","lastTransitionTime":"2026-02-16T22:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:34 crc kubenswrapper[4865]: I0216 22:47:34.994902 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:34 crc kubenswrapper[4865]: I0216 22:47:34.994965 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:34 crc kubenswrapper[4865]: I0216 22:47:34.994984 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:34 crc kubenswrapper[4865]: I0216 22:47:34.995011 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:34 crc kubenswrapper[4865]: I0216 22:47:34.995028 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:34Z","lastTransitionTime":"2026-02-16T22:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:35 crc kubenswrapper[4865]: I0216 22:47:35.097951 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:35 crc kubenswrapper[4865]: I0216 22:47:35.098019 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:35 crc kubenswrapper[4865]: I0216 22:47:35.098038 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:35 crc kubenswrapper[4865]: I0216 22:47:35.098067 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:35 crc kubenswrapper[4865]: I0216 22:47:35.098086 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:35Z","lastTransitionTime":"2026-02-16T22:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:35 crc kubenswrapper[4865]: I0216 22:47:35.202489 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:35 crc kubenswrapper[4865]: I0216 22:47:35.202560 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:35 crc kubenswrapper[4865]: I0216 22:47:35.202580 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:35 crc kubenswrapper[4865]: I0216 22:47:35.202608 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:35 crc kubenswrapper[4865]: I0216 22:47:35.202626 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:35Z","lastTransitionTime":"2026-02-16T22:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:35 crc kubenswrapper[4865]: I0216 22:47:35.305438 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:35 crc kubenswrapper[4865]: I0216 22:47:35.305504 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:35 crc kubenswrapper[4865]: I0216 22:47:35.305521 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:35 crc kubenswrapper[4865]: I0216 22:47:35.305555 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:35 crc kubenswrapper[4865]: I0216 22:47:35.305573 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:35Z","lastTransitionTime":"2026-02-16T22:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:35 crc kubenswrapper[4865]: I0216 22:47:35.409326 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:35 crc kubenswrapper[4865]: I0216 22:47:35.409405 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:35 crc kubenswrapper[4865]: I0216 22:47:35.409433 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:35 crc kubenswrapper[4865]: I0216 22:47:35.409463 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:35 crc kubenswrapper[4865]: I0216 22:47:35.409720 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:35Z","lastTransitionTime":"2026-02-16T22:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:35 crc kubenswrapper[4865]: I0216 22:47:35.434919 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 16 22:47:35 crc kubenswrapper[4865]: I0216 22:47:35.445115 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 19:07:28.305429959 +0000 UTC Feb 16 22:47:35 crc kubenswrapper[4865]: I0216 22:47:35.512653 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:35 crc kubenswrapper[4865]: I0216 22:47:35.512763 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:35 crc kubenswrapper[4865]: I0216 22:47:35.512816 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:35 crc kubenswrapper[4865]: I0216 22:47:35.512847 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:35 crc kubenswrapper[4865]: I0216 22:47:35.512866 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:35Z","lastTransitionTime":"2026-02-16T22:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:35 crc kubenswrapper[4865]: I0216 22:47:35.615875 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:35 crc kubenswrapper[4865]: I0216 22:47:35.615945 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:35 crc kubenswrapper[4865]: I0216 22:47:35.615958 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:35 crc kubenswrapper[4865]: I0216 22:47:35.615978 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:35 crc kubenswrapper[4865]: I0216 22:47:35.616009 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:35Z","lastTransitionTime":"2026-02-16T22:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:35 crc kubenswrapper[4865]: I0216 22:47:35.718833 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:35 crc kubenswrapper[4865]: I0216 22:47:35.718914 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:35 crc kubenswrapper[4865]: I0216 22:47:35.718942 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:35 crc kubenswrapper[4865]: I0216 22:47:35.718971 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:35 crc kubenswrapper[4865]: I0216 22:47:35.718990 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:35Z","lastTransitionTime":"2026-02-16T22:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:35 crc kubenswrapper[4865]: I0216 22:47:35.822177 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:35 crc kubenswrapper[4865]: I0216 22:47:35.822241 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:35 crc kubenswrapper[4865]: I0216 22:47:35.822258 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:35 crc kubenswrapper[4865]: I0216 22:47:35.822313 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:35 crc kubenswrapper[4865]: I0216 22:47:35.822336 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:35Z","lastTransitionTime":"2026-02-16T22:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:35 crc kubenswrapper[4865]: I0216 22:47:35.925518 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:35 crc kubenswrapper[4865]: I0216 22:47:35.925575 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:35 crc kubenswrapper[4865]: I0216 22:47:35.925592 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:35 crc kubenswrapper[4865]: I0216 22:47:35.925620 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:35 crc kubenswrapper[4865]: I0216 22:47:35.925639 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:35Z","lastTransitionTime":"2026-02-16T22:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:36 crc kubenswrapper[4865]: I0216 22:47:36.028090 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:36 crc kubenswrapper[4865]: I0216 22:47:36.028150 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:36 crc kubenswrapper[4865]: I0216 22:47:36.028168 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:36 crc kubenswrapper[4865]: I0216 22:47:36.028194 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:36 crc kubenswrapper[4865]: I0216 22:47:36.028213 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:36Z","lastTransitionTime":"2026-02-16T22:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:36 crc kubenswrapper[4865]: I0216 22:47:36.131116 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:36 crc kubenswrapper[4865]: I0216 22:47:36.131164 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:36 crc kubenswrapper[4865]: I0216 22:47:36.131174 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:36 crc kubenswrapper[4865]: I0216 22:47:36.131195 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:36 crc kubenswrapper[4865]: I0216 22:47:36.131209 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:36Z","lastTransitionTime":"2026-02-16T22:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:36 crc kubenswrapper[4865]: I0216 22:47:36.234215 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:36 crc kubenswrapper[4865]: I0216 22:47:36.234622 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:36 crc kubenswrapper[4865]: I0216 22:47:36.234759 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:36 crc kubenswrapper[4865]: I0216 22:47:36.234941 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:36 crc kubenswrapper[4865]: I0216 22:47:36.235082 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:36Z","lastTransitionTime":"2026-02-16T22:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:36 crc kubenswrapper[4865]: I0216 22:47:36.338465 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:36 crc kubenswrapper[4865]: I0216 22:47:36.338537 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:36 crc kubenswrapper[4865]: I0216 22:47:36.338555 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:36 crc kubenswrapper[4865]: I0216 22:47:36.338580 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:36 crc kubenswrapper[4865]: I0216 22:47:36.338597 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:36Z","lastTransitionTime":"2026-02-16T22:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:47:36 crc kubenswrapper[4865]: I0216 22:47:36.413834 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 22:47:36 crc kubenswrapper[4865]: I0216 22:47:36.413900 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 22:47:36 crc kubenswrapper[4865]: I0216 22:47:36.413928 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-ggbcr" Feb 16 22:47:36 crc kubenswrapper[4865]: E0216 22:47:36.413972 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 22:47:36 crc kubenswrapper[4865]: E0216 22:47:36.414053 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 22:47:36 crc kubenswrapper[4865]: I0216 22:47:36.414106 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 22:47:36 crc kubenswrapper[4865]: E0216 22:47:36.414149 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ggbcr" podUID="0e0ca52e-7cb6-4d90-8d0b-4124cce13447" Feb 16 22:47:36 crc kubenswrapper[4865]: E0216 22:47:36.414211 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 22:47:36 crc kubenswrapper[4865]: I0216 22:47:36.441050 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:36 crc kubenswrapper[4865]: I0216 22:47:36.441106 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:36 crc kubenswrapper[4865]: I0216 22:47:36.441117 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:36 crc kubenswrapper[4865]: I0216 22:47:36.441137 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:36 crc kubenswrapper[4865]: I0216 22:47:36.441148 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:36Z","lastTransitionTime":"2026-02-16T22:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:36 crc kubenswrapper[4865]: I0216 22:47:36.445525 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 22:22:12.922561992 +0000 UTC Feb 16 22:47:36 crc kubenswrapper[4865]: I0216 22:47:36.544091 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:36 crc kubenswrapper[4865]: I0216 22:47:36.544184 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:36 crc kubenswrapper[4865]: I0216 22:47:36.544205 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:36 crc kubenswrapper[4865]: I0216 22:47:36.544235 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:36 crc kubenswrapper[4865]: I0216 22:47:36.544254 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:36Z","lastTransitionTime":"2026-02-16T22:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:36 crc kubenswrapper[4865]: I0216 22:47:36.648085 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:36 crc kubenswrapper[4865]: I0216 22:47:36.648170 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:36 crc kubenswrapper[4865]: I0216 22:47:36.648257 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:36 crc kubenswrapper[4865]: I0216 22:47:36.648368 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:36 crc kubenswrapper[4865]: I0216 22:47:36.648435 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:36Z","lastTransitionTime":"2026-02-16T22:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:36 crc kubenswrapper[4865]: I0216 22:47:36.751830 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:36 crc kubenswrapper[4865]: I0216 22:47:36.751901 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:36 crc kubenswrapper[4865]: I0216 22:47:36.751920 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:36 crc kubenswrapper[4865]: I0216 22:47:36.751948 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:36 crc kubenswrapper[4865]: I0216 22:47:36.751968 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:36Z","lastTransitionTime":"2026-02-16T22:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:36 crc kubenswrapper[4865]: I0216 22:47:36.855377 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:36 crc kubenswrapper[4865]: I0216 22:47:36.855435 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:36 crc kubenswrapper[4865]: I0216 22:47:36.855452 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:36 crc kubenswrapper[4865]: I0216 22:47:36.855480 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:36 crc kubenswrapper[4865]: I0216 22:47:36.855498 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:36Z","lastTransitionTime":"2026-02-16T22:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:36 crc kubenswrapper[4865]: I0216 22:47:36.959607 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:36 crc kubenswrapper[4865]: I0216 22:47:36.959656 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:36 crc kubenswrapper[4865]: I0216 22:47:36.959665 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:36 crc kubenswrapper[4865]: I0216 22:47:36.959682 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:36 crc kubenswrapper[4865]: I0216 22:47:36.959692 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:36Z","lastTransitionTime":"2026-02-16T22:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:37 crc kubenswrapper[4865]: I0216 22:47:37.063427 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:37 crc kubenswrapper[4865]: I0216 22:47:37.063486 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:37 crc kubenswrapper[4865]: I0216 22:47:37.063499 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:37 crc kubenswrapper[4865]: I0216 22:47:37.063520 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:37 crc kubenswrapper[4865]: I0216 22:47:37.063533 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:37Z","lastTransitionTime":"2026-02-16T22:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:37 crc kubenswrapper[4865]: I0216 22:47:37.166888 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:37 crc kubenswrapper[4865]: I0216 22:47:37.166946 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:37 crc kubenswrapper[4865]: I0216 22:47:37.166962 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:37 crc kubenswrapper[4865]: I0216 22:47:37.166985 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:37 crc kubenswrapper[4865]: I0216 22:47:37.167001 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:37Z","lastTransitionTime":"2026-02-16T22:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:37 crc kubenswrapper[4865]: I0216 22:47:37.269337 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:37 crc kubenswrapper[4865]: I0216 22:47:37.269418 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:37 crc kubenswrapper[4865]: I0216 22:47:37.269444 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:37 crc kubenswrapper[4865]: I0216 22:47:37.269474 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:37 crc kubenswrapper[4865]: I0216 22:47:37.269493 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:37Z","lastTransitionTime":"2026-02-16T22:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:37 crc kubenswrapper[4865]: I0216 22:47:37.372057 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:37 crc kubenswrapper[4865]: I0216 22:47:37.372102 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:37 crc kubenswrapper[4865]: I0216 22:47:37.372113 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:37 crc kubenswrapper[4865]: I0216 22:47:37.372128 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:37 crc kubenswrapper[4865]: I0216 22:47:37.372141 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:37Z","lastTransitionTime":"2026-02-16T22:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:37 crc kubenswrapper[4865]: I0216 22:47:37.446048 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 11:22:25.946344721 +0000 UTC Feb 16 22:47:37 crc kubenswrapper[4865]: I0216 22:47:37.475471 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:37 crc kubenswrapper[4865]: I0216 22:47:37.475506 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:37 crc kubenswrapper[4865]: I0216 22:47:37.475514 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:37 crc kubenswrapper[4865]: I0216 22:47:37.475529 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:37 crc kubenswrapper[4865]: I0216 22:47:37.475541 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:37Z","lastTransitionTime":"2026-02-16T22:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:37 crc kubenswrapper[4865]: I0216 22:47:37.578316 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:37 crc kubenswrapper[4865]: I0216 22:47:37.578386 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:37 crc kubenswrapper[4865]: I0216 22:47:37.578413 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:37 crc kubenswrapper[4865]: I0216 22:47:37.578510 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:37 crc kubenswrapper[4865]: I0216 22:47:37.578542 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:37Z","lastTransitionTime":"2026-02-16T22:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:37 crc kubenswrapper[4865]: I0216 22:47:37.681629 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:37 crc kubenswrapper[4865]: I0216 22:47:37.681688 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:37 crc kubenswrapper[4865]: I0216 22:47:37.681705 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:37 crc kubenswrapper[4865]: I0216 22:47:37.681730 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:37 crc kubenswrapper[4865]: I0216 22:47:37.681750 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:37Z","lastTransitionTime":"2026-02-16T22:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:37 crc kubenswrapper[4865]: I0216 22:47:37.785010 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:37 crc kubenswrapper[4865]: I0216 22:47:37.785088 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:37 crc kubenswrapper[4865]: I0216 22:47:37.785110 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:37 crc kubenswrapper[4865]: I0216 22:47:37.785146 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:37 crc kubenswrapper[4865]: I0216 22:47:37.785169 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:37Z","lastTransitionTime":"2026-02-16T22:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:37 crc kubenswrapper[4865]: I0216 22:47:37.887987 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:37 crc kubenswrapper[4865]: I0216 22:47:37.888036 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:37 crc kubenswrapper[4865]: I0216 22:47:37.888047 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:37 crc kubenswrapper[4865]: I0216 22:47:37.888067 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:37 crc kubenswrapper[4865]: I0216 22:47:37.888082 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:37Z","lastTransitionTime":"2026-02-16T22:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:37 crc kubenswrapper[4865]: I0216 22:47:37.990646 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:37 crc kubenswrapper[4865]: I0216 22:47:37.990711 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:37 crc kubenswrapper[4865]: I0216 22:47:37.990728 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:37 crc kubenswrapper[4865]: I0216 22:47:37.990761 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:37 crc kubenswrapper[4865]: I0216 22:47:37.990782 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:37Z","lastTransitionTime":"2026-02-16T22:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:38 crc kubenswrapper[4865]: I0216 22:47:38.093515 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:38 crc kubenswrapper[4865]: I0216 22:47:38.093581 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:38 crc kubenswrapper[4865]: I0216 22:47:38.093601 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:38 crc kubenswrapper[4865]: I0216 22:47:38.093629 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:38 crc kubenswrapper[4865]: I0216 22:47:38.093646 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:38Z","lastTransitionTime":"2026-02-16T22:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:38 crc kubenswrapper[4865]: I0216 22:47:38.196545 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:38 crc kubenswrapper[4865]: I0216 22:47:38.196645 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:38 crc kubenswrapper[4865]: I0216 22:47:38.196665 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:38 crc kubenswrapper[4865]: I0216 22:47:38.196693 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:38 crc kubenswrapper[4865]: I0216 22:47:38.196712 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:38Z","lastTransitionTime":"2026-02-16T22:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:38 crc kubenswrapper[4865]: I0216 22:47:38.300320 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:38 crc kubenswrapper[4865]: I0216 22:47:38.300382 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:38 crc kubenswrapper[4865]: I0216 22:47:38.300399 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:38 crc kubenswrapper[4865]: I0216 22:47:38.300424 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:38 crc kubenswrapper[4865]: I0216 22:47:38.300443 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:38Z","lastTransitionTime":"2026-02-16T22:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:38 crc kubenswrapper[4865]: I0216 22:47:38.403317 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:38 crc kubenswrapper[4865]: I0216 22:47:38.403362 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:38 crc kubenswrapper[4865]: I0216 22:47:38.403374 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:38 crc kubenswrapper[4865]: I0216 22:47:38.403413 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:38 crc kubenswrapper[4865]: I0216 22:47:38.403426 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:38Z","lastTransitionTime":"2026-02-16T22:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:47:38 crc kubenswrapper[4865]: I0216 22:47:38.413893 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ggbcr" Feb 16 22:47:38 crc kubenswrapper[4865]: I0216 22:47:38.413942 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 22:47:38 crc kubenswrapper[4865]: I0216 22:47:38.413967 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 22:47:38 crc kubenswrapper[4865]: E0216 22:47:38.414029 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ggbcr" podUID="0e0ca52e-7cb6-4d90-8d0b-4124cce13447" Feb 16 22:47:38 crc kubenswrapper[4865]: I0216 22:47:38.414100 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 22:47:38 crc kubenswrapper[4865]: E0216 22:47:38.414349 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 22:47:38 crc kubenswrapper[4865]: E0216 22:47:38.414385 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 22:47:38 crc kubenswrapper[4865]: E0216 22:47:38.414604 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 22:47:38 crc kubenswrapper[4865]: I0216 22:47:38.415788 4865 scope.go:117] "RemoveContainer" containerID="be9325937ac87ec492489ef34007a1e234ff7037a0f05d40bd102a85a86cc3c4" Feb 16 22:47:38 crc kubenswrapper[4865]: E0216 22:47:38.416045 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-v9gjl_openshift-ovn-kubernetes(2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9)\"" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" podUID="2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" Feb 16 22:47:38 crc kubenswrapper[4865]: I0216 22:47:38.446612 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 05:45:32.296403294 +0000 UTC Feb 16 22:47:38 crc kubenswrapper[4865]: I0216 22:47:38.476939 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:38 crc kubenswrapper[4865]: I0216 22:47:38.476999 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:38 crc kubenswrapper[4865]: I0216 22:47:38.477016 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:38 
crc kubenswrapper[4865]: I0216 22:47:38.477041 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:38 crc kubenswrapper[4865]: I0216 22:47:38.477059 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:38Z","lastTransitionTime":"2026-02-16T22:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:47:38 crc kubenswrapper[4865]: E0216 22:47:38.497115 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:47:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:47:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:47:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:38Z\\\",\\\"message\\\":\\\"kubelet has 
sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:47:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff54
7002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\
"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cd
fc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"nam
es\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8882e2e1-623c-454f-a6ef-195b25a9cb95\\\",\\\"systemUUID\\\":\\\"e2a2196d-095f-444c-b467-b5377cee59c0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:38Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:38 crc kubenswrapper[4865]: I0216 22:47:38.502933 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:38 crc kubenswrapper[4865]: I0216 22:47:38.502998 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:38 crc kubenswrapper[4865]: I0216 22:47:38.503022 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:38 crc kubenswrapper[4865]: I0216 22:47:38.503049 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:38 crc kubenswrapper[4865]: I0216 22:47:38.503070 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:38Z","lastTransitionTime":"2026-02-16T22:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:38 crc kubenswrapper[4865]: E0216 22:47:38.524210 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:47:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:47:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:47:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:47:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8882e2e1-623c-454f-a6ef-195b25a9cb95\\\",\\\"systemUUID\\\":\\\"e2a2196d-095f-444c-b467-b5377cee59c0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:38Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:38 crc kubenswrapper[4865]: I0216 22:47:38.562107 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:38 crc kubenswrapper[4865]: I0216 22:47:38.562151 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:38 crc kubenswrapper[4865]: I0216 22:47:38.562167 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:38 crc kubenswrapper[4865]: I0216 22:47:38.562184 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:38 crc kubenswrapper[4865]: I0216 22:47:38.562194 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:38Z","lastTransitionTime":"2026-02-16T22:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:38 crc kubenswrapper[4865]: E0216 22:47:38.583044 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:47:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:47:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:47:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:47:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8882e2e1-623c-454f-a6ef-195b25a9cb95\\\",\\\"systemUUID\\\":\\\"e2a2196d-095f-444c-b467-b5377cee59c0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:38Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:38 crc kubenswrapper[4865]: I0216 22:47:38.588102 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:38 crc kubenswrapper[4865]: I0216 22:47:38.588126 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:38 crc kubenswrapper[4865]: I0216 22:47:38.588136 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:38 crc kubenswrapper[4865]: I0216 22:47:38.588152 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:38 crc kubenswrapper[4865]: I0216 22:47:38.588164 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:38Z","lastTransitionTime":"2026-02-16T22:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:38 crc kubenswrapper[4865]: E0216 22:47:38.602351 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:47:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:47:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:47:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:47:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[… identical image list omitted; same as the preceding status patch …],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8882e2e1-623c-454f-a6ef-195b25a9cb95\\\",\\\"systemUUID\\\":\\\"e2a2196d-095f-444c-b467-b5377cee59c0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:38Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:38 crc kubenswrapper[4865]: I0216 22:47:38.607860 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:38 crc kubenswrapper[4865]: I0216 22:47:38.607919 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:38 crc kubenswrapper[4865]: I0216 22:47:38.607932 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:38 crc kubenswrapper[4865]: I0216 22:47:38.607946 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:38 crc kubenswrapper[4865]: I0216 22:47:38.607955 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:38Z","lastTransitionTime":"2026-02-16T22:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:38 crc kubenswrapper[4865]: E0216 22:47:38.626113 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:47:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:47:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:47:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:47:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[… identical image list omitted; same as the preceding status patch …],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8882e2e1-623c-454f-a6ef-195b25a9cb95\\\",\\\"systemUUID\\\":\\\"e2a2196d-095f-444c-b467-b5377cee59c0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:38Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:38 crc kubenswrapper[4865]: E0216 22:47:38.626242 4865 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 22:47:38 crc kubenswrapper[4865]: I0216 22:47:38.628022 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:38 crc kubenswrapper[4865]: I0216 22:47:38.628074 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:38 crc kubenswrapper[4865]: I0216 22:47:38.628095 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:38 crc kubenswrapper[4865]: I0216 22:47:38.628124 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:38 crc kubenswrapper[4865]: I0216 22:47:38.628144 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:38Z","lastTransitionTime":"2026-02-16T22:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:38 crc kubenswrapper[4865]: I0216 22:47:38.731424 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:38 crc kubenswrapper[4865]: I0216 22:47:38.731472 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:38 crc kubenswrapper[4865]: I0216 22:47:38.731487 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:38 crc kubenswrapper[4865]: I0216 22:47:38.731506 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:38 crc kubenswrapper[4865]: I0216 22:47:38.731518 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:38Z","lastTransitionTime":"2026-02-16T22:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:38 crc kubenswrapper[4865]: I0216 22:47:38.763367 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0e0ca52e-7cb6-4d90-8d0b-4124cce13447-metrics-certs\") pod \"network-metrics-daemon-ggbcr\" (UID: \"0e0ca52e-7cb6-4d90-8d0b-4124cce13447\") " pod="openshift-multus/network-metrics-daemon-ggbcr" Feb 16 22:47:38 crc kubenswrapper[4865]: E0216 22:47:38.763585 4865 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 22:47:38 crc kubenswrapper[4865]: E0216 22:47:38.763698 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0e0ca52e-7cb6-4d90-8d0b-4124cce13447-metrics-certs podName:0e0ca52e-7cb6-4d90-8d0b-4124cce13447 nodeName:}" failed. No retries permitted until 2026-02-16 22:48:42.763675548 +0000 UTC m=+163.087382589 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0e0ca52e-7cb6-4d90-8d0b-4124cce13447-metrics-certs") pod "network-metrics-daemon-ggbcr" (UID: "0e0ca52e-7cb6-4d90-8d0b-4124cce13447") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 22:47:38 crc kubenswrapper[4865]: I0216 22:47:38.834160 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:38 crc kubenswrapper[4865]: I0216 22:47:38.834219 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:38 crc kubenswrapper[4865]: I0216 22:47:38.834236 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:38 crc kubenswrapper[4865]: I0216 22:47:38.834260 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:38 crc kubenswrapper[4865]: I0216 22:47:38.834300 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:38Z","lastTransitionTime":"2026-02-16T22:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:38 crc kubenswrapper[4865]: I0216 22:47:38.936634 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:38 crc kubenswrapper[4865]: I0216 22:47:38.936688 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:38 crc kubenswrapper[4865]: I0216 22:47:38.936697 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:38 crc kubenswrapper[4865]: I0216 22:47:38.936712 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:38 crc kubenswrapper[4865]: I0216 22:47:38.936721 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:38Z","lastTransitionTime":"2026-02-16T22:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:39 crc kubenswrapper[4865]: I0216 22:47:39.039452 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:39 crc kubenswrapper[4865]: I0216 22:47:39.039548 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:39 crc kubenswrapper[4865]: I0216 22:47:39.039567 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:39 crc kubenswrapper[4865]: I0216 22:47:39.039591 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:39 crc kubenswrapper[4865]: I0216 22:47:39.039610 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:39Z","lastTransitionTime":"2026-02-16T22:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:39 crc kubenswrapper[4865]: I0216 22:47:39.142830 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:39 crc kubenswrapper[4865]: I0216 22:47:39.142902 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:39 crc kubenswrapper[4865]: I0216 22:47:39.142917 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:39 crc kubenswrapper[4865]: I0216 22:47:39.142932 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:39 crc kubenswrapper[4865]: I0216 22:47:39.142942 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:39Z","lastTransitionTime":"2026-02-16T22:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:39 crc kubenswrapper[4865]: I0216 22:47:39.245487 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:39 crc kubenswrapper[4865]: I0216 22:47:39.245581 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:39 crc kubenswrapper[4865]: I0216 22:47:39.245608 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:39 crc kubenswrapper[4865]: I0216 22:47:39.245677 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:39 crc kubenswrapper[4865]: I0216 22:47:39.245700 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:39Z","lastTransitionTime":"2026-02-16T22:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:39 crc kubenswrapper[4865]: I0216 22:47:39.348965 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:39 crc kubenswrapper[4865]: I0216 22:47:39.349011 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:39 crc kubenswrapper[4865]: I0216 22:47:39.349021 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:39 crc kubenswrapper[4865]: I0216 22:47:39.349041 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:39 crc kubenswrapper[4865]: I0216 22:47:39.349053 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:39Z","lastTransitionTime":"2026-02-16T22:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:39 crc kubenswrapper[4865]: I0216 22:47:39.447018 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 09:45:25.972854984 +0000 UTC Feb 16 22:47:39 crc kubenswrapper[4865]: I0216 22:47:39.452861 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:39 crc kubenswrapper[4865]: I0216 22:47:39.452887 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:39 crc kubenswrapper[4865]: I0216 22:47:39.452895 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:39 crc kubenswrapper[4865]: I0216 22:47:39.452910 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:39 crc kubenswrapper[4865]: I0216 22:47:39.452919 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:39Z","lastTransitionTime":"2026-02-16T22:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:39 crc kubenswrapper[4865]: I0216 22:47:39.555261 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:39 crc kubenswrapper[4865]: I0216 22:47:39.555338 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:39 crc kubenswrapper[4865]: I0216 22:47:39.555351 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:39 crc kubenswrapper[4865]: I0216 22:47:39.555369 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:39 crc kubenswrapper[4865]: I0216 22:47:39.555380 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:39Z","lastTransitionTime":"2026-02-16T22:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:39 crc kubenswrapper[4865]: I0216 22:47:39.658796 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:39 crc kubenswrapper[4865]: I0216 22:47:39.658835 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:39 crc kubenswrapper[4865]: I0216 22:47:39.658848 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:39 crc kubenswrapper[4865]: I0216 22:47:39.658866 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:39 crc kubenswrapper[4865]: I0216 22:47:39.658879 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:39Z","lastTransitionTime":"2026-02-16T22:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:39 crc kubenswrapper[4865]: I0216 22:47:39.761560 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:39 crc kubenswrapper[4865]: I0216 22:47:39.761610 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:39 crc kubenswrapper[4865]: I0216 22:47:39.761623 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:39 crc kubenswrapper[4865]: I0216 22:47:39.761641 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:39 crc kubenswrapper[4865]: I0216 22:47:39.761652 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:39Z","lastTransitionTime":"2026-02-16T22:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:39 crc kubenswrapper[4865]: I0216 22:47:39.864211 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:39 crc kubenswrapper[4865]: I0216 22:47:39.864253 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:39 crc kubenswrapper[4865]: I0216 22:47:39.864301 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:39 crc kubenswrapper[4865]: I0216 22:47:39.864317 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:39 crc kubenswrapper[4865]: I0216 22:47:39.864328 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:39Z","lastTransitionTime":"2026-02-16T22:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:39 crc kubenswrapper[4865]: I0216 22:47:39.966705 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:39 crc kubenswrapper[4865]: I0216 22:47:39.966767 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:39 crc kubenswrapper[4865]: I0216 22:47:39.966785 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:39 crc kubenswrapper[4865]: I0216 22:47:39.966809 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:39 crc kubenswrapper[4865]: I0216 22:47:39.966826 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:39Z","lastTransitionTime":"2026-02-16T22:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:40 crc kubenswrapper[4865]: I0216 22:47:40.069825 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:40 crc kubenswrapper[4865]: I0216 22:47:40.069880 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:40 crc kubenswrapper[4865]: I0216 22:47:40.069898 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:40 crc kubenswrapper[4865]: I0216 22:47:40.069925 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:40 crc kubenswrapper[4865]: I0216 22:47:40.069942 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:40Z","lastTransitionTime":"2026-02-16T22:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:40 crc kubenswrapper[4865]: I0216 22:47:40.172569 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:40 crc kubenswrapper[4865]: I0216 22:47:40.172661 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:40 crc kubenswrapper[4865]: I0216 22:47:40.172683 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:40 crc kubenswrapper[4865]: I0216 22:47:40.172720 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:40 crc kubenswrapper[4865]: I0216 22:47:40.172738 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:40Z","lastTransitionTime":"2026-02-16T22:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:40 crc kubenswrapper[4865]: I0216 22:47:40.275922 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:40 crc kubenswrapper[4865]: I0216 22:47:40.275958 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:40 crc kubenswrapper[4865]: I0216 22:47:40.275967 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:40 crc kubenswrapper[4865]: I0216 22:47:40.275981 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:40 crc kubenswrapper[4865]: I0216 22:47:40.275993 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:40Z","lastTransitionTime":"2026-02-16T22:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:40 crc kubenswrapper[4865]: I0216 22:47:40.378019 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:40 crc kubenswrapper[4865]: I0216 22:47:40.378084 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:40 crc kubenswrapper[4865]: I0216 22:47:40.378104 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:40 crc kubenswrapper[4865]: I0216 22:47:40.378134 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:40 crc kubenswrapper[4865]: I0216 22:47:40.378154 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:40Z","lastTransitionTime":"2026-02-16T22:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:47:40 crc kubenswrapper[4865]: I0216 22:47:40.414483 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 22:47:40 crc kubenswrapper[4865]: I0216 22:47:40.414537 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ggbcr" Feb 16 22:47:40 crc kubenswrapper[4865]: I0216 22:47:40.414545 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 22:47:40 crc kubenswrapper[4865]: I0216 22:47:40.414490 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 22:47:40 crc kubenswrapper[4865]: E0216 22:47:40.414689 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 22:47:40 crc kubenswrapper[4865]: E0216 22:47:40.414794 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 22:47:40 crc kubenswrapper[4865]: E0216 22:47:40.414929 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ggbcr" podUID="0e0ca52e-7cb6-4d90-8d0b-4124cce13447" Feb 16 22:47:40 crc kubenswrapper[4865]: E0216 22:47:40.415090 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 22:47:40 crc kubenswrapper[4865]: I0216 22:47:40.434941 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5ed4f9f-4e35-4b99-a523-d9cd6f90b1fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c3c8e1511bfaab96c1e33d3e1d1dc227344cb92c64170dc40eef40341f9282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-cert
s\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef4fbb630ee354131544dade7633424757a1a1b885fec2e271da25984dbec3ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f74aeafd4eb42c1fa802e03dd3df19fc25c3e5546e453008c167955d336ca2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca218b91fbbe3033c021358958b45efb93f3c1875a2d18446d682ad80ff55122\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\
\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:40Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:40 crc kubenswrapper[4865]: I0216 22:47:40.448164 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 20:50:05.787887547 +0000 UTC Feb 16 22:47:40 crc kubenswrapper[4865]: I0216 22:47:40.453766 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5d9c6929ae494c038ff488c2fee47aad62d31a677069fab576505bc1c56c3ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd10e8581ae8a5ec147d964363cf296c1ea48987e27edfaaf23a4f2cc3b8806c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:40Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:40 crc kubenswrapper[4865]: I0216 22:47:40.466669 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p2bwl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fc94743-41ce-4311-b0a7-d24aec69e9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afb2eef5907b2fb1f43b59f3b8afb68253d49f3a83f0cdc5cd10df0e45962156\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mntqf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p2bwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:40Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:40 crc kubenswrapper[4865]: I0216 22:47:40.481006 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:40 crc kubenswrapper[4865]: I0216 22:47:40.481144 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:40 crc kubenswrapper[4865]: I0216 22:47:40.481172 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:40 crc kubenswrapper[4865]: I0216 22:47:40.481201 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:40 crc kubenswrapper[4865]: I0216 22:47:40.481231 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:40Z","lastTransitionTime":"2026-02-16T22:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:40 crc kubenswrapper[4865]: I0216 22:47:40.483603 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67de42bb03576bab461ec24db86f0fb38229d2b303d174c1892468bcfd20a1ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:40Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:40 crc kubenswrapper[4865]: I0216 22:47:40.494862 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af5ee041-5763-4a28-9d12-7ba21bbb9dbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5377362d269a2fbd24e93ddf0ce2803daadff72dd5b6926424e2d818448eb907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c0be88c4425b343893860e323d0e9c1a661e3cd0cadad6714da67ebadf57bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nszz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7sl6f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-16T22:47:40Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:40 crc kubenswrapper[4865]: I0216 22:47:40.519864 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x4rgl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa218e65-d6b1-42a7-8eb0-a0e15f54f7b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0091174cc072ed0bc0bfa1a1501b4fa2b801585525dae159b591213c74ddbf5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://240fca5a6169674d04d04924dbb1859d9cfad4ebaf3bcb15b685d1dcb52d6879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://240fca5a6169674d04d04924dbb1859d9cfad4ebaf3bcb15b685d1dcb52d6879\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ba2a6ff3b821f9bd20c15af6000c4b04b7382cca4bddba49c3cd3a95bf18661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://5ba2a6ff3b821f9bd20c15af6000c4b04b7382cca4bddba49c3cd3a95bf18661\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e050e591eac20bf9ccc57f3c4eac82e3a420e9e850ad4057ce69c236a51dd5ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e050e591eac20bf9ccc57f3c4eac82e3a420e9e850ad4057ce69c236a51dd5ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1df6fbcf30225717389e253f415a98dcb258103cf591c6802daeb657018386ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1df6fbcf30225717389e253f415a98dcb258103cf591c6802daeb657018386ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9e249b0c4127dfa42d0aeb4663cec7ecc73cec31a922fc17a0c772cce72c017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9e249b0c4127dfa42d0aeb4663cec7ecc73cec31a922fc17a0c772cce72c017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de26abfeb26481d43117471fa3512ef69ab782712371782639c24c8232786478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de26abfeb26481d43117471fa3512ef69ab782712371782639c24c8232786478\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9mg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":
\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x4rgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:40Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:40 crc kubenswrapper[4865]: I0216 22:47:40.549061 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f763f86f21e0ae26d6da8694383bf65bf9d2ee2cc3f6331559455f18c1a5ea12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdf0a2e5286d172d17780b0ba829a7b1365b3fcb130069d7497b207041e54d67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1d5f61857de97cd302040f99ba0cf8f6aa30fa344ee9bb5d486222bb59280f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48168de44ee347280252a85da7abba068922ce7558bd01c8e1d11af13f305dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fbb61043059110013736d3d693e6d602b199a4c62907436a4eb88aae192cd51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://decf362b373d3e8cc82caf2afe0405a61ec72f794680345ee6bd4d7acee32cd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be9325937ac87ec492489ef34007a1e234ff7037a0f05d40bd102a85a86cc3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be9325937ac87ec492489ef34007a1e234ff7037a0f05d40bd102a85a86cc3c4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T22:47:12Z\\\",\\\"message\\\":\\\"sip/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 22:47:12.285919 6904 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 22:47:12.285957 6904 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 22:47:12.285976 6904 handler.go:190] Sending *v1.EgressIP event handler 8 for 
removal\\\\nI0216 22:47:12.286002 6904 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 22:47:12.286006 6904 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 22:47:12.286039 6904 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 22:47:12.286050 6904 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 22:47:12.286058 6904 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 22:47:12.286048 6904 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 22:47:12.286081 6904 factory.go:656] Stopping watch factory\\\\nI0216 22:47:12.286112 6904 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 22:47:12.286147 6904 ovnkube.go:599] Stopped ovnkube\\\\nI0216 22:47:12.286106 6904 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 22:47:12.286077 6904 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 22:47:12.286096 6904 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI02\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T22:47:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-v9gjl_openshift-ovn-kubernetes(2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dee85f35d78a0b30018c179a53dd39cbb1b0edfa9897290cd13fdc9680461239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fffded71eee2859beb8f01894e183602492f5c41373ef0e23ee150159d01ae4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fffded71eee2859be
b8f01894e183602492f5c41373ef0e23ee150159d01ae4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxp7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-v9gjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:40Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:40 crc kubenswrapper[4865]: I0216 22:47:40.561538 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba5e20b2-c6a8-4490-bdd1-b03a9d14eee8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c16206ebdb465c684bdb96390345673597700b4166d0a59289dd67034aea952b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce861ebc96a126990d75e7c0f5da5034e3581ba979c0cda0b0ec809303e1de34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce861ebc96a126990d75e7c0f5da5034e3581ba979c0cda0b0ec809303e1de34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:40Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:40 crc kubenswrapper[4865]: I0216 22:47:40.580827 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8eb70ba-80dc-4449-b629-8f2d3462ac25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9420ab4fc2cfb723f98690b12676378560f2ca512ea2f23ea4108561bcf0a83b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d5f3b449d7f497dd16213f7a376d725ed75adaeb27449fa2005893b85cd8f9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ddcd1ac2d814341fef78f1d3dcb8d8e04d5e8506ca793c023085f0e019fc44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1932a518fb9c7c514939c9efdb9f5ef9b52fa277bf27931a2fe29f04c486141\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e74a443b63735422bd0f5ad851213f393074c4bda570e41ff102c84aa9ac2929\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T22:46:20Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0216 22:46:20.218565 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 22:46:20.218864 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 22:46:20.220590 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3068580495/tls.crt::/tmp/serving-cert-3068580495/tls.key\\\\\\\"\\\\nI0216 22:46:20.677081 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 22:46:20.691270 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 22:46:20.691373 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 22:46:20.698439 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 22:46:20.698472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 22:46:20.713488 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 22:46:20.713510 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 22:46:20.713535 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 22:46:20.713543 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 22:46:20.713547 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 22:46:20.713551 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 22:46:20.713555 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 22:46:20.713558 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 22:46:20.715166 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:14Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://083e95882332fbcae3b7e3c07a19c05645e560b7c5e92813464a5ca4e5e52a26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e53d3f26f921be607e616b7c8790e3be0605a2b94a3f7be2b62e1beaefb350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3e53d3f26f921be607e616b7c8790e3be0
605a2b94a3f7be2b62e1beaefb350\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:40Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:40 crc kubenswrapper[4865]: I0216 22:47:40.584082 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:40 crc kubenswrapper[4865]: I0216 22:47:40.584151 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:40 crc kubenswrapper[4865]: I0216 22:47:40.584175 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:40 crc kubenswrapper[4865]: I0216 22:47:40.584204 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:40 crc kubenswrapper[4865]: I0216 22:47:40.584226 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:40Z","lastTransitionTime":"2026-02-16T22:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:40 crc kubenswrapper[4865]: I0216 22:47:40.594364 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17957dad-fc4e-47f6-9008-142bf7ab12b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://416e6ac61265959d9c4895987503c390040a6d4ecdf12c0530ded6fc721bfaa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be54b2a9b6551641dce2e8e886f04d
8a9cf76399a46d6ba0533bfcf766453631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1757b170055a16e65276f524bda421fa929e62866ebe19163d6558e551f6d7ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3488da6d63933ae020077a4fb277b53e4e85bd75cac43d6c64bb962472a832f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3488da6d63933ae020077a4fb277b53e4e85bd75cac43d6c64bb962472a832f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:40Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:40 crc kubenswrapper[4865]: I0216 22:47:40.612315 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:40Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:40 crc kubenswrapper[4865]: I0216 22:47:40.625422 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rhf5k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec3dce5a-cc80-4a4a-afa1-1ce88585fe7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af23c5919f064a067e1f6ffd9bcfc669c95f789fecc58b6de768c838fff8eeb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42pjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rhf5k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:40Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:40 crc kubenswrapper[4865]: I0216 22:47:40.641317 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-shf2n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d979a250-e586-4f45-b78e-ce99dbdbe9a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c13351b5aadf20e7142d197d9ae8333bff29ca6c3f7668c93f0dd510b52eb51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df29cc453707021c661a9f53e979fcab4c05a00e4dcef9f85a727168f1e78ed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zvzm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-shf2n\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:40Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:40 crc kubenswrapper[4865]: I0216 22:47:40.655188 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ggbcr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e0ca52e-7cb6-4d90-8d0b-4124cce13447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt54\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkt54\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ggbcr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:40Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:40 crc 
kubenswrapper[4865]: I0216 22:47:40.685205 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"914f3d95-17ef-40c2-bf80-448e53a3b696\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab4b2349f8a4b72e9e725382e6ce2cf9b93cb93371720ba5414fcd73b72341de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://afa4031a038b9191a1af1b2cc5e4a51bcab1a6e1f70798430bbe376fee33c978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b495dc2ac9333d830c62028fb949d15bb449bcf006b16e0e5e78428ba57b45b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b068f989f925e48014564c0dfaba6f0601005c96870ce62e4bae58443f89f8a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad68fe303b9a7cd9a173de2ccc816d892fa8c73978c130e6a1cd3d20aa22e528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6793394dcea221e7cccfb13c16150d8d84e23f7b50e4e7cefa2fe52e867aca0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6793394dcea221e7cccfb13c16150d8d84e23f7b50e4e7cefa2fe52e867aca0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98aaf5332e1705f701b10355838c5aa257308ddbddd3cce0ded8d8f9dc39f2f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98aaf5332e1705f701b10355838c5aa257308ddbddd3cce0ded8d8f9dc39f2f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1b430fa6cb64d35dc332dcf03f7e1318262168d70dd91222558433934c2b509c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b430fa6cb64d35dc332dcf03f7e1318262168d70dd91222558433934c2b509c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:46:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:40Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:40 crc kubenswrapper[4865]: I0216 22:47:40.687927 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:40 crc kubenswrapper[4865]: I0216 22:47:40.688049 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:40 crc kubenswrapper[4865]: I0216 22:47:40.688063 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:40 crc kubenswrapper[4865]: I0216 22:47:40.688084 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:40 crc kubenswrapper[4865]: I0216 22:47:40.688096 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:40Z","lastTransitionTime":"2026-02-16T22:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:40 crc kubenswrapper[4865]: I0216 22:47:40.701695 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://580bc5f5c63d040c31c80d1c34c70711c035b80d8bd34546ba77e2477a938304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:40Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:40 crc kubenswrapper[4865]: I0216 22:47:40.718376 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:40Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:40 crc kubenswrapper[4865]: I0216 22:47:40.736081 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:40Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:40 crc kubenswrapper[4865]: I0216 22:47:40.753495 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tqmsq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"518e6107-6873-4bd2-86a6-e422763483ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T22:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74e1013d02cb01d60662d6619c25c7a40de7bd7bde7f0239007c54dba55879cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c555b648518de93a1b9ecb1bb8b232aee9016dd148db53b2b4aaf96dd23951c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T22:47:08Z\\\",\\\"message\\\":\\\"2026-02-16T22:46:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_69f8d7f9-9d98-4d01-a8d5-29073da708ad\\\\n2026-02-16T22:46:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_69f8d7f9-9d98-4d01-a8d5-29073da708ad to /host/opt/cni/bin/\\\\n2026-02-16T22:46:23Z [verbose] multus-daemon started\\\\n2026-02-16T22:46:23Z [verbose] 
Readiness Indicator file check\\\\n2026-02-16T22:47:08Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T22:46:21Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T22:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwt79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T22:46:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tqmsq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T22:47:40Z is after 2025-08-24T17:21:41Z" Feb 16 22:47:40 crc kubenswrapper[4865]: I0216 22:47:40.791022 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:40 crc kubenswrapper[4865]: I0216 22:47:40.791076 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:40 crc kubenswrapper[4865]: I0216 22:47:40.791115 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:40 crc kubenswrapper[4865]: I0216 22:47:40.791140 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:40 crc kubenswrapper[4865]: I0216 22:47:40.791157 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:40Z","lastTransitionTime":"2026-02-16T22:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:40 crc kubenswrapper[4865]: I0216 22:47:40.894233 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:40 crc kubenswrapper[4865]: I0216 22:47:40.894397 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:40 crc kubenswrapper[4865]: I0216 22:47:40.894427 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:40 crc kubenswrapper[4865]: I0216 22:47:40.894459 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:40 crc kubenswrapper[4865]: I0216 22:47:40.894477 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:40Z","lastTransitionTime":"2026-02-16T22:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:40 crc kubenswrapper[4865]: I0216 22:47:40.997575 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:40 crc kubenswrapper[4865]: I0216 22:47:40.997618 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:40 crc kubenswrapper[4865]: I0216 22:47:40.997630 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:40 crc kubenswrapper[4865]: I0216 22:47:40.997648 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:40 crc kubenswrapper[4865]: I0216 22:47:40.997658 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:40Z","lastTransitionTime":"2026-02-16T22:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:41 crc kubenswrapper[4865]: I0216 22:47:41.099890 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:41 crc kubenswrapper[4865]: I0216 22:47:41.099962 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:41 crc kubenswrapper[4865]: I0216 22:47:41.099981 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:41 crc kubenswrapper[4865]: I0216 22:47:41.100006 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:41 crc kubenswrapper[4865]: I0216 22:47:41.100024 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:41Z","lastTransitionTime":"2026-02-16T22:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:41 crc kubenswrapper[4865]: I0216 22:47:41.202082 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:41 crc kubenswrapper[4865]: I0216 22:47:41.202129 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:41 crc kubenswrapper[4865]: I0216 22:47:41.202142 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:41 crc kubenswrapper[4865]: I0216 22:47:41.202160 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:41 crc kubenswrapper[4865]: I0216 22:47:41.202171 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:41Z","lastTransitionTime":"2026-02-16T22:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:41 crc kubenswrapper[4865]: I0216 22:47:41.305002 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:41 crc kubenswrapper[4865]: I0216 22:47:41.305061 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:41 crc kubenswrapper[4865]: I0216 22:47:41.305077 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:41 crc kubenswrapper[4865]: I0216 22:47:41.305103 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:41 crc kubenswrapper[4865]: I0216 22:47:41.305119 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:41Z","lastTransitionTime":"2026-02-16T22:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:41 crc kubenswrapper[4865]: I0216 22:47:41.408190 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:41 crc kubenswrapper[4865]: I0216 22:47:41.408230 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:41 crc kubenswrapper[4865]: I0216 22:47:41.408240 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:41 crc kubenswrapper[4865]: I0216 22:47:41.408256 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:41 crc kubenswrapper[4865]: I0216 22:47:41.408266 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:41Z","lastTransitionTime":"2026-02-16T22:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:41 crc kubenswrapper[4865]: I0216 22:47:41.448960 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 04:37:43.619268792 +0000 UTC Feb 16 22:47:41 crc kubenswrapper[4865]: I0216 22:47:41.511086 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:41 crc kubenswrapper[4865]: I0216 22:47:41.511150 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:41 crc kubenswrapper[4865]: I0216 22:47:41.511169 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:41 crc kubenswrapper[4865]: I0216 22:47:41.511198 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:41 crc kubenswrapper[4865]: I0216 22:47:41.511219 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:41Z","lastTransitionTime":"2026-02-16T22:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:41 crc kubenswrapper[4865]: I0216 22:47:41.614047 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:41 crc kubenswrapper[4865]: I0216 22:47:41.614110 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:41 crc kubenswrapper[4865]: I0216 22:47:41.614126 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:41 crc kubenswrapper[4865]: I0216 22:47:41.614151 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:41 crc kubenswrapper[4865]: I0216 22:47:41.614168 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:41Z","lastTransitionTime":"2026-02-16T22:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:41 crc kubenswrapper[4865]: I0216 22:47:41.718036 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:41 crc kubenswrapper[4865]: I0216 22:47:41.718087 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:41 crc kubenswrapper[4865]: I0216 22:47:41.718098 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:41 crc kubenswrapper[4865]: I0216 22:47:41.718118 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:41 crc kubenswrapper[4865]: I0216 22:47:41.718130 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:41Z","lastTransitionTime":"2026-02-16T22:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:41 crc kubenswrapper[4865]: I0216 22:47:41.820507 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:41 crc kubenswrapper[4865]: I0216 22:47:41.820556 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:41 crc kubenswrapper[4865]: I0216 22:47:41.820566 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:41 crc kubenswrapper[4865]: I0216 22:47:41.820584 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:41 crc kubenswrapper[4865]: I0216 22:47:41.820595 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:41Z","lastTransitionTime":"2026-02-16T22:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:41 crc kubenswrapper[4865]: I0216 22:47:41.929752 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:41 crc kubenswrapper[4865]: I0216 22:47:41.930093 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:41 crc kubenswrapper[4865]: I0216 22:47:41.930265 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:41 crc kubenswrapper[4865]: I0216 22:47:41.930454 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:41 crc kubenswrapper[4865]: I0216 22:47:41.930572 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:41Z","lastTransitionTime":"2026-02-16T22:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:42 crc kubenswrapper[4865]: I0216 22:47:42.033583 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:42 crc kubenswrapper[4865]: I0216 22:47:42.033629 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:42 crc kubenswrapper[4865]: I0216 22:47:42.033639 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:42 crc kubenswrapper[4865]: I0216 22:47:42.033655 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:42 crc kubenswrapper[4865]: I0216 22:47:42.033665 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:42Z","lastTransitionTime":"2026-02-16T22:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:42 crc kubenswrapper[4865]: I0216 22:47:42.136472 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:42 crc kubenswrapper[4865]: I0216 22:47:42.136866 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:42 crc kubenswrapper[4865]: I0216 22:47:42.137057 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:42 crc kubenswrapper[4865]: I0216 22:47:42.137274 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:42 crc kubenswrapper[4865]: I0216 22:47:42.137555 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:42Z","lastTransitionTime":"2026-02-16T22:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:42 crc kubenswrapper[4865]: I0216 22:47:42.240580 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:42 crc kubenswrapper[4865]: I0216 22:47:42.240630 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:42 crc kubenswrapper[4865]: I0216 22:47:42.240640 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:42 crc kubenswrapper[4865]: I0216 22:47:42.240658 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:42 crc kubenswrapper[4865]: I0216 22:47:42.240668 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:42Z","lastTransitionTime":"2026-02-16T22:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:42 crc kubenswrapper[4865]: I0216 22:47:42.342935 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:42 crc kubenswrapper[4865]: I0216 22:47:42.343211 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:42 crc kubenswrapper[4865]: I0216 22:47:42.343316 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:42 crc kubenswrapper[4865]: I0216 22:47:42.343389 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:42 crc kubenswrapper[4865]: I0216 22:47:42.343488 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:42Z","lastTransitionTime":"2026-02-16T22:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:47:42 crc kubenswrapper[4865]: I0216 22:47:42.414326 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 22:47:42 crc kubenswrapper[4865]: I0216 22:47:42.414403 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 22:47:42 crc kubenswrapper[4865]: I0216 22:47:42.414553 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ggbcr" Feb 16 22:47:42 crc kubenswrapper[4865]: I0216 22:47:42.414609 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 22:47:42 crc kubenswrapper[4865]: E0216 22:47:42.414775 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 22:47:42 crc kubenswrapper[4865]: E0216 22:47:42.415093 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 22:47:42 crc kubenswrapper[4865]: E0216 22:47:42.415227 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ggbcr" podUID="0e0ca52e-7cb6-4d90-8d0b-4124cce13447" Feb 16 22:47:42 crc kubenswrapper[4865]: E0216 22:47:42.415478 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 22:47:42 crc kubenswrapper[4865]: I0216 22:47:42.446040 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:42 crc kubenswrapper[4865]: I0216 22:47:42.446212 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:42 crc kubenswrapper[4865]: I0216 22:47:42.446429 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:42 crc kubenswrapper[4865]: I0216 22:47:42.446515 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:42 crc kubenswrapper[4865]: I0216 22:47:42.446583 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:42Z","lastTransitionTime":"2026-02-16T22:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:42 crc kubenswrapper[4865]: I0216 22:47:42.449174 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 16:30:35.253291578 +0000 UTC Feb 16 22:47:42 crc kubenswrapper[4865]: I0216 22:47:42.549613 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:42 crc kubenswrapper[4865]: I0216 22:47:42.550001 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:42 crc kubenswrapper[4865]: I0216 22:47:42.550140 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:42 crc kubenswrapper[4865]: I0216 22:47:42.550336 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:42 crc kubenswrapper[4865]: I0216 22:47:42.550508 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:42Z","lastTransitionTime":"2026-02-16T22:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:42 crc kubenswrapper[4865]: I0216 22:47:42.653476 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:42 crc kubenswrapper[4865]: I0216 22:47:42.653838 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:42 crc kubenswrapper[4865]: I0216 22:47:42.653963 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:42 crc kubenswrapper[4865]: I0216 22:47:42.654096 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:42 crc kubenswrapper[4865]: I0216 22:47:42.654209 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:42Z","lastTransitionTime":"2026-02-16T22:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:42 crc kubenswrapper[4865]: I0216 22:47:42.757990 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:42 crc kubenswrapper[4865]: I0216 22:47:42.758057 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:42 crc kubenswrapper[4865]: I0216 22:47:42.758081 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:42 crc kubenswrapper[4865]: I0216 22:47:42.758111 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:42 crc kubenswrapper[4865]: I0216 22:47:42.758133 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:42Z","lastTransitionTime":"2026-02-16T22:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:42 crc kubenswrapper[4865]: I0216 22:47:42.861338 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:42 crc kubenswrapper[4865]: I0216 22:47:42.861397 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:42 crc kubenswrapper[4865]: I0216 22:47:42.861412 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:42 crc kubenswrapper[4865]: I0216 22:47:42.861459 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:42 crc kubenswrapper[4865]: I0216 22:47:42.861473 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:42Z","lastTransitionTime":"2026-02-16T22:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:42 crc kubenswrapper[4865]: I0216 22:47:42.964458 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:42 crc kubenswrapper[4865]: I0216 22:47:42.964796 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:42 crc kubenswrapper[4865]: I0216 22:47:42.964878 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:42 crc kubenswrapper[4865]: I0216 22:47:42.964969 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:42 crc kubenswrapper[4865]: I0216 22:47:42.965047 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:42Z","lastTransitionTime":"2026-02-16T22:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:43 crc kubenswrapper[4865]: I0216 22:47:43.068781 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:43 crc kubenswrapper[4865]: I0216 22:47:43.068867 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:43 crc kubenswrapper[4865]: I0216 22:47:43.068931 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:43 crc kubenswrapper[4865]: I0216 22:47:43.068970 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:43 crc kubenswrapper[4865]: I0216 22:47:43.068999 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:43Z","lastTransitionTime":"2026-02-16T22:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:43 crc kubenswrapper[4865]: I0216 22:47:43.171823 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:43 crc kubenswrapper[4865]: I0216 22:47:43.171913 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:43 crc kubenswrapper[4865]: I0216 22:47:43.171937 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:43 crc kubenswrapper[4865]: I0216 22:47:43.171973 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:43 crc kubenswrapper[4865]: I0216 22:47:43.171999 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:43Z","lastTransitionTime":"2026-02-16T22:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:43 crc kubenswrapper[4865]: I0216 22:47:43.275160 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:43 crc kubenswrapper[4865]: I0216 22:47:43.275223 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:43 crc kubenswrapper[4865]: I0216 22:47:43.275241 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:43 crc kubenswrapper[4865]: I0216 22:47:43.275267 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:43 crc kubenswrapper[4865]: I0216 22:47:43.275311 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:43Z","lastTransitionTime":"2026-02-16T22:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:43 crc kubenswrapper[4865]: I0216 22:47:43.378732 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:43 crc kubenswrapper[4865]: I0216 22:47:43.378801 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:43 crc kubenswrapper[4865]: I0216 22:47:43.378819 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:43 crc kubenswrapper[4865]: I0216 22:47:43.378845 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:43 crc kubenswrapper[4865]: I0216 22:47:43.378865 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:43Z","lastTransitionTime":"2026-02-16T22:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:43 crc kubenswrapper[4865]: I0216 22:47:43.449913 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 13:43:14.231192946 +0000 UTC Feb 16 22:47:43 crc kubenswrapper[4865]: I0216 22:47:43.481681 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:43 crc kubenswrapper[4865]: I0216 22:47:43.481752 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:43 crc kubenswrapper[4865]: I0216 22:47:43.481774 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:43 crc kubenswrapper[4865]: I0216 22:47:43.481805 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:43 crc kubenswrapper[4865]: I0216 22:47:43.481828 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:43Z","lastTransitionTime":"2026-02-16T22:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:43 crc kubenswrapper[4865]: I0216 22:47:43.585059 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:43 crc kubenswrapper[4865]: I0216 22:47:43.585114 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:43 crc kubenswrapper[4865]: I0216 22:47:43.585133 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:43 crc kubenswrapper[4865]: I0216 22:47:43.585158 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:43 crc kubenswrapper[4865]: I0216 22:47:43.585179 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:43Z","lastTransitionTime":"2026-02-16T22:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:43 crc kubenswrapper[4865]: I0216 22:47:43.688611 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:43 crc kubenswrapper[4865]: I0216 22:47:43.688681 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:43 crc kubenswrapper[4865]: I0216 22:47:43.688700 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:43 crc kubenswrapper[4865]: I0216 22:47:43.688725 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:43 crc kubenswrapper[4865]: I0216 22:47:43.688743 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:43Z","lastTransitionTime":"2026-02-16T22:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:43 crc kubenswrapper[4865]: I0216 22:47:43.791261 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:43 crc kubenswrapper[4865]: I0216 22:47:43.791359 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:43 crc kubenswrapper[4865]: I0216 22:47:43.791377 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:43 crc kubenswrapper[4865]: I0216 22:47:43.791404 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:43 crc kubenswrapper[4865]: I0216 22:47:43.791423 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:43Z","lastTransitionTime":"2026-02-16T22:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:43 crc kubenswrapper[4865]: I0216 22:47:43.895776 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:43 crc kubenswrapper[4865]: I0216 22:47:43.895858 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:43 crc kubenswrapper[4865]: I0216 22:47:43.895871 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:43 crc kubenswrapper[4865]: I0216 22:47:43.895892 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:43 crc kubenswrapper[4865]: I0216 22:47:43.895906 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:43Z","lastTransitionTime":"2026-02-16T22:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:43 crc kubenswrapper[4865]: I0216 22:47:43.999038 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:43 crc kubenswrapper[4865]: I0216 22:47:43.999123 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:43 crc kubenswrapper[4865]: I0216 22:47:43.999153 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:43 crc kubenswrapper[4865]: I0216 22:47:43.999190 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:43 crc kubenswrapper[4865]: I0216 22:47:43.999210 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:43Z","lastTransitionTime":"2026-02-16T22:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:44 crc kubenswrapper[4865]: I0216 22:47:44.102479 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:44 crc kubenswrapper[4865]: I0216 22:47:44.102538 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:44 crc kubenswrapper[4865]: I0216 22:47:44.102554 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:44 crc kubenswrapper[4865]: I0216 22:47:44.102578 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:44 crc kubenswrapper[4865]: I0216 22:47:44.102594 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:44Z","lastTransitionTime":"2026-02-16T22:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:44 crc kubenswrapper[4865]: I0216 22:47:44.204931 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:44 crc kubenswrapper[4865]: I0216 22:47:44.205223 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:44 crc kubenswrapper[4865]: I0216 22:47:44.205331 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:44 crc kubenswrapper[4865]: I0216 22:47:44.205510 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:44 crc kubenswrapper[4865]: I0216 22:47:44.205571 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:44Z","lastTransitionTime":"2026-02-16T22:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:44 crc kubenswrapper[4865]: I0216 22:47:44.308566 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:44 crc kubenswrapper[4865]: I0216 22:47:44.308642 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:44 crc kubenswrapper[4865]: I0216 22:47:44.308667 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:44 crc kubenswrapper[4865]: I0216 22:47:44.308696 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:44 crc kubenswrapper[4865]: I0216 22:47:44.308714 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:44Z","lastTransitionTime":"2026-02-16T22:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:44 crc kubenswrapper[4865]: I0216 22:47:44.411403 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:44 crc kubenswrapper[4865]: I0216 22:47:44.411448 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:44 crc kubenswrapper[4865]: I0216 22:47:44.411459 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:44 crc kubenswrapper[4865]: I0216 22:47:44.411479 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:44 crc kubenswrapper[4865]: I0216 22:47:44.411490 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:44Z","lastTransitionTime":"2026-02-16T22:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:47:44 crc kubenswrapper[4865]: I0216 22:47:44.413946 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 22:47:44 crc kubenswrapper[4865]: I0216 22:47:44.414022 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 22:47:44 crc kubenswrapper[4865]: E0216 22:47:44.414087 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 22:47:44 crc kubenswrapper[4865]: I0216 22:47:44.414022 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 22:47:44 crc kubenswrapper[4865]: E0216 22:47:44.414197 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 22:47:44 crc kubenswrapper[4865]: I0216 22:47:44.414248 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ggbcr" Feb 16 22:47:44 crc kubenswrapper[4865]: E0216 22:47:44.414421 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 22:47:44 crc kubenswrapper[4865]: E0216 22:47:44.414542 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ggbcr" podUID="0e0ca52e-7cb6-4d90-8d0b-4124cce13447" Feb 16 22:47:44 crc kubenswrapper[4865]: I0216 22:47:44.450850 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 07:55:53.287441516 +0000 UTC Feb 16 22:47:44 crc kubenswrapper[4865]: I0216 22:47:44.514575 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:44 crc kubenswrapper[4865]: I0216 22:47:44.514622 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:44 crc kubenswrapper[4865]: I0216 22:47:44.514633 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:44 crc kubenswrapper[4865]: I0216 22:47:44.514650 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:44 crc kubenswrapper[4865]: I0216 22:47:44.514663 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:44Z","lastTransitionTime":"2026-02-16T22:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:44 crc kubenswrapper[4865]: I0216 22:47:44.619529 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:44 crc kubenswrapper[4865]: I0216 22:47:44.619619 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:44 crc kubenswrapper[4865]: I0216 22:47:44.619644 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:44 crc kubenswrapper[4865]: I0216 22:47:44.619677 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:44 crc kubenswrapper[4865]: I0216 22:47:44.619707 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:44Z","lastTransitionTime":"2026-02-16T22:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:44 crc kubenswrapper[4865]: I0216 22:47:44.723223 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:44 crc kubenswrapper[4865]: I0216 22:47:44.723326 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:44 crc kubenswrapper[4865]: I0216 22:47:44.723345 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:44 crc kubenswrapper[4865]: I0216 22:47:44.723371 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:44 crc kubenswrapper[4865]: I0216 22:47:44.723390 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:44Z","lastTransitionTime":"2026-02-16T22:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:44 crc kubenswrapper[4865]: I0216 22:47:44.826555 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:44 crc kubenswrapper[4865]: I0216 22:47:44.826602 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:44 crc kubenswrapper[4865]: I0216 22:47:44.826620 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:44 crc kubenswrapper[4865]: I0216 22:47:44.826646 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:44 crc kubenswrapper[4865]: I0216 22:47:44.826662 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:44Z","lastTransitionTime":"2026-02-16T22:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:44 crc kubenswrapper[4865]: I0216 22:47:44.929804 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:44 crc kubenswrapper[4865]: I0216 22:47:44.929872 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:44 crc kubenswrapper[4865]: I0216 22:47:44.929892 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:44 crc kubenswrapper[4865]: I0216 22:47:44.929935 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:44 crc kubenswrapper[4865]: I0216 22:47:44.929954 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:44Z","lastTransitionTime":"2026-02-16T22:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:45 crc kubenswrapper[4865]: I0216 22:47:45.033544 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:45 crc kubenswrapper[4865]: I0216 22:47:45.033608 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:45 crc kubenswrapper[4865]: I0216 22:47:45.033626 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:45 crc kubenswrapper[4865]: I0216 22:47:45.033654 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:45 crc kubenswrapper[4865]: I0216 22:47:45.033673 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:45Z","lastTransitionTime":"2026-02-16T22:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:45 crc kubenswrapper[4865]: I0216 22:47:45.137703 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:45 crc kubenswrapper[4865]: I0216 22:47:45.138095 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:45 crc kubenswrapper[4865]: I0216 22:47:45.138226 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:45 crc kubenswrapper[4865]: I0216 22:47:45.138423 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:45 crc kubenswrapper[4865]: I0216 22:47:45.138563 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:45Z","lastTransitionTime":"2026-02-16T22:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:45 crc kubenswrapper[4865]: I0216 22:47:45.241666 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:45 crc kubenswrapper[4865]: I0216 22:47:45.241748 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:45 crc kubenswrapper[4865]: I0216 22:47:45.241775 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:45 crc kubenswrapper[4865]: I0216 22:47:45.241808 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:45 crc kubenswrapper[4865]: I0216 22:47:45.241831 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:45Z","lastTransitionTime":"2026-02-16T22:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:45 crc kubenswrapper[4865]: I0216 22:47:45.344712 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:45 crc kubenswrapper[4865]: I0216 22:47:45.345081 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:45 crc kubenswrapper[4865]: I0216 22:47:45.345227 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:45 crc kubenswrapper[4865]: I0216 22:47:45.345437 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:45 crc kubenswrapper[4865]: I0216 22:47:45.345585 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:45Z","lastTransitionTime":"2026-02-16T22:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:45 crc kubenswrapper[4865]: I0216 22:47:45.449126 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:45 crc kubenswrapper[4865]: I0216 22:47:45.449192 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:45 crc kubenswrapper[4865]: I0216 22:47:45.449209 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:45 crc kubenswrapper[4865]: I0216 22:47:45.449238 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:45 crc kubenswrapper[4865]: I0216 22:47:45.449256 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:45Z","lastTransitionTime":"2026-02-16T22:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:45 crc kubenswrapper[4865]: I0216 22:47:45.451309 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 12:09:30.534455292 +0000 UTC Feb 16 22:47:45 crc kubenswrapper[4865]: I0216 22:47:45.552854 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:45 crc kubenswrapper[4865]: I0216 22:47:45.553357 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:45 crc kubenswrapper[4865]: I0216 22:47:45.553506 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:45 crc kubenswrapper[4865]: I0216 22:47:45.553738 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:45 crc kubenswrapper[4865]: I0216 22:47:45.553884 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:45Z","lastTransitionTime":"2026-02-16T22:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:45 crc kubenswrapper[4865]: I0216 22:47:45.657444 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:45 crc kubenswrapper[4865]: I0216 22:47:45.657533 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:45 crc kubenswrapper[4865]: I0216 22:47:45.657558 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:45 crc kubenswrapper[4865]: I0216 22:47:45.657589 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:45 crc kubenswrapper[4865]: I0216 22:47:45.657615 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:45Z","lastTransitionTime":"2026-02-16T22:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:45 crc kubenswrapper[4865]: I0216 22:47:45.760847 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:45 crc kubenswrapper[4865]: I0216 22:47:45.760923 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:45 crc kubenswrapper[4865]: I0216 22:47:45.760944 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:45 crc kubenswrapper[4865]: I0216 22:47:45.760972 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:45 crc kubenswrapper[4865]: I0216 22:47:45.760990 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:45Z","lastTransitionTime":"2026-02-16T22:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:45 crc kubenswrapper[4865]: I0216 22:47:45.864587 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:45 crc kubenswrapper[4865]: I0216 22:47:45.864656 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:45 crc kubenswrapper[4865]: I0216 22:47:45.864680 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:45 crc kubenswrapper[4865]: I0216 22:47:45.864713 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:45 crc kubenswrapper[4865]: I0216 22:47:45.864732 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:45Z","lastTransitionTime":"2026-02-16T22:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:45 crc kubenswrapper[4865]: I0216 22:47:45.968332 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:45 crc kubenswrapper[4865]: I0216 22:47:45.968414 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:45 crc kubenswrapper[4865]: I0216 22:47:45.968435 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:45 crc kubenswrapper[4865]: I0216 22:47:45.968468 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:45 crc kubenswrapper[4865]: I0216 22:47:45.968490 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:45Z","lastTransitionTime":"2026-02-16T22:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:46 crc kubenswrapper[4865]: I0216 22:47:46.072014 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:46 crc kubenswrapper[4865]: I0216 22:47:46.072084 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:46 crc kubenswrapper[4865]: I0216 22:47:46.072104 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:46 crc kubenswrapper[4865]: I0216 22:47:46.072132 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:46 crc kubenswrapper[4865]: I0216 22:47:46.072153 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:46Z","lastTransitionTime":"2026-02-16T22:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:46 crc kubenswrapper[4865]: I0216 22:47:46.175270 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:46 crc kubenswrapper[4865]: I0216 22:47:46.175351 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:46 crc kubenswrapper[4865]: I0216 22:47:46.175368 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:46 crc kubenswrapper[4865]: I0216 22:47:46.175392 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:46 crc kubenswrapper[4865]: I0216 22:47:46.175410 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:46Z","lastTransitionTime":"2026-02-16T22:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:46 crc kubenswrapper[4865]: I0216 22:47:46.278590 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:46 crc kubenswrapper[4865]: I0216 22:47:46.278653 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:46 crc kubenswrapper[4865]: I0216 22:47:46.278670 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:46 crc kubenswrapper[4865]: I0216 22:47:46.278697 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:46 crc kubenswrapper[4865]: I0216 22:47:46.278716 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:46Z","lastTransitionTime":"2026-02-16T22:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:46 crc kubenswrapper[4865]: I0216 22:47:46.382547 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:46 crc kubenswrapper[4865]: I0216 22:47:46.382633 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:46 crc kubenswrapper[4865]: I0216 22:47:46.382657 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:46 crc kubenswrapper[4865]: I0216 22:47:46.382684 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:46 crc kubenswrapper[4865]: I0216 22:47:46.382708 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:46Z","lastTransitionTime":"2026-02-16T22:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:47:46 crc kubenswrapper[4865]: I0216 22:47:46.414683 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 22:47:46 crc kubenswrapper[4865]: I0216 22:47:46.414739 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 22:47:46 crc kubenswrapper[4865]: I0216 22:47:46.414798 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ggbcr" Feb 16 22:47:46 crc kubenswrapper[4865]: I0216 22:47:46.414683 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 22:47:46 crc kubenswrapper[4865]: E0216 22:47:46.415003 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 22:47:46 crc kubenswrapper[4865]: E0216 22:47:46.415177 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 22:47:46 crc kubenswrapper[4865]: E0216 22:47:46.415353 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 22:47:46 crc kubenswrapper[4865]: E0216 22:47:46.415584 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ggbcr" podUID="0e0ca52e-7cb6-4d90-8d0b-4124cce13447" Feb 16 22:47:46 crc kubenswrapper[4865]: I0216 22:47:46.452354 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 13:58:38.518892752 +0000 UTC Feb 16 22:47:46 crc kubenswrapper[4865]: I0216 22:47:46.486049 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:46 crc kubenswrapper[4865]: I0216 22:47:46.486103 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:46 crc kubenswrapper[4865]: I0216 22:47:46.486117 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:46 crc kubenswrapper[4865]: I0216 22:47:46.486137 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:46 crc kubenswrapper[4865]: I0216 22:47:46.486151 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:46Z","lastTransitionTime":"2026-02-16T22:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:46 crc kubenswrapper[4865]: I0216 22:47:46.588677 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:46 crc kubenswrapper[4865]: I0216 22:47:46.588752 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:46 crc kubenswrapper[4865]: I0216 22:47:46.588774 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:46 crc kubenswrapper[4865]: I0216 22:47:46.588805 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:46 crc kubenswrapper[4865]: I0216 22:47:46.588827 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:46Z","lastTransitionTime":"2026-02-16T22:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:46 crc kubenswrapper[4865]: I0216 22:47:46.692695 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:46 crc kubenswrapper[4865]: I0216 22:47:46.692763 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:46 crc kubenswrapper[4865]: I0216 22:47:46.692783 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:46 crc kubenswrapper[4865]: I0216 22:47:46.692807 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:46 crc kubenswrapper[4865]: I0216 22:47:46.692823 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:46Z","lastTransitionTime":"2026-02-16T22:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:46 crc kubenswrapper[4865]: I0216 22:47:46.795575 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:46 crc kubenswrapper[4865]: I0216 22:47:46.795658 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:46 crc kubenswrapper[4865]: I0216 22:47:46.795674 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:46 crc kubenswrapper[4865]: I0216 22:47:46.795702 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:46 crc kubenswrapper[4865]: I0216 22:47:46.795722 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:46Z","lastTransitionTime":"2026-02-16T22:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:46 crc kubenswrapper[4865]: I0216 22:47:46.899659 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:46 crc kubenswrapper[4865]: I0216 22:47:46.899714 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:46 crc kubenswrapper[4865]: I0216 22:47:46.899726 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:46 crc kubenswrapper[4865]: I0216 22:47:46.899746 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:46 crc kubenswrapper[4865]: I0216 22:47:46.899759 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:46Z","lastTransitionTime":"2026-02-16T22:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:47 crc kubenswrapper[4865]: I0216 22:47:47.002501 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:47 crc kubenswrapper[4865]: I0216 22:47:47.002571 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:47 crc kubenswrapper[4865]: I0216 22:47:47.002590 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:47 crc kubenswrapper[4865]: I0216 22:47:47.002620 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:47 crc kubenswrapper[4865]: I0216 22:47:47.002641 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:47Z","lastTransitionTime":"2026-02-16T22:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:47 crc kubenswrapper[4865]: I0216 22:47:47.109049 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:47 crc kubenswrapper[4865]: I0216 22:47:47.109179 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:47 crc kubenswrapper[4865]: I0216 22:47:47.109223 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:47 crc kubenswrapper[4865]: I0216 22:47:47.109306 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:47 crc kubenswrapper[4865]: I0216 22:47:47.109342 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:47Z","lastTransitionTime":"2026-02-16T22:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:47 crc kubenswrapper[4865]: I0216 22:47:47.212496 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:47 crc kubenswrapper[4865]: I0216 22:47:47.212567 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:47 crc kubenswrapper[4865]: I0216 22:47:47.212583 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:47 crc kubenswrapper[4865]: I0216 22:47:47.212611 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:47 crc kubenswrapper[4865]: I0216 22:47:47.212634 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:47Z","lastTransitionTime":"2026-02-16T22:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:47 crc kubenswrapper[4865]: I0216 22:47:47.315725 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:47 crc kubenswrapper[4865]: I0216 22:47:47.315765 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:47 crc kubenswrapper[4865]: I0216 22:47:47.315777 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:47 crc kubenswrapper[4865]: I0216 22:47:47.315795 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:47 crc kubenswrapper[4865]: I0216 22:47:47.315807 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:47Z","lastTransitionTime":"2026-02-16T22:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:47 crc kubenswrapper[4865]: I0216 22:47:47.418696 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:47 crc kubenswrapper[4865]: I0216 22:47:47.418755 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:47 crc kubenswrapper[4865]: I0216 22:47:47.418767 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:47 crc kubenswrapper[4865]: I0216 22:47:47.418788 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:47 crc kubenswrapper[4865]: I0216 22:47:47.418803 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:47Z","lastTransitionTime":"2026-02-16T22:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:47 crc kubenswrapper[4865]: I0216 22:47:47.452551 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 15:30:11.523280861 +0000 UTC Feb 16 22:47:47 crc kubenswrapper[4865]: I0216 22:47:47.522051 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:47 crc kubenswrapper[4865]: I0216 22:47:47.522120 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:47 crc kubenswrapper[4865]: I0216 22:47:47.522146 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:47 crc kubenswrapper[4865]: I0216 22:47:47.522179 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:47 crc kubenswrapper[4865]: I0216 22:47:47.522202 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:47Z","lastTransitionTime":"2026-02-16T22:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:47 crc kubenswrapper[4865]: I0216 22:47:47.625824 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:47 crc kubenswrapper[4865]: I0216 22:47:47.625878 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:47 crc kubenswrapper[4865]: I0216 22:47:47.625891 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:47 crc kubenswrapper[4865]: I0216 22:47:47.625911 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:47 crc kubenswrapper[4865]: I0216 22:47:47.625923 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:47Z","lastTransitionTime":"2026-02-16T22:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:47 crc kubenswrapper[4865]: I0216 22:47:47.729247 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:47 crc kubenswrapper[4865]: I0216 22:47:47.729359 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:47 crc kubenswrapper[4865]: I0216 22:47:47.729380 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:47 crc kubenswrapper[4865]: I0216 22:47:47.729404 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:47 crc kubenswrapper[4865]: I0216 22:47:47.729422 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:47Z","lastTransitionTime":"2026-02-16T22:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:47 crc kubenswrapper[4865]: I0216 22:47:47.832225 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:47 crc kubenswrapper[4865]: I0216 22:47:47.832330 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:47 crc kubenswrapper[4865]: I0216 22:47:47.832357 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:47 crc kubenswrapper[4865]: I0216 22:47:47.832403 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:47 crc kubenswrapper[4865]: I0216 22:47:47.832435 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:47Z","lastTransitionTime":"2026-02-16T22:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:47 crc kubenswrapper[4865]: I0216 22:47:47.935624 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:47 crc kubenswrapper[4865]: I0216 22:47:47.935684 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:47 crc kubenswrapper[4865]: I0216 22:47:47.935713 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:47 crc kubenswrapper[4865]: I0216 22:47:47.935760 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:47 crc kubenswrapper[4865]: I0216 22:47:47.935785 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:47Z","lastTransitionTime":"2026-02-16T22:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:48 crc kubenswrapper[4865]: I0216 22:47:48.038503 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:48 crc kubenswrapper[4865]: I0216 22:47:48.038549 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:48 crc kubenswrapper[4865]: I0216 22:47:48.038560 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:48 crc kubenswrapper[4865]: I0216 22:47:48.038576 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:48 crc kubenswrapper[4865]: I0216 22:47:48.038585 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:48Z","lastTransitionTime":"2026-02-16T22:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:48 crc kubenswrapper[4865]: I0216 22:47:48.141614 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:48 crc kubenswrapper[4865]: I0216 22:47:48.141704 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:48 crc kubenswrapper[4865]: I0216 22:47:48.141735 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:48 crc kubenswrapper[4865]: I0216 22:47:48.141768 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:48 crc kubenswrapper[4865]: I0216 22:47:48.141792 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:48Z","lastTransitionTime":"2026-02-16T22:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:48 crc kubenswrapper[4865]: I0216 22:47:48.244686 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:48 crc kubenswrapper[4865]: I0216 22:47:48.244752 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:48 crc kubenswrapper[4865]: I0216 22:47:48.244768 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:48 crc kubenswrapper[4865]: I0216 22:47:48.244797 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:48 crc kubenswrapper[4865]: I0216 22:47:48.244815 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:48Z","lastTransitionTime":"2026-02-16T22:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:48 crc kubenswrapper[4865]: I0216 22:47:48.347472 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:48 crc kubenswrapper[4865]: I0216 22:47:48.347520 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:48 crc kubenswrapper[4865]: I0216 22:47:48.347529 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:48 crc kubenswrapper[4865]: I0216 22:47:48.347547 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:48 crc kubenswrapper[4865]: I0216 22:47:48.347556 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:48Z","lastTransitionTime":"2026-02-16T22:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:47:48 crc kubenswrapper[4865]: I0216 22:47:48.414050 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 22:47:48 crc kubenswrapper[4865]: I0216 22:47:48.414050 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 22:47:48 crc kubenswrapper[4865]: I0216 22:47:48.414089 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ggbcr" Feb 16 22:47:48 crc kubenswrapper[4865]: I0216 22:47:48.414137 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 22:47:48 crc kubenswrapper[4865]: E0216 22:47:48.414273 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 22:47:48 crc kubenswrapper[4865]: E0216 22:47:48.414772 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 22:47:48 crc kubenswrapper[4865]: E0216 22:47:48.414958 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 22:47:48 crc kubenswrapper[4865]: E0216 22:47:48.414615 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ggbcr" podUID="0e0ca52e-7cb6-4d90-8d0b-4124cce13447" Feb 16 22:47:48 crc kubenswrapper[4865]: I0216 22:47:48.450703 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:48 crc kubenswrapper[4865]: I0216 22:47:48.450769 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:48 crc kubenswrapper[4865]: I0216 22:47:48.450787 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:48 crc kubenswrapper[4865]: I0216 22:47:48.450813 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:48 crc kubenswrapper[4865]: I0216 22:47:48.450831 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:48Z","lastTransitionTime":"2026-02-16T22:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:48 crc kubenswrapper[4865]: I0216 22:47:48.453358 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 07:07:09.179736604 +0000 UTC Feb 16 22:47:48 crc kubenswrapper[4865]: I0216 22:47:48.554664 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:48 crc kubenswrapper[4865]: I0216 22:47:48.554747 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:48 crc kubenswrapper[4865]: I0216 22:47:48.554771 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:48 crc kubenswrapper[4865]: I0216 22:47:48.554801 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:48 crc kubenswrapper[4865]: I0216 22:47:48.554825 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:48Z","lastTransitionTime":"2026-02-16T22:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:48 crc kubenswrapper[4865]: I0216 22:47:48.658360 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:48 crc kubenswrapper[4865]: I0216 22:47:48.658418 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:48 crc kubenswrapper[4865]: I0216 22:47:48.658435 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:48 crc kubenswrapper[4865]: I0216 22:47:48.658453 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:48 crc kubenswrapper[4865]: I0216 22:47:48.658465 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:48Z","lastTransitionTime":"2026-02-16T22:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:48 crc kubenswrapper[4865]: I0216 22:47:48.761819 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:48 crc kubenswrapper[4865]: I0216 22:47:48.761883 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:48 crc kubenswrapper[4865]: I0216 22:47:48.761896 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:48 crc kubenswrapper[4865]: I0216 22:47:48.761922 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:48 crc kubenswrapper[4865]: I0216 22:47:48.761935 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:48Z","lastTransitionTime":"2026-02-16T22:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:48 crc kubenswrapper[4865]: I0216 22:47:48.864837 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:48 crc kubenswrapper[4865]: I0216 22:47:48.864902 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:48 crc kubenswrapper[4865]: I0216 22:47:48.864920 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:48 crc kubenswrapper[4865]: I0216 22:47:48.864946 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:48 crc kubenswrapper[4865]: I0216 22:47:48.864966 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:48Z","lastTransitionTime":"2026-02-16T22:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 22:47:48 crc kubenswrapper[4865]: I0216 22:47:48.882198 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 22:47:48 crc kubenswrapper[4865]: I0216 22:47:48.882431 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 22:47:48 crc kubenswrapper[4865]: I0216 22:47:48.882528 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 22:47:48 crc kubenswrapper[4865]: I0216 22:47:48.882628 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 22:47:48 crc kubenswrapper[4865]: I0216 22:47:48.882721 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T22:47:48Z","lastTransitionTime":"2026-02-16T22:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 22:47:48 crc kubenswrapper[4865]: I0216 22:47:48.955456 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-bcvww"] Feb 16 22:47:48 crc kubenswrapper[4865]: I0216 22:47:48.956252 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bcvww" Feb 16 22:47:48 crc kubenswrapper[4865]: I0216 22:47:48.958877 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 16 22:47:48 crc kubenswrapper[4865]: I0216 22:47:48.959689 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 16 22:47:48 crc kubenswrapper[4865]: I0216 22:47:48.960926 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 16 22:47:48 crc kubenswrapper[4865]: I0216 22:47:48.961252 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 16 22:47:49 crc kubenswrapper[4865]: I0216 22:47:49.002874 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-tqmsq" podStartSLOduration=89.002849089 podStartE2EDuration="1m29.002849089s" podCreationTimestamp="2026-02-16 22:46:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:47:48.98360689 +0000 UTC m=+109.307313921" watchObservedRunningTime="2026-02-16 22:47:49.002849089 +0000 UTC m=+109.326556060" Feb 16 22:47:49 crc kubenswrapper[4865]: I0216 22:47:49.003184 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-shf2n" podStartSLOduration=89.003177298 podStartE2EDuration="1m29.003177298s" podCreationTimestamp="2026-02-16 22:46:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:47:49.002762576 +0000 UTC m=+109.326469587" watchObservedRunningTime="2026-02-16 22:47:49.003177298 +0000 UTC 
m=+109.326884269" Feb 16 22:47:49 crc kubenswrapper[4865]: I0216 22:47:49.076155 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34888253-9af6-4fc3-964d-1eddd6341c68-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-bcvww\" (UID: \"34888253-9af6-4fc3-964d-1eddd6341c68\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bcvww" Feb 16 22:47:49 crc kubenswrapper[4865]: I0216 22:47:49.076226 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/34888253-9af6-4fc3-964d-1eddd6341c68-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-bcvww\" (UID: \"34888253-9af6-4fc3-964d-1eddd6341c68\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bcvww" Feb 16 22:47:49 crc kubenswrapper[4865]: I0216 22:47:49.076255 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/34888253-9af6-4fc3-964d-1eddd6341c68-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-bcvww\" (UID: \"34888253-9af6-4fc3-964d-1eddd6341c68\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bcvww" Feb 16 22:47:49 crc kubenswrapper[4865]: I0216 22:47:49.076300 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/34888253-9af6-4fc3-964d-1eddd6341c68-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-bcvww\" (UID: \"34888253-9af6-4fc3-964d-1eddd6341c68\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bcvww" Feb 16 22:47:49 crc kubenswrapper[4865]: I0216 22:47:49.076403 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" 
podStartSLOduration=14.076380406 podStartE2EDuration="14.076380406s" podCreationTimestamp="2026-02-16 22:47:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:47:49.076129198 +0000 UTC m=+109.399836149" watchObservedRunningTime="2026-02-16 22:47:49.076380406 +0000 UTC m=+109.400087407" Feb 16 22:47:49 crc kubenswrapper[4865]: I0216 22:47:49.076547 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/34888253-9af6-4fc3-964d-1eddd6341c68-service-ca\") pod \"cluster-version-operator-5c965bbfc6-bcvww\" (UID: \"34888253-9af6-4fc3-964d-1eddd6341c68\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bcvww" Feb 16 22:47:49 crc kubenswrapper[4865]: I0216 22:47:49.162026 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=89.161991607 podStartE2EDuration="1m29.161991607s" podCreationTimestamp="2026-02-16 22:46:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:47:49.161512903 +0000 UTC m=+109.485219874" watchObservedRunningTime="2026-02-16 22:47:49.161991607 +0000 UTC m=+109.485698618" Feb 16 22:47:49 crc kubenswrapper[4865]: I0216 22:47:49.177126 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34888253-9af6-4fc3-964d-1eddd6341c68-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-bcvww\" (UID: \"34888253-9af6-4fc3-964d-1eddd6341c68\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bcvww" Feb 16 22:47:49 crc kubenswrapper[4865]: I0216 22:47:49.177213 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/34888253-9af6-4fc3-964d-1eddd6341c68-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-bcvww\" (UID: \"34888253-9af6-4fc3-964d-1eddd6341c68\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bcvww" Feb 16 22:47:49 crc kubenswrapper[4865]: I0216 22:47:49.177252 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/34888253-9af6-4fc3-964d-1eddd6341c68-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-bcvww\" (UID: \"34888253-9af6-4fc3-964d-1eddd6341c68\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bcvww" Feb 16 22:47:49 crc kubenswrapper[4865]: I0216 22:47:49.177315 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/34888253-9af6-4fc3-964d-1eddd6341c68-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-bcvww\" (UID: \"34888253-9af6-4fc3-964d-1eddd6341c68\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bcvww" Feb 16 22:47:49 crc kubenswrapper[4865]: I0216 22:47:49.177400 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/34888253-9af6-4fc3-964d-1eddd6341c68-service-ca\") pod \"cluster-version-operator-5c965bbfc6-bcvww\" (UID: \"34888253-9af6-4fc3-964d-1eddd6341c68\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bcvww" Feb 16 22:47:49 crc kubenswrapper[4865]: I0216 22:47:49.177439 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/34888253-9af6-4fc3-964d-1eddd6341c68-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-bcvww\" (UID: \"34888253-9af6-4fc3-964d-1eddd6341c68\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bcvww" Feb 16 22:47:49 crc kubenswrapper[4865]: I0216 22:47:49.177497 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/34888253-9af6-4fc3-964d-1eddd6341c68-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-bcvww\" (UID: \"34888253-9af6-4fc3-964d-1eddd6341c68\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bcvww" Feb 16 22:47:49 crc kubenswrapper[4865]: I0216 22:47:49.179377 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/34888253-9af6-4fc3-964d-1eddd6341c68-service-ca\") pod \"cluster-version-operator-5c965bbfc6-bcvww\" (UID: \"34888253-9af6-4fc3-964d-1eddd6341c68\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bcvww" Feb 16 22:47:49 crc kubenswrapper[4865]: I0216 22:47:49.188192 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34888253-9af6-4fc3-964d-1eddd6341c68-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-bcvww\" (UID: \"34888253-9af6-4fc3-964d-1eddd6341c68\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bcvww" Feb 16 22:47:49 crc kubenswrapper[4865]: I0216 22:47:49.192761 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-p2bwl" podStartSLOduration=89.192746904 podStartE2EDuration="1m29.192746904s" podCreationTimestamp="2026-02-16 22:46:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:47:49.191927691 +0000 UTC m=+109.515634702" watchObservedRunningTime="2026-02-16 22:47:49.192746904 +0000 UTC m=+109.516453875" Feb 16 22:47:49 crc kubenswrapper[4865]: I0216 22:47:49.201878 4865 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/34888253-9af6-4fc3-964d-1eddd6341c68-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-bcvww\" (UID: \"34888253-9af6-4fc3-964d-1eddd6341c68\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bcvww" Feb 16 22:47:49 crc kubenswrapper[4865]: I0216 22:47:49.220160 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podStartSLOduration=89.220139375 podStartE2EDuration="1m29.220139375s" podCreationTimestamp="2026-02-16 22:46:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:47:49.219287941 +0000 UTC m=+109.542994902" watchObservedRunningTime="2026-02-16 22:47:49.220139375 +0000 UTC m=+109.543846346" Feb 16 22:47:49 crc kubenswrapper[4865]: I0216 22:47:49.248638 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-x4rgl" podStartSLOduration=89.248603717 podStartE2EDuration="1m29.248603717s" podCreationTimestamp="2026-02-16 22:46:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:47:49.236489342 +0000 UTC m=+109.560196363" watchObservedRunningTime="2026-02-16 22:47:49.248603717 +0000 UTC m=+109.572310728" Feb 16 22:47:49 crc kubenswrapper[4865]: I0216 22:47:49.250370 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-rhf5k" podStartSLOduration=89.250355637 podStartE2EDuration="1m29.250355637s" podCreationTimestamp="2026-02-16 22:46:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:47:49.248856214 +0000 UTC m=+109.572563175" 
watchObservedRunningTime="2026-02-16 22:47:49.250355637 +0000 UTC m=+109.574062638" Feb 16 22:47:49 crc kubenswrapper[4865]: I0216 22:47:49.280336 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bcvww" Feb 16 22:47:49 crc kubenswrapper[4865]: I0216 22:47:49.326293 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=29.326261232 podStartE2EDuration="29.326261232s" podCreationTimestamp="2026-02-16 22:47:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:47:49.298368996 +0000 UTC m=+109.622075977" watchObservedRunningTime="2026-02-16 22:47:49.326261232 +0000 UTC m=+109.649968193" Feb 16 22:47:49 crc kubenswrapper[4865]: I0216 22:47:49.345781 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=55.345747587 podStartE2EDuration="55.345747587s" podCreationTimestamp="2026-02-16 22:46:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:47:49.344817821 +0000 UTC m=+109.668524812" watchObservedRunningTime="2026-02-16 22:47:49.345747587 +0000 UTC m=+109.669454588" Feb 16 22:47:49 crc kubenswrapper[4865]: I0216 22:47:49.346195 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=88.34618369 podStartE2EDuration="1m28.34618369s" podCreationTimestamp="2026-02-16 22:46:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:47:49.325692936 +0000 UTC m=+109.649399927" watchObservedRunningTime="2026-02-16 
22:47:49.34618369 +0000 UTC m=+109.669890691" Feb 16 22:47:49 crc kubenswrapper[4865]: I0216 22:47:49.414793 4865 scope.go:117] "RemoveContainer" containerID="be9325937ac87ec492489ef34007a1e234ff7037a0f05d40bd102a85a86cc3c4" Feb 16 22:47:49 crc kubenswrapper[4865]: E0216 22:47:49.415030 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-v9gjl_openshift-ovn-kubernetes(2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9)\"" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" podUID="2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" Feb 16 22:47:49 crc kubenswrapper[4865]: I0216 22:47:49.453920 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 08:09:41.491121365 +0000 UTC Feb 16 22:47:49 crc kubenswrapper[4865]: I0216 22:47:49.454006 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 16 22:47:49 crc kubenswrapper[4865]: I0216 22:47:49.466799 4865 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 16 22:47:50 crc kubenswrapper[4865]: I0216 22:47:50.143445 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bcvww" event={"ID":"34888253-9af6-4fc3-964d-1eddd6341c68","Type":"ContainerStarted","Data":"dbd9971a2483e33b208e415f263495c9cdaed2e93c4993a9203c8b83e5c4e593"} Feb 16 22:47:50 crc kubenswrapper[4865]: I0216 22:47:50.143516 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bcvww" event={"ID":"34888253-9af6-4fc3-964d-1eddd6341c68","Type":"ContainerStarted","Data":"f44fe1146e30815c62df5cff50a99b4b9cfb60c24757d6c306b0683a73a3171f"} Feb 16 22:47:50 crc kubenswrapper[4865]: I0216 
22:47:50.413882 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ggbcr" Feb 16 22:47:50 crc kubenswrapper[4865]: I0216 22:47:50.413882 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 22:47:50 crc kubenswrapper[4865]: I0216 22:47:50.414009 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 22:47:50 crc kubenswrapper[4865]: I0216 22:47:50.416221 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 22:47:50 crc kubenswrapper[4865]: E0216 22:47:50.416348 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 22:47:50 crc kubenswrapper[4865]: E0216 22:47:50.416437 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 22:47:50 crc kubenswrapper[4865]: E0216 22:47:50.416517 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 22:47:50 crc kubenswrapper[4865]: E0216 22:47:50.416146 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ggbcr" podUID="0e0ca52e-7cb6-4d90-8d0b-4124cce13447" Feb 16 22:47:52 crc kubenswrapper[4865]: I0216 22:47:52.414137 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 22:47:52 crc kubenswrapper[4865]: I0216 22:47:52.414193 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 22:47:52 crc kubenswrapper[4865]: I0216 22:47:52.414137 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 22:47:52 crc kubenswrapper[4865]: E0216 22:47:52.414312 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 22:47:52 crc kubenswrapper[4865]: E0216 22:47:52.414428 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 22:47:52 crc kubenswrapper[4865]: E0216 22:47:52.414519 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 22:47:52 crc kubenswrapper[4865]: I0216 22:47:52.414690 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-ggbcr" Feb 16 22:47:52 crc kubenswrapper[4865]: E0216 22:47:52.414785 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ggbcr" podUID="0e0ca52e-7cb6-4d90-8d0b-4124cce13447" Feb 16 22:47:54 crc kubenswrapper[4865]: I0216 22:47:54.414045 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ggbcr" Feb 16 22:47:54 crc kubenswrapper[4865]: I0216 22:47:54.414118 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 22:47:54 crc kubenswrapper[4865]: I0216 22:47:54.414125 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 22:47:54 crc kubenswrapper[4865]: E0216 22:47:54.414403 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ggbcr" podUID="0e0ca52e-7cb6-4d90-8d0b-4124cce13447" Feb 16 22:47:54 crc kubenswrapper[4865]: E0216 22:47:54.414562 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 22:47:54 crc kubenswrapper[4865]: E0216 22:47:54.414653 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 22:47:54 crc kubenswrapper[4865]: I0216 22:47:54.414710 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 22:47:54 crc kubenswrapper[4865]: E0216 22:47:54.414789 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 22:47:55 crc kubenswrapper[4865]: I0216 22:47:55.184406 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tqmsq_518e6107-6873-4bd2-86a6-e422763483ec/kube-multus/1.log" Feb 16 22:47:55 crc kubenswrapper[4865]: I0216 22:47:55.185367 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tqmsq_518e6107-6873-4bd2-86a6-e422763483ec/kube-multus/0.log" Feb 16 22:47:55 crc kubenswrapper[4865]: I0216 22:47:55.185472 4865 generic.go:334] "Generic (PLEG): container finished" podID="518e6107-6873-4bd2-86a6-e422763483ec" containerID="74e1013d02cb01d60662d6619c25c7a40de7bd7bde7f0239007c54dba55879cb" exitCode=1 Feb 16 22:47:55 crc kubenswrapper[4865]: I0216 22:47:55.185549 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-tqmsq" event={"ID":"518e6107-6873-4bd2-86a6-e422763483ec","Type":"ContainerDied","Data":"74e1013d02cb01d60662d6619c25c7a40de7bd7bde7f0239007c54dba55879cb"} Feb 16 22:47:55 crc kubenswrapper[4865]: I0216 22:47:55.185640 4865 scope.go:117] "RemoveContainer" containerID="4c555b648518de93a1b9ecb1bb8b232aee9016dd148db53b2b4aaf96dd23951c" Feb 16 22:47:55 crc kubenswrapper[4865]: I0216 22:47:55.186729 4865 scope.go:117] "RemoveContainer" containerID="74e1013d02cb01d60662d6619c25c7a40de7bd7bde7f0239007c54dba55879cb" Feb 16 22:47:55 crc kubenswrapper[4865]: E0216 22:47:55.186986 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-tqmsq_openshift-multus(518e6107-6873-4bd2-86a6-e422763483ec)\"" pod="openshift-multus/multus-tqmsq" podUID="518e6107-6873-4bd2-86a6-e422763483ec" Feb 16 22:47:55 crc kubenswrapper[4865]: I0216 22:47:55.208428 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bcvww" podStartSLOduration=95.208409718 podStartE2EDuration="1m35.208409718s" podCreationTimestamp="2026-02-16 22:46:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:47:50.157945839 +0000 UTC m=+110.481652810" watchObservedRunningTime="2026-02-16 22:47:55.208409718 +0000 UTC m=+115.532116679" Feb 16 22:47:56 crc kubenswrapper[4865]: I0216 22:47:56.190964 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tqmsq_518e6107-6873-4bd2-86a6-e422763483ec/kube-multus/1.log" Feb 16 22:47:56 crc kubenswrapper[4865]: I0216 22:47:56.413885 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 22:47:56 crc kubenswrapper[4865]: I0216 22:47:56.413959 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ggbcr" Feb 16 22:47:56 crc kubenswrapper[4865]: I0216 22:47:56.413982 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 22:47:56 crc kubenswrapper[4865]: I0216 22:47:56.414053 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 22:47:56 crc kubenswrapper[4865]: E0216 22:47:56.414051 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 22:47:56 crc kubenswrapper[4865]: E0216 22:47:56.414198 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 22:47:56 crc kubenswrapper[4865]: E0216 22:47:56.414306 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 22:47:56 crc kubenswrapper[4865]: E0216 22:47:56.414365 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ggbcr" podUID="0e0ca52e-7cb6-4d90-8d0b-4124cce13447" Feb 16 22:47:58 crc kubenswrapper[4865]: I0216 22:47:58.413820 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 22:47:58 crc kubenswrapper[4865]: I0216 22:47:58.413886 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 22:47:58 crc kubenswrapper[4865]: I0216 22:47:58.413831 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 22:47:58 crc kubenswrapper[4865]: I0216 22:47:58.413984 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ggbcr" Feb 16 22:47:58 crc kubenswrapper[4865]: E0216 22:47:58.414140 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 22:47:58 crc kubenswrapper[4865]: E0216 22:47:58.414325 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 22:47:58 crc kubenswrapper[4865]: E0216 22:47:58.414498 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 22:47:58 crc kubenswrapper[4865]: E0216 22:47:58.414708 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ggbcr" podUID="0e0ca52e-7cb6-4d90-8d0b-4124cce13447" Feb 16 22:48:00 crc kubenswrapper[4865]: I0216 22:48:00.414076 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 22:48:00 crc kubenswrapper[4865]: I0216 22:48:00.414076 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 22:48:00 crc kubenswrapper[4865]: I0216 22:48:00.414105 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 22:48:00 crc kubenswrapper[4865]: I0216 22:48:00.414227 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ggbcr" Feb 16 22:48:00 crc kubenswrapper[4865]: E0216 22:48:00.416176 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 22:48:00 crc kubenswrapper[4865]: E0216 22:48:00.416376 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 22:48:00 crc kubenswrapper[4865]: E0216 22:48:00.416499 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 22:48:00 crc kubenswrapper[4865]: E0216 22:48:00.416625 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ggbcr" podUID="0e0ca52e-7cb6-4d90-8d0b-4124cce13447" Feb 16 22:48:00 crc kubenswrapper[4865]: E0216 22:48:00.427374 4865 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 16 22:48:00 crc kubenswrapper[4865]: E0216 22:48:00.552206 4865 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 16 22:48:02 crc kubenswrapper[4865]: I0216 22:48:02.413468 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 22:48:02 crc kubenswrapper[4865]: I0216 22:48:02.413549 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 22:48:02 crc kubenswrapper[4865]: E0216 22:48:02.413656 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 22:48:02 crc kubenswrapper[4865]: I0216 22:48:02.413717 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 22:48:02 crc kubenswrapper[4865]: I0216 22:48:02.413718 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ggbcr" Feb 16 22:48:02 crc kubenswrapper[4865]: E0216 22:48:02.413865 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 22:48:02 crc kubenswrapper[4865]: E0216 22:48:02.414203 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 22:48:02 crc kubenswrapper[4865]: E0216 22:48:02.414392 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ggbcr" podUID="0e0ca52e-7cb6-4d90-8d0b-4124cce13447" Feb 16 22:48:04 crc kubenswrapper[4865]: I0216 22:48:04.414038 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 22:48:04 crc kubenswrapper[4865]: I0216 22:48:04.414103 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 22:48:04 crc kubenswrapper[4865]: E0216 22:48:04.414206 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 22:48:04 crc kubenswrapper[4865]: I0216 22:48:04.414301 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ggbcr" Feb 16 22:48:04 crc kubenswrapper[4865]: I0216 22:48:04.414345 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 22:48:04 crc kubenswrapper[4865]: E0216 22:48:04.414482 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 22:48:04 crc kubenswrapper[4865]: E0216 22:48:04.414923 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ggbcr" podUID="0e0ca52e-7cb6-4d90-8d0b-4124cce13447" Feb 16 22:48:04 crc kubenswrapper[4865]: E0216 22:48:04.415090 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 22:48:04 crc kubenswrapper[4865]: I0216 22:48:04.415163 4865 scope.go:117] "RemoveContainer" containerID="be9325937ac87ec492489ef34007a1e234ff7037a0f05d40bd102a85a86cc3c4" Feb 16 22:48:05 crc kubenswrapper[4865]: I0216 22:48:05.225839 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v9gjl_2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9/ovnkube-controller/3.log" Feb 16 22:48:05 crc kubenswrapper[4865]: I0216 22:48:05.229622 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" event={"ID":"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9","Type":"ContainerStarted","Data":"fa26c8d94eede6f6ed5e13a46ffa4f979642a1029ecd0b9fa73572eb3d73152d"} Feb 16 22:48:05 crc kubenswrapper[4865]: I0216 22:48:05.230386 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" Feb 16 22:48:05 crc kubenswrapper[4865]: I0216 22:48:05.267917 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" podStartSLOduration=105.267890044 podStartE2EDuration="1m45.267890044s" podCreationTimestamp="2026-02-16 22:46:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:48:05.26777234 +0000 UTC m=+125.591479341" watchObservedRunningTime="2026-02-16 22:48:05.267890044 +0000 UTC m=+125.591597015" Feb 16 22:48:05 crc kubenswrapper[4865]: I0216 22:48:05.402823 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-ggbcr"] Feb 16 22:48:05 crc kubenswrapper[4865]: I0216 22:48:05.402981 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-ggbcr" Feb 16 22:48:05 crc kubenswrapper[4865]: E0216 22:48:05.403124 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ggbcr" podUID="0e0ca52e-7cb6-4d90-8d0b-4124cce13447" Feb 16 22:48:05 crc kubenswrapper[4865]: E0216 22:48:05.553938 4865 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 22:48:06 crc kubenswrapper[4865]: I0216 22:48:06.414137 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 22:48:06 crc kubenswrapper[4865]: I0216 22:48:06.414240 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 22:48:06 crc kubenswrapper[4865]: E0216 22:48:06.414329 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 22:48:06 crc kubenswrapper[4865]: I0216 22:48:06.414243 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 22:48:06 crc kubenswrapper[4865]: E0216 22:48:06.414537 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 22:48:06 crc kubenswrapper[4865]: E0216 22:48:06.414661 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 22:48:07 crc kubenswrapper[4865]: I0216 22:48:07.413772 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ggbcr" Feb 16 22:48:07 crc kubenswrapper[4865]: E0216 22:48:07.413938 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ggbcr" podUID="0e0ca52e-7cb6-4d90-8d0b-4124cce13447" Feb 16 22:48:08 crc kubenswrapper[4865]: I0216 22:48:08.414492 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 22:48:08 crc kubenswrapper[4865]: I0216 22:48:08.414543 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 22:48:08 crc kubenswrapper[4865]: I0216 22:48:08.414546 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 22:48:08 crc kubenswrapper[4865]: E0216 22:48:08.414742 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 22:48:08 crc kubenswrapper[4865]: E0216 22:48:08.414886 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 22:48:08 crc kubenswrapper[4865]: E0216 22:48:08.415004 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 22:48:09 crc kubenswrapper[4865]: I0216 22:48:09.414230 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ggbcr" Feb 16 22:48:09 crc kubenswrapper[4865]: E0216 22:48:09.414582 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ggbcr" podUID="0e0ca52e-7cb6-4d90-8d0b-4124cce13447" Feb 16 22:48:09 crc kubenswrapper[4865]: I0216 22:48:09.414707 4865 scope.go:117] "RemoveContainer" containerID="74e1013d02cb01d60662d6619c25c7a40de7bd7bde7f0239007c54dba55879cb" Feb 16 22:48:10 crc kubenswrapper[4865]: I0216 22:48:10.256665 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tqmsq_518e6107-6873-4bd2-86a6-e422763483ec/kube-multus/1.log" Feb 16 22:48:10 crc kubenswrapper[4865]: I0216 22:48:10.256738 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-tqmsq" event={"ID":"518e6107-6873-4bd2-86a6-e422763483ec","Type":"ContainerStarted","Data":"02963b00310cc6f9ac823cd9173971a4405f25fc58cfcf66177c33764842acd2"} Feb 16 22:48:10 crc kubenswrapper[4865]: I0216 22:48:10.413431 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 22:48:10 crc kubenswrapper[4865]: I0216 22:48:10.413394 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 22:48:10 crc kubenswrapper[4865]: I0216 22:48:10.413428 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 22:48:10 crc kubenswrapper[4865]: E0216 22:48:10.415721 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 22:48:10 crc kubenswrapper[4865]: E0216 22:48:10.415856 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 22:48:10 crc kubenswrapper[4865]: E0216 22:48:10.416115 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 22:48:10 crc kubenswrapper[4865]: E0216 22:48:10.555341 4865 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 22:48:11 crc kubenswrapper[4865]: I0216 22:48:11.414247 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-ggbcr" Feb 16 22:48:11 crc kubenswrapper[4865]: E0216 22:48:11.414482 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ggbcr" podUID="0e0ca52e-7cb6-4d90-8d0b-4124cce13447" Feb 16 22:48:12 crc kubenswrapper[4865]: I0216 22:48:12.414366 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 22:48:12 crc kubenswrapper[4865]: I0216 22:48:12.414433 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 22:48:12 crc kubenswrapper[4865]: I0216 22:48:12.414446 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 22:48:12 crc kubenswrapper[4865]: E0216 22:48:12.414596 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 22:48:12 crc kubenswrapper[4865]: E0216 22:48:12.414698 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 22:48:12 crc kubenswrapper[4865]: E0216 22:48:12.414865 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 22:48:13 crc kubenswrapper[4865]: I0216 22:48:13.414219 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ggbcr" Feb 16 22:48:13 crc kubenswrapper[4865]: E0216 22:48:13.414487 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ggbcr" podUID="0e0ca52e-7cb6-4d90-8d0b-4124cce13447" Feb 16 22:48:14 crc kubenswrapper[4865]: I0216 22:48:14.414134 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 22:48:14 crc kubenswrapper[4865]: I0216 22:48:14.414200 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 22:48:14 crc kubenswrapper[4865]: I0216 22:48:14.414133 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 22:48:14 crc kubenswrapper[4865]: E0216 22:48:14.414348 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 22:48:14 crc kubenswrapper[4865]: E0216 22:48:14.414447 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 22:48:14 crc kubenswrapper[4865]: E0216 22:48:14.414518 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 22:48:15 crc kubenswrapper[4865]: I0216 22:48:15.414418 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-ggbcr" Feb 16 22:48:15 crc kubenswrapper[4865]: E0216 22:48:15.414634 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ggbcr" podUID="0e0ca52e-7cb6-4d90-8d0b-4124cce13447" Feb 16 22:48:16 crc kubenswrapper[4865]: I0216 22:48:16.414386 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 22:48:16 crc kubenswrapper[4865]: I0216 22:48:16.414386 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 22:48:16 crc kubenswrapper[4865]: I0216 22:48:16.414566 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 22:48:16 crc kubenswrapper[4865]: I0216 22:48:16.417399 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 16 22:48:16 crc kubenswrapper[4865]: I0216 22:48:16.418601 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 16 22:48:16 crc kubenswrapper[4865]: I0216 22:48:16.418859 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 16 22:48:16 crc kubenswrapper[4865]: I0216 22:48:16.419589 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 16 22:48:17 crc kubenswrapper[4865]: I0216 22:48:17.414165 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ggbcr" Feb 16 22:48:17 crc kubenswrapper[4865]: I0216 22:48:17.416850 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 16 22:48:17 crc kubenswrapper[4865]: I0216 22:48:17.418392 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.745676 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.807404 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-b8dbg"] Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.808203 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t7gvl"] Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.808722 
4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t7gvl" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.809383 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8dbg" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.814465 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-xcchk"] Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.815526 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-xcchk" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.816633 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.816987 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-5m56v"] Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.817869 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-5m56v" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.818197 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bhtg8"] Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.816997 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.822513 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.831080 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vgbjc"] Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.831677 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-b5kw7"] Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.832051 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-tztwm"] Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.832463 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-676jr"] Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.832528 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bhtg8" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.833005 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-89cf9"] Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.833719 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-669dd"] Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.834008 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-b5kw7" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.834243 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-669dd" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.834905 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tztwm" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.835333 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-676jr" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.835756 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-89cf9" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.837218 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vgbjc" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.839045 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.839773 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.840019 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.840204 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.840429 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.840630 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.840863 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.841060 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.841256 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.841371 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-f6stv"] Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.841590 4865 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.841821 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.842188 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-pzmrk"] Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.842305 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-f6stv" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.842823 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-kqnz8"] Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.843424 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kqnz8" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.843552 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-pzmrk" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.849056 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.849513 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.850045 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.853377 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-t624g"] Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.853993 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-t624g" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.854018 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-675st"] Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.854732 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-675st" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.858852 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-v8bqq"] Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.859206 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-b76ph"] Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.861374 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.861444 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.861807 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.861857 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.861374 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.862112 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.862126 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.862327 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 
22:48:19.862623 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.862785 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.863176 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.863724 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.863899 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b00a412f-f2d5-4ad5-a9e5-67efc9e40682-audit-dir\") pod \"apiserver-7bbb656c7d-b8dbg\" (UID: \"b00a412f-f2d5-4ad5-a9e5-67efc9e40682\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8dbg" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.863935 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1971f3bb-0512-467d-b440-1330bcf97c59-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-t7gvl\" (UID: \"1971f3bb-0512-467d-b440-1330bcf97c59\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t7gvl" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.863966 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/566ba776-350d-4994-948d-bbbf37ae5ddc-service-ca\") pod \"console-f9d7485db-5m56v\" (UID: \"566ba776-350d-4994-948d-bbbf37ae5ddc\") " pod="openshift-console/console-f9d7485db-5m56v" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.863990 
4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9fbcd436-7bcc-4873-afac-24404c1acc95-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-f6stv\" (UID: \"9fbcd436-7bcc-4873-afac-24404c1acc95\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-f6stv" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.864012 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rs8gk\" (UniqueName: \"kubernetes.io/projected/c4592f72-2b39-47bf-beed-e53bf3865b22-kube-api-access-rs8gk\") pod \"machine-api-operator-5694c8668f-676jr\" (UID: \"c4592f72-2b39-47bf-beed-e53bf3865b22\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-676jr" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.864038 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a0e99571-69de-43a8-9136-94d455e348c7-etcd-client\") pod \"apiserver-76f77b778f-89cf9\" (UID: \"a0e99571-69de-43a8-9136-94d455e348c7\") " pod="openshift-apiserver/apiserver-76f77b778f-89cf9" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.864063 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0e99571-69de-43a8-9136-94d455e348c7-serving-cert\") pod \"apiserver-76f77b778f-89cf9\" (UID: \"a0e99571-69de-43a8-9136-94d455e348c7\") " pod="openshift-apiserver/apiserver-76f77b778f-89cf9" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.864089 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ae2e7ca-329c-4bd8-b003-05fa0d7f60fd-serving-cert\") pod \"etcd-operator-b45778765-b5kw7\" (UID: 
\"8ae2e7ca-329c-4bd8-b003-05fa0d7f60fd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b5kw7" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.864111 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/d673e2eb-0082-4b86-91c5-5d83376c2275-available-featuregates\") pod \"openshift-config-operator-7777fb866f-xcchk\" (UID: \"d673e2eb-0082-4b86-91c5-5d83376c2275\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-xcchk" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.864136 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cz7wk\" (UniqueName: \"kubernetes.io/projected/38a65176-2b3a-47d7-ae00-8d625d7e3686-kube-api-access-cz7wk\") pod \"machine-approver-56656f9798-kqnz8\" (UID: \"38a65176-2b3a-47d7-ae00-8d625d7e3686\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kqnz8" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.864273 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxmkv\" (UniqueName: \"kubernetes.io/projected/3942bb88-5f12-4c66-b471-1e5ef2011792-kube-api-access-zxmkv\") pod \"cluster-samples-operator-665b6dd947-vgbjc\" (UID: \"3942bb88-5f12-4c66-b471-1e5ef2011792\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vgbjc" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.864334 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/3942bb88-5f12-4c66-b471-1e5ef2011792-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-vgbjc\" (UID: \"3942bb88-5f12-4c66-b471-1e5ef2011792\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vgbjc" Feb 16 22:48:19 crc 
kubenswrapper[4865]: I0216 22:48:19.864381 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ae2e7ca-329c-4bd8-b003-05fa0d7f60fd-config\") pod \"etcd-operator-b45778765-b5kw7\" (UID: \"8ae2e7ca-329c-4bd8-b003-05fa0d7f60fd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b5kw7" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.864414 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7xsf\" (UniqueName: \"kubernetes.io/projected/679f6150-3ecb-437d-81bc-9877ad5c3cc4-kube-api-access-n7xsf\") pod \"route-controller-manager-6576b87f9c-tztwm\" (UID: \"679f6150-3ecb-437d-81bc-9877ad5c3cc4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tztwm" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.864442 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-v8bqq" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.864456 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e04fc9e-f926-473e-bdf6-f59166ab52f0-config\") pod \"controller-manager-879f6c89f-pzmrk\" (UID: \"4e04fc9e-f926-473e-bdf6-f59166ab52f0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pzmrk" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.864532 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e04fc9e-f926-473e-bdf6-f59166ab52f0-serving-cert\") pod \"controller-manager-879f6c89f-pzmrk\" (UID: \"4e04fc9e-f926-473e-bdf6-f59166ab52f0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pzmrk" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.864568 4865 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe97f069-4baa-4f14-a478-75fe5cccefa0-service-ca-bundle\") pod \"authentication-operator-69f744f599-669dd\" (UID: \"fe97f069-4baa-4f14-a478-75fe5cccefa0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-669dd" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.864632 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/566ba776-350d-4994-948d-bbbf37ae5ddc-console-serving-cert\") pod \"console-f9d7485db-5m56v\" (UID: \"566ba776-350d-4994-948d-bbbf37ae5ddc\") " pod="openshift-console/console-f9d7485db-5m56v" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.864668 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/566ba776-350d-4994-948d-bbbf37ae5ddc-trusted-ca-bundle\") pod \"console-f9d7485db-5m56v\" (UID: \"566ba776-350d-4994-948d-bbbf37ae5ddc\") " pod="openshift-console/console-f9d7485db-5m56v" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.864715 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/566ba776-350d-4994-948d-bbbf37ae5ddc-console-config\") pod \"console-f9d7485db-5m56v\" (UID: \"566ba776-350d-4994-948d-bbbf37ae5ddc\") " pod="openshift-console/console-f9d7485db-5m56v" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.864753 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d673e2eb-0082-4b86-91c5-5d83376c2275-serving-cert\") pod \"openshift-config-operator-7777fb866f-xcchk\" (UID: \"d673e2eb-0082-4b86-91c5-5d83376c2275\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-xcchk" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.864785 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/532d752a-1a19-4674-97c3-d9030ad9537c-config\") pod \"openshift-apiserver-operator-796bbdcf4f-bhtg8\" (UID: \"532d752a-1a19-4674-97c3-d9030ad9537c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bhtg8" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.864823 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/679f6150-3ecb-437d-81bc-9877ad5c3cc4-serving-cert\") pod \"route-controller-manager-6576b87f9c-tztwm\" (UID: \"679f6150-3ecb-437d-81bc-9877ad5c3cc4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tztwm" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.864871 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b00a412f-f2d5-4ad5-a9e5-67efc9e40682-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-b8dbg\" (UID: \"b00a412f-f2d5-4ad5-a9e5-67efc9e40682\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8dbg" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.864913 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrht6\" (UniqueName: \"kubernetes.io/projected/532d752a-1a19-4674-97c3-d9030ad9537c-kube-api-access-wrht6\") pod \"openshift-apiserver-operator-796bbdcf4f-bhtg8\" (UID: \"532d752a-1a19-4674-97c3-d9030ad9537c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bhtg8" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.864944 4865 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/8ae2e7ca-329c-4bd8-b003-05fa0d7f60fd-etcd-service-ca\") pod \"etcd-operator-b45778765-b5kw7\" (UID: \"8ae2e7ca-329c-4bd8-b003-05fa0d7f60fd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b5kw7" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.865010 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0e99571-69de-43a8-9136-94d455e348c7-config\") pod \"apiserver-76f77b778f-89cf9\" (UID: \"a0e99571-69de-43a8-9136-94d455e348c7\") " pod="openshift-apiserver/apiserver-76f77b778f-89cf9" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.865047 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a0e99571-69de-43a8-9136-94d455e348c7-image-import-ca\") pod \"apiserver-76f77b778f-89cf9\" (UID: \"a0e99571-69de-43a8-9136-94d455e348c7\") " pod="openshift-apiserver/apiserver-76f77b778f-89cf9" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.865096 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/566ba776-350d-4994-948d-bbbf37ae5ddc-console-oauth-config\") pod \"console-f9d7485db-5m56v\" (UID: \"566ba776-350d-4994-948d-bbbf37ae5ddc\") " pod="openshift-console/console-f9d7485db-5m56v" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.865130 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swq4h\" (UniqueName: \"kubernetes.io/projected/566ba776-350d-4994-948d-bbbf37ae5ddc-kube-api-access-swq4h\") pod \"console-f9d7485db-5m56v\" (UID: \"566ba776-350d-4994-948d-bbbf37ae5ddc\") " pod="openshift-console/console-f9d7485db-5m56v" Feb 16 22:48:19 
crc kubenswrapper[4865]: I0216 22:48:19.865162 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8ae2e7ca-329c-4bd8-b003-05fa0d7f60fd-etcd-client\") pod \"etcd-operator-b45778765-b5kw7\" (UID: \"8ae2e7ca-329c-4bd8-b003-05fa0d7f60fd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b5kw7" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.865213 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fdt9\" (UniqueName: \"kubernetes.io/projected/8ae2e7ca-329c-4bd8-b003-05fa0d7f60fd-kube-api-access-2fdt9\") pod \"etcd-operator-b45778765-b5kw7\" (UID: \"8ae2e7ca-329c-4bd8-b003-05fa0d7f60fd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b5kw7" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.865249 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a0e99571-69de-43a8-9136-94d455e348c7-trusted-ca-bundle\") pod \"apiserver-76f77b778f-89cf9\" (UID: \"a0e99571-69de-43a8-9136-94d455e348c7\") " pod="openshift-apiserver/apiserver-76f77b778f-89cf9" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.865306 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fbcd436-7bcc-4873-afac-24404c1acc95-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-f6stv\" (UID: \"9fbcd436-7bcc-4873-afac-24404c1acc95\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-f6stv" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.865344 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe97f069-4baa-4f14-a478-75fe5cccefa0-serving-cert\") 
pod \"authentication-operator-69f744f599-669dd\" (UID: \"fe97f069-4baa-4f14-a478-75fe5cccefa0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-669dd" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.865396 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b00a412f-f2d5-4ad5-a9e5-67efc9e40682-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-b8dbg\" (UID: \"b00a412f-f2d5-4ad5-a9e5-67efc9e40682\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8dbg" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.865436 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1971f3bb-0512-467d-b440-1330bcf97c59-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-t7gvl\" (UID: \"1971f3bb-0512-467d-b440-1330bcf97c59\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t7gvl" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.865464 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/8ae2e7ca-329c-4bd8-b003-05fa0d7f60fd-etcd-ca\") pod \"etcd-operator-b45778765-b5kw7\" (UID: \"8ae2e7ca-329c-4bd8-b003-05fa0d7f60fd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b5kw7" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.865492 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c4592f72-2b39-47bf-beed-e53bf3865b22-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-676jr\" (UID: \"c4592f72-2b39-47bf-beed-e53bf3865b22\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-676jr" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.865522 4865 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqclx\" (UniqueName: \"kubernetes.io/projected/fe97f069-4baa-4f14-a478-75fe5cccefa0-kube-api-access-lqclx\") pod \"authentication-operator-69f744f599-669dd\" (UID: \"fe97f069-4baa-4f14-a478-75fe5cccefa0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-669dd" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.865633 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.865858 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.865893 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.875035 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.875551 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.876688 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.865633 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b00a412f-f2d5-4ad5-a9e5-67efc9e40682-etcd-client\") pod \"apiserver-7bbb656c7d-b8dbg\" (UID: \"b00a412f-f2d5-4ad5-a9e5-67efc9e40682\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8dbg" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.877049 4865 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4e04fc9e-f926-473e-bdf6-f59166ab52f0-client-ca\") pod \"controller-manager-879f6c89f-pzmrk\" (UID: \"4e04fc9e-f926-473e-bdf6-f59166ab52f0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pzmrk" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.877113 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/566ba776-350d-4994-948d-bbbf37ae5ddc-oauth-serving-cert\") pod \"console-f9d7485db-5m56v\" (UID: \"566ba776-350d-4994-948d-bbbf37ae5ddc\") " pod="openshift-console/console-f9d7485db-5m56v" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.877162 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38a65176-2b3a-47d7-ae00-8d625d7e3686-config\") pod \"machine-approver-56656f9798-kqnz8\" (UID: \"38a65176-2b3a-47d7-ae00-8d625d7e3686\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kqnz8" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.877200 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d68pv\" (UniqueName: \"kubernetes.io/projected/b00a412f-f2d5-4ad5-a9e5-67efc9e40682-kube-api-access-d68pv\") pod \"apiserver-7bbb656c7d-b8dbg\" (UID: \"b00a412f-f2d5-4ad5-a9e5-67efc9e40682\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8dbg" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.877240 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b00a412f-f2d5-4ad5-a9e5-67efc9e40682-encryption-config\") pod \"apiserver-7bbb656c7d-b8dbg\" (UID: \"b00a412f-f2d5-4ad5-a9e5-67efc9e40682\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8dbg" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.877304 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4592f72-2b39-47bf-beed-e53bf3865b22-config\") pod \"machine-api-operator-5694c8668f-676jr\" (UID: \"c4592f72-2b39-47bf-beed-e53bf3865b22\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-676jr" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.877338 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe97f069-4baa-4f14-a478-75fe5cccefa0-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-669dd\" (UID: \"fe97f069-4baa-4f14-a478-75fe5cccefa0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-669dd" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.877369 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b00a412f-f2d5-4ad5-a9e5-67efc9e40682-serving-cert\") pod \"apiserver-7bbb656c7d-b8dbg\" (UID: \"b00a412f-f2d5-4ad5-a9e5-67efc9e40682\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8dbg" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.877397 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a0e99571-69de-43a8-9136-94d455e348c7-node-pullsecrets\") pod \"apiserver-76f77b778f-89cf9\" (UID: \"a0e99571-69de-43a8-9136-94d455e348c7\") " pod="openshift-apiserver/apiserver-76f77b778f-89cf9" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.877613 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/38a65176-2b3a-47d7-ae00-8d625d7e3686-auth-proxy-config\") pod \"machine-approver-56656f9798-kqnz8\" (UID: \"38a65176-2b3a-47d7-ae00-8d625d7e3686\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kqnz8" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.877689 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6m7t4\" (UniqueName: \"kubernetes.io/projected/1971f3bb-0512-467d-b440-1330bcf97c59-kube-api-access-6m7t4\") pod \"cluster-image-registry-operator-dc59b4c8b-t7gvl\" (UID: \"1971f3bb-0512-467d-b440-1330bcf97c59\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t7gvl" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.877898 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/532d752a-1a19-4674-97c3-d9030ad9537c-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-bhtg8\" (UID: \"532d752a-1a19-4674-97c3-d9030ad9537c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bhtg8" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.877975 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a0e99571-69de-43a8-9136-94d455e348c7-etcd-serving-ca\") pod \"apiserver-76f77b778f-89cf9\" (UID: \"a0e99571-69de-43a8-9136-94d455e348c7\") " pod="openshift-apiserver/apiserver-76f77b778f-89cf9" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.878295 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjcpg\" (UniqueName: \"kubernetes.io/projected/4e04fc9e-f926-473e-bdf6-f59166ab52f0-kube-api-access-xjcpg\") pod \"controller-manager-879f6c89f-pzmrk\" (UID: \"4e04fc9e-f926-473e-bdf6-f59166ab52f0\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-pzmrk" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.878568 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4e04fc9e-f926-473e-bdf6-f59166ab52f0-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-pzmrk\" (UID: \"4e04fc9e-f926-473e-bdf6-f59166ab52f0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pzmrk" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.881915 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/1971f3bb-0512-467d-b440-1330bcf97c59-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-t7gvl\" (UID: \"1971f3bb-0512-467d-b440-1330bcf97c59\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t7gvl" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.882229 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c4592f72-2b39-47bf-beed-e53bf3865b22-images\") pod \"machine-api-operator-5694c8668f-676jr\" (UID: \"c4592f72-2b39-47bf-beed-e53bf3865b22\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-676jr" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.882379 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe97f069-4baa-4f14-a478-75fe5cccefa0-config\") pod \"authentication-operator-69f744f599-669dd\" (UID: \"fe97f069-4baa-4f14-a478-75fe5cccefa0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-669dd" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.882651 4865 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b00a412f-f2d5-4ad5-a9e5-67efc9e40682-audit-policies\") pod \"apiserver-7bbb656c7d-b8dbg\" (UID: \"b00a412f-f2d5-4ad5-a9e5-67efc9e40682\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8dbg" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.883027 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/679f6150-3ecb-437d-81bc-9877ad5c3cc4-client-ca\") pod \"route-controller-manager-6576b87f9c-tztwm\" (UID: \"679f6150-3ecb-437d-81bc-9877ad5c3cc4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tztwm" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.883348 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ls5tt"] Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.883686 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dg5lx\" (UniqueName: \"kubernetes.io/projected/a0e99571-69de-43a8-9136-94d455e348c7-kube-api-access-dg5lx\") pod \"apiserver-76f77b778f-89cf9\" (UID: \"a0e99571-69de-43a8-9136-94d455e348c7\") " pod="openshift-apiserver/apiserver-76f77b778f-89cf9" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.883992 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/38a65176-2b3a-47d7-ae00-8d625d7e3686-machine-approver-tls\") pod \"machine-approver-56656f9798-kqnz8\" (UID: \"38a65176-2b3a-47d7-ae00-8d625d7e3686\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kqnz8" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.884186 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a0e99571-69de-43a8-9136-94d455e348c7-audit-dir\") pod \"apiserver-76f77b778f-89cf9\" (UID: \"a0e99571-69de-43a8-9136-94d455e348c7\") " pod="openshift-apiserver/apiserver-76f77b778f-89cf9" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.884447 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a0e99571-69de-43a8-9136-94d455e348c7-audit\") pod \"apiserver-76f77b778f-89cf9\" (UID: \"a0e99571-69de-43a8-9136-94d455e348c7\") " pod="openshift-apiserver/apiserver-76f77b778f-89cf9" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.884732 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a0e99571-69de-43a8-9136-94d455e348c7-encryption-config\") pod \"apiserver-76f77b778f-89cf9\" (UID: \"a0e99571-69de-43a8-9136-94d455e348c7\") " pod="openshift-apiserver/apiserver-76f77b778f-89cf9" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.885003 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8d4k\" (UniqueName: \"kubernetes.io/projected/9fbcd436-7bcc-4873-afac-24404c1acc95-kube-api-access-t8d4k\") pod \"openshift-controller-manager-operator-756b6f6bc6-f6stv\" (UID: \"9fbcd436-7bcc-4873-afac-24404c1acc95\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-f6stv" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.885243 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/679f6150-3ecb-437d-81bc-9877ad5c3cc4-config\") pod \"route-controller-manager-6576b87f9c-tztwm\" (UID: \"679f6150-3ecb-437d-81bc-9877ad5c3cc4\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tztwm" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.885511 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f97mp\" (UniqueName: \"kubernetes.io/projected/d673e2eb-0082-4b86-91c5-5d83376c2275-kube-api-access-f97mp\") pod \"openshift-config-operator-7777fb866f-xcchk\" (UID: \"d673e2eb-0082-4b86-91c5-5d83376c2275\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-xcchk" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.887983 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.891519 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.898858 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.900947 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.901389 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.901810 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.902024 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.902134 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 16 
22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.902210 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.902724 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.903204 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.903378 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-b76ph" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.903621 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.903674 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-d2vf7"] Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.903831 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.904189 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8tpfm"] Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.904934 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.905197 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ls5tt" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.905950 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.906120 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.906307 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.906633 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.906335 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.906821 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.906850 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.906942 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.906984 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.907024 4865 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.907121 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.907748 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.908024 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.908060 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.908183 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.908271 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.908375 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.908491 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.912031 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.912908 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 16 22:48:19 crc kubenswrapper[4865]: 
I0216 22:48:19.913018 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.913117 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.913204 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.913314 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.913874 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.914495 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.915056 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.915225 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8tpfm" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.915476 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-d2vf7" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.915604 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.915928 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.916134 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.916264 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.916605 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.916727 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.915148 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-dqspk"] Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.916853 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.916960 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.917052 4865 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.917140 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.917241 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.919577 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.920050 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.921319 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.923441 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.923671 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.923823 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.924573 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.924961 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 16 22:48:19 crc kubenswrapper[4865]: 
I0216 22:48:19.925206 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.934648 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.936808 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.937489 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.938326 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-d87nj"] Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.938923 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.939269 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dqspk" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.939672 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.956653 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qcl4t"] Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.956772 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-d87nj" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.987134 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.987808 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-f7dqg"] Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.988569 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-m696v"] Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.989872 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.990169 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.990554 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qcl4t" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.990893 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f7dqg" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.991096 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.991483 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/679f6150-3ecb-437d-81bc-9877ad5c3cc4-config\") pod \"route-controller-manager-6576b87f9c-tztwm\" (UID: \"679f6150-3ecb-437d-81bc-9877ad5c3cc4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tztwm" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.991518 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f97mp\" (UniqueName: \"kubernetes.io/projected/d673e2eb-0082-4b86-91c5-5d83376c2275-kube-api-access-f97mp\") pod \"openshift-config-operator-7777fb866f-xcchk\" (UID: \"d673e2eb-0082-4b86-91c5-5d83376c2275\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-xcchk" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.991541 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a0e99571-69de-43a8-9136-94d455e348c7-audit\") pod \"apiserver-76f77b778f-89cf9\" (UID: \"a0e99571-69de-43a8-9136-94d455e348c7\") " pod="openshift-apiserver/apiserver-76f77b778f-89cf9" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.991562 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a0e99571-69de-43a8-9136-94d455e348c7-encryption-config\") pod \"apiserver-76f77b778f-89cf9\" (UID: \"a0e99571-69de-43a8-9136-94d455e348c7\") " pod="openshift-apiserver/apiserver-76f77b778f-89cf9" Feb 16 22:48:19 crc kubenswrapper[4865]: 
I0216 22:48:19.991579 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8d4k\" (UniqueName: \"kubernetes.io/projected/9fbcd436-7bcc-4873-afac-24404c1acc95-kube-api-access-t8d4k\") pod \"openshift-controller-manager-operator-756b6f6bc6-f6stv\" (UID: \"9fbcd436-7bcc-4873-afac-24404c1acc95\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-f6stv" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.991601 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b00a412f-f2d5-4ad5-a9e5-67efc9e40682-audit-dir\") pod \"apiserver-7bbb656c7d-b8dbg\" (UID: \"b00a412f-f2d5-4ad5-a9e5-67efc9e40682\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8dbg" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.991618 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1971f3bb-0512-467d-b440-1330bcf97c59-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-t7gvl\" (UID: \"1971f3bb-0512-467d-b440-1330bcf97c59\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t7gvl" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.991638 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rs8gk\" (UniqueName: \"kubernetes.io/projected/c4592f72-2b39-47bf-beed-e53bf3865b22-kube-api-access-rs8gk\") pod \"machine-api-operator-5694c8668f-676jr\" (UID: \"c4592f72-2b39-47bf-beed-e53bf3865b22\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-676jr" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.991657 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/566ba776-350d-4994-948d-bbbf37ae5ddc-service-ca\") pod \"console-f9d7485db-5m56v\" (UID: 
\"566ba776-350d-4994-948d-bbbf37ae5ddc\") " pod="openshift-console/console-f9d7485db-5m56v" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.991677 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9fbcd436-7bcc-4873-afac-24404c1acc95-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-f6stv\" (UID: \"9fbcd436-7bcc-4873-afac-24404c1acc95\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-f6stv" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.991695 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a0e99571-69de-43a8-9136-94d455e348c7-etcd-client\") pod \"apiserver-76f77b778f-89cf9\" (UID: \"a0e99571-69de-43a8-9136-94d455e348c7\") " pod="openshift-apiserver/apiserver-76f77b778f-89cf9" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.991709 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0e99571-69de-43a8-9136-94d455e348c7-serving-cert\") pod \"apiserver-76f77b778f-89cf9\" (UID: \"a0e99571-69de-43a8-9136-94d455e348c7\") " pod="openshift-apiserver/apiserver-76f77b778f-89cf9" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.991729 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ae2e7ca-329c-4bd8-b003-05fa0d7f60fd-serving-cert\") pod \"etcd-operator-b45778765-b5kw7\" (UID: \"8ae2e7ca-329c-4bd8-b003-05fa0d7f60fd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b5kw7" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.991746 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/d673e2eb-0082-4b86-91c5-5d83376c2275-available-featuregates\") 
pod \"openshift-config-operator-7777fb866f-xcchk\" (UID: \"d673e2eb-0082-4b86-91c5-5d83376c2275\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-xcchk" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.991760 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.991776 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxmkv\" (UniqueName: \"kubernetes.io/projected/3942bb88-5f12-4c66-b471-1e5ef2011792-kube-api-access-zxmkv\") pod \"cluster-samples-operator-665b6dd947-vgbjc\" (UID: \"3942bb88-5f12-4c66-b471-1e5ef2011792\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vgbjc" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.991946 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cz7wk\" (UniqueName: \"kubernetes.io/projected/38a65176-2b3a-47d7-ae00-8d625d7e3686-kube-api-access-cz7wk\") pod \"machine-approver-56656f9798-kqnz8\" (UID: \"38a65176-2b3a-47d7-ae00-8d625d7e3686\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kqnz8" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.991985 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/046ad19a-b9d5-4b51-ab15-dd974861f490-images\") pod \"machine-config-operator-74547568cd-dqspk\" (UID: \"046ad19a-b9d5-4b51-ab15-dd974861f490\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dqspk" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.992008 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e04fc9e-f926-473e-bdf6-f59166ab52f0-config\") pod \"controller-manager-879f6c89f-pzmrk\" (UID: 
\"4e04fc9e-f926-473e-bdf6-f59166ab52f0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pzmrk" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.992039 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/3942bb88-5f12-4c66-b471-1e5ef2011792-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-vgbjc\" (UID: \"3942bb88-5f12-4c66-b471-1e5ef2011792\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vgbjc" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.992058 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ae2e7ca-329c-4bd8-b003-05fa0d7f60fd-config\") pod \"etcd-operator-b45778765-b5kw7\" (UID: \"8ae2e7ca-329c-4bd8-b003-05fa0d7f60fd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b5kw7" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.992076 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7xsf\" (UniqueName: \"kubernetes.io/projected/679f6150-3ecb-437d-81bc-9877ad5c3cc4-kube-api-access-n7xsf\") pod \"route-controller-manager-6576b87f9c-tztwm\" (UID: \"679f6150-3ecb-437d-81bc-9877ad5c3cc4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tztwm" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.992095 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e04fc9e-f926-473e-bdf6-f59166ab52f0-serving-cert\") pod \"controller-manager-879f6c89f-pzmrk\" (UID: \"4e04fc9e-f926-473e-bdf6-f59166ab52f0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pzmrk" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.992131 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" 
(UniqueName: \"kubernetes.io/secret/566ba776-350d-4994-948d-bbbf37ae5ddc-console-serving-cert\") pod \"console-f9d7485db-5m56v\" (UID: \"566ba776-350d-4994-948d-bbbf37ae5ddc\") " pod="openshift-console/console-f9d7485db-5m56v" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.992149 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/566ba776-350d-4994-948d-bbbf37ae5ddc-trusted-ca-bundle\") pod \"console-f9d7485db-5m56v\" (UID: \"566ba776-350d-4994-948d-bbbf37ae5ddc\") " pod="openshift-console/console-f9d7485db-5m56v" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.992169 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe97f069-4baa-4f14-a478-75fe5cccefa0-service-ca-bundle\") pod \"authentication-operator-69f744f599-669dd\" (UID: \"fe97f069-4baa-4f14-a478-75fe5cccefa0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-669dd" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.992195 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/566ba776-350d-4994-948d-bbbf37ae5ddc-console-config\") pod \"console-f9d7485db-5m56v\" (UID: \"566ba776-350d-4994-948d-bbbf37ae5ddc\") " pod="openshift-console/console-f9d7485db-5m56v" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.992226 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b00a412f-f2d5-4ad5-a9e5-67efc9e40682-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-b8dbg\" (UID: \"b00a412f-f2d5-4ad5-a9e5-67efc9e40682\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8dbg" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.992248 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/d673e2eb-0082-4b86-91c5-5d83376c2275-serving-cert\") pod \"openshift-config-operator-7777fb866f-xcchk\" (UID: \"d673e2eb-0082-4b86-91c5-5d83376c2275\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-xcchk" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.992296 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/532d752a-1a19-4674-97c3-d9030ad9537c-config\") pod \"openshift-apiserver-operator-796bbdcf4f-bhtg8\" (UID: \"532d752a-1a19-4674-97c3-d9030ad9537c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bhtg8" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.992320 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/679f6150-3ecb-437d-81bc-9877ad5c3cc4-serving-cert\") pod \"route-controller-manager-6576b87f9c-tztwm\" (UID: \"679f6150-3ecb-437d-81bc-9877ad5c3cc4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tztwm" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.992347 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrht6\" (UniqueName: \"kubernetes.io/projected/532d752a-1a19-4674-97c3-d9030ad9537c-kube-api-access-wrht6\") pod \"openshift-apiserver-operator-796bbdcf4f-bhtg8\" (UID: \"532d752a-1a19-4674-97c3-d9030ad9537c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bhtg8" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.992365 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/8ae2e7ca-329c-4bd8-b003-05fa0d7f60fd-etcd-service-ca\") pod \"etcd-operator-b45778765-b5kw7\" (UID: \"8ae2e7ca-329c-4bd8-b003-05fa0d7f60fd\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-b5kw7" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.992388 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/566ba776-350d-4994-948d-bbbf37ae5ddc-console-oauth-config\") pod \"console-f9d7485db-5m56v\" (UID: \"566ba776-350d-4994-948d-bbbf37ae5ddc\") " pod="openshift-console/console-f9d7485db-5m56v" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.992406 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0e99571-69de-43a8-9136-94d455e348c7-config\") pod \"apiserver-76f77b778f-89cf9\" (UID: \"a0e99571-69de-43a8-9136-94d455e348c7\") " pod="openshift-apiserver/apiserver-76f77b778f-89cf9" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.992425 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a0e99571-69de-43a8-9136-94d455e348c7-image-import-ca\") pod \"apiserver-76f77b778f-89cf9\" (UID: \"a0e99571-69de-43a8-9136-94d455e348c7\") " pod="openshift-apiserver/apiserver-76f77b778f-89cf9" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.992444 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2e7341df-56ba-4c50-8ddf-68a54b11340b-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8tpfm\" (UID: \"2e7341df-56ba-4c50-8ddf-68a54b11340b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8tpfm" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.992476 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swq4h\" (UniqueName: \"kubernetes.io/projected/566ba776-350d-4994-948d-bbbf37ae5ddc-kube-api-access-swq4h\") pod 
\"console-f9d7485db-5m56v\" (UID: \"566ba776-350d-4994-948d-bbbf37ae5ddc\") " pod="openshift-console/console-f9d7485db-5m56v" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.992503 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8ae2e7ca-329c-4bd8-b003-05fa0d7f60fd-etcd-client\") pod \"etcd-operator-b45778765-b5kw7\" (UID: \"8ae2e7ca-329c-4bd8-b003-05fa0d7f60fd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b5kw7" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.992522 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fdt9\" (UniqueName: \"kubernetes.io/projected/8ae2e7ca-329c-4bd8-b003-05fa0d7f60fd-kube-api-access-2fdt9\") pod \"etcd-operator-b45778765-b5kw7\" (UID: \"8ae2e7ca-329c-4bd8-b003-05fa0d7f60fd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b5kw7" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.992552 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b00a412f-f2d5-4ad5-a9e5-67efc9e40682-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-b8dbg\" (UID: \"b00a412f-f2d5-4ad5-a9e5-67efc9e40682\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8dbg" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.992576 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1971f3bb-0512-467d-b440-1330bcf97c59-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-t7gvl\" (UID: \"1971f3bb-0512-467d-b440-1330bcf97c59\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t7gvl" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.992594 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/a0e99571-69de-43a8-9136-94d455e348c7-trusted-ca-bundle\") pod \"apiserver-76f77b778f-89cf9\" (UID: \"a0e99571-69de-43a8-9136-94d455e348c7\") " pod="openshift-apiserver/apiserver-76f77b778f-89cf9" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.992613 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fbcd436-7bcc-4873-afac-24404c1acc95-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-f6stv\" (UID: \"9fbcd436-7bcc-4873-afac-24404c1acc95\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-f6stv" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.992630 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe97f069-4baa-4f14-a478-75fe5cccefa0-serving-cert\") pod \"authentication-operator-69f744f599-669dd\" (UID: \"fe97f069-4baa-4f14-a478-75fe5cccefa0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-669dd" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.992659 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b00a412f-f2d5-4ad5-a9e5-67efc9e40682-etcd-client\") pod \"apiserver-7bbb656c7d-b8dbg\" (UID: \"b00a412f-f2d5-4ad5-a9e5-67efc9e40682\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8dbg" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.992674 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/8ae2e7ca-329c-4bd8-b003-05fa0d7f60fd-etcd-ca\") pod \"etcd-operator-b45778765-b5kw7\" (UID: \"8ae2e7ca-329c-4bd8-b003-05fa0d7f60fd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b5kw7" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.992694 4865 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c4592f72-2b39-47bf-beed-e53bf3865b22-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-676jr\" (UID: \"c4592f72-2b39-47bf-beed-e53bf3865b22\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-676jr" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.992711 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqclx\" (UniqueName: \"kubernetes.io/projected/fe97f069-4baa-4f14-a478-75fe5cccefa0-kube-api-access-lqclx\") pod \"authentication-operator-69f744f599-669dd\" (UID: \"fe97f069-4baa-4f14-a478-75fe5cccefa0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-669dd" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.992745 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e7341df-56ba-4c50-8ddf-68a54b11340b-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8tpfm\" (UID: \"2e7341df-56ba-4c50-8ddf-68a54b11340b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8tpfm" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.992772 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4e04fc9e-f926-473e-bdf6-f59166ab52f0-client-ca\") pod \"controller-manager-879f6c89f-pzmrk\" (UID: \"4e04fc9e-f926-473e-bdf6-f59166ab52f0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pzmrk" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.992797 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/046ad19a-b9d5-4b51-ab15-dd974861f490-proxy-tls\") pod \"machine-config-operator-74547568cd-dqspk\" (UID: 
\"046ad19a-b9d5-4b51-ab15-dd974861f490\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dqspk" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.992817 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/566ba776-350d-4994-948d-bbbf37ae5ddc-oauth-serving-cert\") pod \"console-f9d7485db-5m56v\" (UID: \"566ba776-350d-4994-948d-bbbf37ae5ddc\") " pod="openshift-console/console-f9d7485db-5m56v" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.992836 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d68pv\" (UniqueName: \"kubernetes.io/projected/b00a412f-f2d5-4ad5-a9e5-67efc9e40682-kube-api-access-d68pv\") pod \"apiserver-7bbb656c7d-b8dbg\" (UID: \"b00a412f-f2d5-4ad5-a9e5-67efc9e40682\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8dbg" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.992853 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38a65176-2b3a-47d7-ae00-8d625d7e3686-config\") pod \"machine-approver-56656f9798-kqnz8\" (UID: \"38a65176-2b3a-47d7-ae00-8d625d7e3686\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kqnz8" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.992871 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/046ad19a-b9d5-4b51-ab15-dd974861f490-auth-proxy-config\") pod \"machine-config-operator-74547568cd-dqspk\" (UID: \"046ad19a-b9d5-4b51-ab15-dd974861f490\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dqspk" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.992906 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/b00a412f-f2d5-4ad5-a9e5-67efc9e40682-encryption-config\") pod \"apiserver-7bbb656c7d-b8dbg\" (UID: \"b00a412f-f2d5-4ad5-a9e5-67efc9e40682\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8dbg" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.992928 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b00a412f-f2d5-4ad5-a9e5-67efc9e40682-serving-cert\") pod \"apiserver-7bbb656c7d-b8dbg\" (UID: \"b00a412f-f2d5-4ad5-a9e5-67efc9e40682\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8dbg" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.992951 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a0e99571-69de-43a8-9136-94d455e348c7-node-pullsecrets\") pod \"apiserver-76f77b778f-89cf9\" (UID: \"a0e99571-69de-43a8-9136-94d455e348c7\") " pod="openshift-apiserver/apiserver-76f77b778f-89cf9" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.992970 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4592f72-2b39-47bf-beed-e53bf3865b22-config\") pod \"machine-api-operator-5694c8668f-676jr\" (UID: \"c4592f72-2b39-47bf-beed-e53bf3865b22\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-676jr" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.992990 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe97f069-4baa-4f14-a478-75fe5cccefa0-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-669dd\" (UID: \"fe97f069-4baa-4f14-a478-75fe5cccefa0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-669dd" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.993009 4865 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-6m7t4\" (UniqueName: \"kubernetes.io/projected/1971f3bb-0512-467d-b440-1330bcf97c59-kube-api-access-6m7t4\") pod \"cluster-image-registry-operator-dc59b4c8b-t7gvl\" (UID: \"1971f3bb-0512-467d-b440-1330bcf97c59\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t7gvl" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.993028 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/532d752a-1a19-4674-97c3-d9030ad9537c-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-bhtg8\" (UID: \"532d752a-1a19-4674-97c3-d9030ad9537c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bhtg8" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.993047 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a0e99571-69de-43a8-9136-94d455e348c7-etcd-serving-ca\") pod \"apiserver-76f77b778f-89cf9\" (UID: \"a0e99571-69de-43a8-9136-94d455e348c7\") " pod="openshift-apiserver/apiserver-76f77b778f-89cf9" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.993066 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjcpg\" (UniqueName: \"kubernetes.io/projected/4e04fc9e-f926-473e-bdf6-f59166ab52f0-kube-api-access-xjcpg\") pod \"controller-manager-879f6c89f-pzmrk\" (UID: \"4e04fc9e-f926-473e-bdf6-f59166ab52f0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pzmrk" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.993069 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/679f6150-3ecb-437d-81bc-9877ad5c3cc4-config\") pod \"route-controller-manager-6576b87f9c-tztwm\" (UID: \"679f6150-3ecb-437d-81bc-9877ad5c3cc4\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tztwm" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.993099 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/38a65176-2b3a-47d7-ae00-8d625d7e3686-auth-proxy-config\") pod \"machine-approver-56656f9798-kqnz8\" (UID: \"38a65176-2b3a-47d7-ae00-8d625d7e3686\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kqnz8" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.993123 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/1971f3bb-0512-467d-b440-1330bcf97c59-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-t7gvl\" (UID: \"1971f3bb-0512-467d-b440-1330bcf97c59\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t7gvl" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.993147 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c4592f72-2b39-47bf-beed-e53bf3865b22-images\") pod \"machine-api-operator-5694c8668f-676jr\" (UID: \"c4592f72-2b39-47bf-beed-e53bf3865b22\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-676jr" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.993166 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4e04fc9e-f926-473e-bdf6-f59166ab52f0-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-pzmrk\" (UID: \"4e04fc9e-f926-473e-bdf6-f59166ab52f0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pzmrk" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.993178 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zfxvb"] Feb 
16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.997702 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/566ba776-350d-4994-948d-bbbf37ae5ddc-service-ca\") pod \"console-f9d7485db-5m56v\" (UID: \"566ba776-350d-4994-948d-bbbf37ae5ddc\") " pod="openshift-console/console-f9d7485db-5m56v" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.998050 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/d673e2eb-0082-4b86-91c5-5d83376c2275-available-featuregates\") pod \"openshift-config-operator-7777fb866f-xcchk\" (UID: \"d673e2eb-0082-4b86-91c5-5d83376c2275\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-xcchk" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.998111 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b00a412f-f2d5-4ad5-a9e5-67efc9e40682-audit-dir\") pod \"apiserver-7bbb656c7d-b8dbg\" (UID: \"b00a412f-f2d5-4ad5-a9e5-67efc9e40682\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8dbg" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.998242 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-m696v" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.998593 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.999178 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe97f069-4baa-4f14-a478-75fe5cccefa0-config\") pod \"authentication-operator-69f744f599-669dd\" (UID: \"fe97f069-4baa-4f14-a478-75fe5cccefa0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-669dd" Feb 16 22:48:19 crc kubenswrapper[4865]: I0216 22:48:19.999470 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b00a412f-f2d5-4ad5-a9e5-67efc9e40682-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-b8dbg\" (UID: \"b00a412f-f2d5-4ad5-a9e5-67efc9e40682\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8dbg" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:19.999969 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a0e99571-69de-43a8-9136-94d455e348c7-trusted-ca-bundle\") pod \"apiserver-76f77b778f-89cf9\" (UID: \"a0e99571-69de-43a8-9136-94d455e348c7\") " pod="openshift-apiserver/apiserver-76f77b778f-89cf9" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.001973 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fbcd436-7bcc-4873-afac-24404c1acc95-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-f6stv\" (UID: \"9fbcd436-7bcc-4873-afac-24404c1acc95\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-f6stv" Feb 16 22:48:20 crc 
kubenswrapper[4865]: I0216 22:48:20.004020 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a0e99571-69de-43a8-9136-94d455e348c7-encryption-config\") pod \"apiserver-76f77b778f-89cf9\" (UID: \"a0e99571-69de-43a8-9136-94d455e348c7\") " pod="openshift-apiserver/apiserver-76f77b778f-89cf9" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.004347 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9fbcd436-7bcc-4873-afac-24404c1acc95-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-f6stv\" (UID: \"9fbcd436-7bcc-4873-afac-24404c1acc95\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-f6stv" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.004942 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1971f3bb-0512-467d-b440-1330bcf97c59-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-t7gvl\" (UID: \"1971f3bb-0512-467d-b440-1330bcf97c59\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t7gvl" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.005032 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a0e99571-69de-43a8-9136-94d455e348c7-audit\") pod \"apiserver-76f77b778f-89cf9\" (UID: \"a0e99571-69de-43a8-9136-94d455e348c7\") " pod="openshift-apiserver/apiserver-76f77b778f-89cf9" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:19.993185 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe97f069-4baa-4f14-a478-75fe5cccefa0-config\") pod \"authentication-operator-69f744f599-669dd\" (UID: \"fe97f069-4baa-4f14-a478-75fe5cccefa0\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-669dd" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.005564 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b00a412f-f2d5-4ad5-a9e5-67efc9e40682-audit-policies\") pod \"apiserver-7bbb656c7d-b8dbg\" (UID: \"b00a412f-f2d5-4ad5-a9e5-67efc9e40682\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8dbg" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.005601 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/679f6150-3ecb-437d-81bc-9877ad5c3cc4-client-ca\") pod \"route-controller-manager-6576b87f9c-tztwm\" (UID: \"679f6150-3ecb-437d-81bc-9877ad5c3cc4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tztwm" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.005647 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dg5lx\" (UniqueName: \"kubernetes.io/projected/a0e99571-69de-43a8-9136-94d455e348c7-kube-api-access-dg5lx\") pod \"apiserver-76f77b778f-89cf9\" (UID: \"a0e99571-69de-43a8-9136-94d455e348c7\") " pod="openshift-apiserver/apiserver-76f77b778f-89cf9" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.005680 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mclbh\" (UniqueName: \"kubernetes.io/projected/046ad19a-b9d5-4b51-ab15-dd974861f490-kube-api-access-mclbh\") pod \"machine-config-operator-74547568cd-dqspk\" (UID: \"046ad19a-b9d5-4b51-ab15-dd974861f490\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dqspk" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.005835 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/38a65176-2b3a-47d7-ae00-8d625d7e3686-machine-approver-tls\") pod \"machine-approver-56656f9798-kqnz8\" (UID: \"38a65176-2b3a-47d7-ae00-8d625d7e3686\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kqnz8" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.005867 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e7341df-56ba-4c50-8ddf-68a54b11340b-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8tpfm\" (UID: \"2e7341df-56ba-4c50-8ddf-68a54b11340b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8tpfm" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.005915 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a0e99571-69de-43a8-9136-94d455e348c7-audit-dir\") pod \"apiserver-76f77b778f-89cf9\" (UID: \"a0e99571-69de-43a8-9136-94d455e348c7\") " pod="openshift-apiserver/apiserver-76f77b778f-89cf9" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.006080 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a0e99571-69de-43a8-9136-94d455e348c7-audit-dir\") pod \"apiserver-76f77b778f-89cf9\" (UID: \"a0e99571-69de-43a8-9136-94d455e348c7\") " pod="openshift-apiserver/apiserver-76f77b778f-89cf9" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.006635 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b00a412f-f2d5-4ad5-a9e5-67efc9e40682-audit-policies\") pod \"apiserver-7bbb656c7d-b8dbg\" (UID: \"b00a412f-f2d5-4ad5-a9e5-67efc9e40682\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8dbg" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.006759 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0e99571-69de-43a8-9136-94d455e348c7-config\") pod \"apiserver-76f77b778f-89cf9\" (UID: \"a0e99571-69de-43a8-9136-94d455e348c7\") " pod="openshift-apiserver/apiserver-76f77b778f-89cf9" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.008108 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/8ae2e7ca-329c-4bd8-b003-05fa0d7f60fd-etcd-service-ca\") pod \"etcd-operator-b45778765-b5kw7\" (UID: \"8ae2e7ca-329c-4bd8-b003-05fa0d7f60fd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b5kw7" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.008417 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4e04fc9e-f926-473e-bdf6-f59166ab52f0-client-ca\") pod \"controller-manager-879f6c89f-pzmrk\" (UID: \"4e04fc9e-f926-473e-bdf6-f59166ab52f0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pzmrk" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.009242 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e04fc9e-f926-473e-bdf6-f59166ab52f0-config\") pod \"controller-manager-879f6c89f-pzmrk\" (UID: \"4e04fc9e-f926-473e-bdf6-f59166ab52f0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pzmrk" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.009490 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/679f6150-3ecb-437d-81bc-9877ad5c3cc4-client-ca\") pod \"route-controller-manager-6576b87f9c-tztwm\" (UID: \"679f6150-3ecb-437d-81bc-9877ad5c3cc4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tztwm" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.010812 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0e99571-69de-43a8-9136-94d455e348c7-serving-cert\") pod \"apiserver-76f77b778f-89cf9\" (UID: \"a0e99571-69de-43a8-9136-94d455e348c7\") " pod="openshift-apiserver/apiserver-76f77b778f-89cf9" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.011205 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-8mkpj"] Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.011843 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qq9vk"] Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.012156 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a0e99571-69de-43a8-9136-94d455e348c7-etcd-serving-ca\") pod \"apiserver-76f77b778f-89cf9\" (UID: \"a0e99571-69de-43a8-9136-94d455e348c7\") " pod="openshift-apiserver/apiserver-76f77b778f-89cf9" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.012185 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xrmqm"] Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.012266 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qq9vk" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.012548 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zfxvb" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.012595 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8mkpj" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.013009 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/38a65176-2b3a-47d7-ae00-8d625d7e3686-auth-proxy-config\") pod \"machine-approver-56656f9798-kqnz8\" (UID: \"38a65176-2b3a-47d7-ae00-8d625d7e3686\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kqnz8" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.013039 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6zkrs"] Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.014258 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/532d752a-1a19-4674-97c3-d9030ad9537c-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-bhtg8\" (UID: \"532d752a-1a19-4674-97c3-d9030ad9537c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bhtg8" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.014938 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/566ba776-350d-4994-948d-bbbf37ae5ddc-oauth-serving-cert\") pod \"console-f9d7485db-5m56v\" (UID: \"566ba776-350d-4994-948d-bbbf37ae5ddc\") " pod="openshift-console/console-f9d7485db-5m56v" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.015468 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-8lph5"] Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.015917 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-8l2mn"] Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.016090 4865 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.016367 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-b8dbg"] Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.016390 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-2h8wl"] Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.016716 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lwxck"] Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.016844 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b00a412f-f2d5-4ad5-a9e5-67efc9e40682-serving-cert\") pod \"apiserver-7bbb656c7d-b8dbg\" (UID: \"b00a412f-f2d5-4ad5-a9e5-67efc9e40682\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8dbg" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.016971 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a0e99571-69de-43a8-9136-94d455e348c7-node-pullsecrets\") pod \"apiserver-76f77b778f-89cf9\" (UID: \"a0e99571-69de-43a8-9136-94d455e348c7\") " pod="openshift-apiserver/apiserver-76f77b778f-89cf9" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.017154 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-hmgzd"] Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.017457 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a0e99571-69de-43a8-9136-94d455e348c7-etcd-client\") pod \"apiserver-76f77b778f-89cf9\" (UID: \"a0e99571-69de-43a8-9136-94d455e348c7\") " pod="openshift-apiserver/apiserver-76f77b778f-89cf9" Feb 16 22:48:20 crc 
kubenswrapper[4865]: I0216 22:48:20.017517 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-z5w9k"] Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.017807 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521365-bs68x"] Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.017867 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4592f72-2b39-47bf-beed-e53bf3865b22-config\") pod \"machine-api-operator-5694c8668f-676jr\" (UID: \"c4592f72-2b39-47bf-beed-e53bf3865b22\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-676jr" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.018140 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-b5kw7"] Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.018161 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-8sd85"] Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.018351 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c4592f72-2b39-47bf-beed-e53bf3865b22-images\") pod \"machine-api-operator-5694c8668f-676jr\" (UID: \"c4592f72-2b39-47bf-beed-e53bf3865b22\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-676jr" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.018770 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe97f069-4baa-4f14-a478-75fe5cccefa0-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-669dd\" (UID: \"fe97f069-4baa-4f14-a478-75fe5cccefa0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-669dd" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.019322 4865 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4e04fc9e-f926-473e-bdf6-f59166ab52f0-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-pzmrk\" (UID: \"4e04fc9e-f926-473e-bdf6-f59166ab52f0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pzmrk" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.019824 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b00a412f-f2d5-4ad5-a9e5-67efc9e40682-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-b8dbg\" (UID: \"b00a412f-f2d5-4ad5-a9e5-67efc9e40682\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8dbg" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.020844 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-xcchk"] Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.020868 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bhtg8"] Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.020962 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-8sd85" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.021198 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-xrmqm" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.021396 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6zkrs" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.021537 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-8lph5" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.021674 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8l2mn" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.021798 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-2h8wl" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.021957 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lwxck" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.022078 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-hmgzd" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.022212 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-z5w9k" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.022368 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521365-bs68x" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.022826 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d673e2eb-0082-4b86-91c5-5d83376c2275-serving-cert\") pod \"openshift-config-operator-7777fb866f-xcchk\" (UID: \"d673e2eb-0082-4b86-91c5-5d83376c2275\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-xcchk" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.023435 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/532d752a-1a19-4674-97c3-d9030ad9537c-config\") pod \"openshift-apiserver-operator-796bbdcf4f-bhtg8\" (UID: \"532d752a-1a19-4674-97c3-d9030ad9537c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bhtg8" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.025089 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/8ae2e7ca-329c-4bd8-b003-05fa0d7f60fd-etcd-ca\") pod \"etcd-operator-b45778765-b5kw7\" (UID: \"8ae2e7ca-329c-4bd8-b003-05fa0d7f60fd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b5kw7" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.026718 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b00a412f-f2d5-4ad5-a9e5-67efc9e40682-etcd-client\") pod \"apiserver-7bbb656c7d-b8dbg\" (UID: \"b00a412f-f2d5-4ad5-a9e5-67efc9e40682\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8dbg" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.027362 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe97f069-4baa-4f14-a478-75fe5cccefa0-service-ca-bundle\") pod 
\"authentication-operator-69f744f599-669dd\" (UID: \"fe97f069-4baa-4f14-a478-75fe5cccefa0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-669dd" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.027507 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e04fc9e-f926-473e-bdf6-f59166ab52f0-serving-cert\") pod \"controller-manager-879f6c89f-pzmrk\" (UID: \"4e04fc9e-f926-473e-bdf6-f59166ab52f0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pzmrk" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.028078 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/566ba776-350d-4994-948d-bbbf37ae5ddc-console-config\") pod \"console-f9d7485db-5m56v\" (UID: \"566ba776-350d-4994-948d-bbbf37ae5ddc\") " pod="openshift-console/console-f9d7485db-5m56v" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.028410 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38a65176-2b3a-47d7-ae00-8d625d7e3686-config\") pod \"machine-approver-56656f9798-kqnz8\" (UID: \"38a65176-2b3a-47d7-ae00-8d625d7e3686\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kqnz8" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.028619 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/566ba776-350d-4994-948d-bbbf37ae5ddc-trusted-ca-bundle\") pod \"console-f9d7485db-5m56v\" (UID: \"566ba776-350d-4994-948d-bbbf37ae5ddc\") " pod="openshift-console/console-f9d7485db-5m56v" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.028674 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ae2e7ca-329c-4bd8-b003-05fa0d7f60fd-serving-cert\") pod 
\"etcd-operator-b45778765-b5kw7\" (UID: \"8ae2e7ca-329c-4bd8-b003-05fa0d7f60fd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b5kw7" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.029116 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a0e99571-69de-43a8-9136-94d455e348c7-image-import-ca\") pod \"apiserver-76f77b778f-89cf9\" (UID: \"a0e99571-69de-43a8-9136-94d455e348c7\") " pod="openshift-apiserver/apiserver-76f77b778f-89cf9" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.029118 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/3942bb88-5f12-4c66-b471-1e5ef2011792-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-vgbjc\" (UID: \"3942bb88-5f12-4c66-b471-1e5ef2011792\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vgbjc" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.031134 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8ae2e7ca-329c-4bd8-b003-05fa0d7f60fd-etcd-client\") pod \"etcd-operator-b45778765-b5kw7\" (UID: \"8ae2e7ca-329c-4bd8-b003-05fa0d7f60fd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b5kw7" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.031474 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ae2e7ca-329c-4bd8-b003-05fa0d7f60fd-config\") pod \"etcd-operator-b45778765-b5kw7\" (UID: \"8ae2e7ca-329c-4bd8-b003-05fa0d7f60fd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b5kw7" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.032978 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/679f6150-3ecb-437d-81bc-9877ad5c3cc4-serving-cert\") pod 
\"route-controller-manager-6576b87f9c-tztwm\" (UID: \"679f6150-3ecb-437d-81bc-9877ad5c3cc4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tztwm" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.037494 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c4592f72-2b39-47bf-beed-e53bf3865b22-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-676jr\" (UID: \"c4592f72-2b39-47bf-beed-e53bf3865b22\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-676jr" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.037721 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe97f069-4baa-4f14-a478-75fe5cccefa0-serving-cert\") pod \"authentication-operator-69f744f599-669dd\" (UID: \"fe97f069-4baa-4f14-a478-75fe5cccefa0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-669dd" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.038343 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/566ba776-350d-4994-948d-bbbf37ae5ddc-console-serving-cert\") pod \"console-f9d7485db-5m56v\" (UID: \"566ba776-350d-4994-948d-bbbf37ae5ddc\") " pod="openshift-console/console-f9d7485db-5m56v" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.038685 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/38a65176-2b3a-47d7-ae00-8d625d7e3686-machine-approver-tls\") pod \"machine-approver-56656f9798-kqnz8\" (UID: \"38a65176-2b3a-47d7-ae00-8d625d7e3686\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kqnz8" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.038738 4865 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-console-operator"/"serving-cert" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.045000 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/1971f3bb-0512-467d-b440-1330bcf97c59-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-t7gvl\" (UID: \"1971f3bb-0512-467d-b440-1330bcf97c59\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t7gvl" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.049999 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b00a412f-f2d5-4ad5-a9e5-67efc9e40682-encryption-config\") pod \"apiserver-7bbb656c7d-b8dbg\" (UID: \"b00a412f-f2d5-4ad5-a9e5-67efc9e40682\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8dbg" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.050105 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-669dd"] Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.051424 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-tztwm"] Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.052702 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.052792 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/566ba776-350d-4994-948d-bbbf37ae5ddc-console-oauth-config\") pod \"console-f9d7485db-5m56v\" (UID: \"566ba776-350d-4994-948d-bbbf37ae5ddc\") " pod="openshift-console/console-f9d7485db-5m56v" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.054056 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-apiserver/apiserver-76f77b778f-89cf9"] Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.054651 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-5m56v"] Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.055774 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-t624g"] Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.056868 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-v8bqq"] Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.058216 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xrmqm"] Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.060409 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8tpfm"] Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.061051 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-dqspk"] Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.061942 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-8mkpj"] Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.062961 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-pzmrk"] Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.064037 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-m696v"] Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.067054 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vgbjc"] Feb 16 22:48:20 crc 
kubenswrapper[4865]: I0216 22:48:20.067095 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-b76ph"] Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.068815 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-f6stv"] Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.071994 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t7gvl"] Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.073680 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-k58w4"] Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.076977 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-fz5qr"] Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.078765 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-k58w4" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.080919 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-fz5qr" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.083184 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qq9vk"] Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.086919 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qcl4t"] Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.089395 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.089656 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6zkrs"] Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.090250 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-f7dqg"] Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.091317 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.091832 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-676jr"] Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.093739 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-d87nj"] Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.094186 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ls5tt"] Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.095236 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-675st"] Feb 16 
22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.096441 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-8sd85"] Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.097593 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zfxvb"] Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.099175 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-8l2mn"] Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.100413 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-k58w4"] Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.101407 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-z5w9k"] Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.103676 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-hmgzd"] Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.103705 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-2h8wl"] Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.106887 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/046ad19a-b9d5-4b51-ab15-dd974861f490-images\") pod \"machine-config-operator-74547568cd-dqspk\" (UID: \"046ad19a-b9d5-4b51-ab15-dd974861f490\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dqspk" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.106988 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2e7341df-56ba-4c50-8ddf-68a54b11340b-kube-api-access\") pod 
\"openshift-kube-scheduler-operator-5fdd9b5758-8tpfm\" (UID: \"2e7341df-56ba-4c50-8ddf-68a54b11340b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8tpfm" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.107079 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e7341df-56ba-4c50-8ddf-68a54b11340b-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8tpfm\" (UID: \"2e7341df-56ba-4c50-8ddf-68a54b11340b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8tpfm" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.107125 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/046ad19a-b9d5-4b51-ab15-dd974861f490-proxy-tls\") pod \"machine-config-operator-74547568cd-dqspk\" (UID: \"046ad19a-b9d5-4b51-ab15-dd974861f490\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dqspk" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.107165 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/046ad19a-b9d5-4b51-ab15-dd974861f490-auth-proxy-config\") pod \"machine-config-operator-74547568cd-dqspk\" (UID: \"046ad19a-b9d5-4b51-ab15-dd974861f490\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dqspk" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.107220 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mclbh\" (UniqueName: \"kubernetes.io/projected/046ad19a-b9d5-4b51-ab15-dd974861f490-kube-api-access-mclbh\") pod \"machine-config-operator-74547568cd-dqspk\" (UID: \"046ad19a-b9d5-4b51-ab15-dd974861f490\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dqspk" Feb 16 22:48:20 crc 
kubenswrapper[4865]: I0216 22:48:20.107252 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e7341df-56ba-4c50-8ddf-68a54b11340b-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8tpfm\" (UID: \"2e7341df-56ba-4c50-8ddf-68a54b11340b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8tpfm" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.108358 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-fz5qr"] Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.108402 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-8lph5"] Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.108416 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/046ad19a-b9d5-4b51-ab15-dd974861f490-auth-proxy-config\") pod \"machine-config-operator-74547568cd-dqspk\" (UID: \"046ad19a-b9d5-4b51-ab15-dd974861f490\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dqspk" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.108914 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lwxck"] Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.117789 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.119227 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521365-bs68x"] Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.120975 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-bvddw"] Feb 16 22:48:20 crc 
kubenswrapper[4865]: I0216 22:48:20.123091 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-bvddw" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.131578 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.151538 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.172778 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.191867 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.211415 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.232421 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.251973 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.292512 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.312246 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.332453 4865 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.352920 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.362018 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e7341df-56ba-4c50-8ddf-68a54b11340b-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8tpfm\" (UID: \"2e7341df-56ba-4c50-8ddf-68a54b11340b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8tpfm" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.372390 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.378389 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e7341df-56ba-4c50-8ddf-68a54b11340b-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8tpfm\" (UID: \"2e7341df-56ba-4c50-8ddf-68a54b11340b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8tpfm" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.392323 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.413475 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.436261 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.452566 4865 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.472318 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.492441 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.511620 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.518519 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/046ad19a-b9d5-4b51-ab15-dd974861f490-images\") pod \"machine-config-operator-74547568cd-dqspk\" (UID: \"046ad19a-b9d5-4b51-ab15-dd974861f490\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dqspk" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.533954 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.552455 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.567895 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/046ad19a-b9d5-4b51-ab15-dd974861f490-proxy-tls\") pod \"machine-config-operator-74547568cd-dqspk\" (UID: \"046ad19a-b9d5-4b51-ab15-dd974861f490\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dqspk" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.592565 4865 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.613912 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.632688 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.651950 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.673460 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.693740 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.712768 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.749751 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxmkv\" (UniqueName: \"kubernetes.io/projected/3942bb88-5f12-4c66-b471-1e5ef2011792-kube-api-access-zxmkv\") pod \"cluster-samples-operator-665b6dd947-vgbjc\" (UID: \"3942bb88-5f12-4c66-b471-1e5ef2011792\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vgbjc" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.781540 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rs8gk\" (UniqueName: \"kubernetes.io/projected/c4592f72-2b39-47bf-beed-e53bf3865b22-kube-api-access-rs8gk\") pod \"machine-api-operator-5694c8668f-676jr\" (UID: 
\"c4592f72-2b39-47bf-beed-e53bf3865b22\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-676jr" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.798375 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f97mp\" (UniqueName: \"kubernetes.io/projected/d673e2eb-0082-4b86-91c5-5d83376c2275-kube-api-access-f97mp\") pod \"openshift-config-operator-7777fb866f-xcchk\" (UID: \"d673e2eb-0082-4b86-91c5-5d83376c2275\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-xcchk" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.811131 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8d4k\" (UniqueName: \"kubernetes.io/projected/9fbcd436-7bcc-4873-afac-24404c1acc95-kube-api-access-t8d4k\") pod \"openshift-controller-manager-operator-756b6f6bc6-f6stv\" (UID: \"9fbcd436-7bcc-4873-afac-24404c1acc95\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-f6stv" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.812972 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.816937 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-xcchk" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.832615 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.853594 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.873603 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.893214 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.935204 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1971f3bb-0512-467d-b440-1330bcf97c59-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-t7gvl\" (UID: \"1971f3bb-0512-467d-b440-1330bcf97c59\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t7gvl" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.954086 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cz7wk\" (UniqueName: \"kubernetes.io/projected/38a65176-2b3a-47d7-ae00-8d625d7e3686-kube-api-access-cz7wk\") pod \"machine-approver-56656f9798-kqnz8\" (UID: \"38a65176-2b3a-47d7-ae00-8d625d7e3686\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kqnz8" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.979326 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrht6\" (UniqueName: 
\"kubernetes.io/projected/532d752a-1a19-4674-97c3-d9030ad9537c-kube-api-access-wrht6\") pod \"openshift-apiserver-operator-796bbdcf4f-bhtg8\" (UID: \"532d752a-1a19-4674-97c3-d9030ad9537c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bhtg8" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.994399 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dg5lx\" (UniqueName: \"kubernetes.io/projected/a0e99571-69de-43a8-9136-94d455e348c7-kube-api-access-dg5lx\") pod \"apiserver-76f77b778f-89cf9\" (UID: \"a0e99571-69de-43a8-9136-94d455e348c7\") " pod="openshift-apiserver/apiserver-76f77b778f-89cf9" Feb 16 22:48:20 crc kubenswrapper[4865]: I0216 22:48:20.994789 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-676jr" Feb 16 22:48:21 crc kubenswrapper[4865]: I0216 22:48:21.001583 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-89cf9" Feb 16 22:48:21 crc kubenswrapper[4865]: I0216 22:48:21.006973 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vgbjc" Feb 16 22:48:21 crc kubenswrapper[4865]: I0216 22:48:21.014231 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 16 22:48:21 crc kubenswrapper[4865]: I0216 22:48:21.021799 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-f6stv" Feb 16 22:48:21 crc kubenswrapper[4865]: I0216 22:48:21.022300 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjcpg\" (UniqueName: \"kubernetes.io/projected/4e04fc9e-f926-473e-bdf6-f59166ab52f0-kube-api-access-xjcpg\") pod \"controller-manager-879f6c89f-pzmrk\" (UID: \"4e04fc9e-f926-473e-bdf6-f59166ab52f0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pzmrk" Feb 16 22:48:21 crc kubenswrapper[4865]: I0216 22:48:21.030679 4865 request.go:700] Waited for 1.017884154s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Feb 16 22:48:21 crc kubenswrapper[4865]: I0216 22:48:21.030755 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-pzmrk" Feb 16 22:48:21 crc kubenswrapper[4865]: I0216 22:48:21.033770 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 16 22:48:21 crc kubenswrapper[4865]: I0216 22:48:21.039975 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kqnz8" Feb 16 22:48:21 crc kubenswrapper[4865]: I0216 22:48:21.081036 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d68pv\" (UniqueName: \"kubernetes.io/projected/b00a412f-f2d5-4ad5-a9e5-67efc9e40682-kube-api-access-d68pv\") pod \"apiserver-7bbb656c7d-b8dbg\" (UID: \"b00a412f-f2d5-4ad5-a9e5-67efc9e40682\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8dbg" Feb 16 22:48:21 crc kubenswrapper[4865]: I0216 22:48:21.096639 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swq4h\" (UniqueName: \"kubernetes.io/projected/566ba776-350d-4994-948d-bbbf37ae5ddc-kube-api-access-swq4h\") pod \"console-f9d7485db-5m56v\" (UID: \"566ba776-350d-4994-948d-bbbf37ae5ddc\") " pod="openshift-console/console-f9d7485db-5m56v" Feb 16 22:48:21 crc kubenswrapper[4865]: I0216 22:48:21.105649 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8dbg" Feb 16 22:48:21 crc kubenswrapper[4865]: I0216 22:48:21.113982 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6m7t4\" (UniqueName: \"kubernetes.io/projected/1971f3bb-0512-467d-b440-1330bcf97c59-kube-api-access-6m7t4\") pod \"cluster-image-registry-operator-dc59b4c8b-t7gvl\" (UID: \"1971f3bb-0512-467d-b440-1330bcf97c59\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t7gvl" Feb 16 22:48:21 crc kubenswrapper[4865]: I0216 22:48:21.133042 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 16 22:48:21 crc kubenswrapper[4865]: I0216 22:48:21.139561 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-5m56v" Feb 16 22:48:21 crc kubenswrapper[4865]: I0216 22:48:21.146429 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fdt9\" (UniqueName: \"kubernetes.io/projected/8ae2e7ca-329c-4bd8-b003-05fa0d7f60fd-kube-api-access-2fdt9\") pod \"etcd-operator-b45778765-b5kw7\" (UID: \"8ae2e7ca-329c-4bd8-b003-05fa0d7f60fd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b5kw7" Feb 16 22:48:21 crc kubenswrapper[4865]: I0216 22:48:21.152196 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 16 22:48:21 crc kubenswrapper[4865]: I0216 22:48:21.169157 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bhtg8" Feb 16 22:48:21 crc kubenswrapper[4865]: I0216 22:48:21.171574 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 16 22:48:21 crc kubenswrapper[4865]: I0216 22:48:21.191970 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 16 22:48:21 crc kubenswrapper[4865]: I0216 22:48:21.205742 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-b5kw7" Feb 16 22:48:21 crc kubenswrapper[4865]: I0216 22:48:21.211865 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 16 22:48:21 crc kubenswrapper[4865]: I0216 22:48:21.232692 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 16 22:48:21 crc kubenswrapper[4865]: I0216 22:48:21.259120 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 16 22:48:21 crc kubenswrapper[4865]: I0216 22:48:21.271743 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 16 22:48:21 crc kubenswrapper[4865]: I0216 22:48:21.293403 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 16 22:48:21 crc kubenswrapper[4865]: I0216 22:48:21.305936 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-xcchk"] Feb 16 22:48:21 crc kubenswrapper[4865]: I0216 22:48:21.312370 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 16 22:48:21 crc kubenswrapper[4865]: I0216 22:48:21.330799 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kqnz8" event={"ID":"38a65176-2b3a-47d7-ae00-8d625d7e3686","Type":"ContainerStarted","Data":"1f58dee63ddff4e8350e6f8bde393c2a923d57cb65e615d01dbed7eefe82bb76"} Feb 16 22:48:21 crc kubenswrapper[4865]: I0216 22:48:21.334568 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 16 22:48:21 crc kubenswrapper[4865]: I0216 
22:48:21.341007 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-676jr"] Feb 16 22:48:21 crc kubenswrapper[4865]: I0216 22:48:21.353389 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 16 22:48:21 crc kubenswrapper[4865]: W0216 22:48:21.348329 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd673e2eb_0082_4b86_91c5_5d83376c2275.slice/crio-00fbb1f33a5a0fedb1de42cd187f288fa0be4683e5019fed8847625444928af4 WatchSource:0}: Error finding container 00fbb1f33a5a0fedb1de42cd187f288fa0be4683e5019fed8847625444928af4: Status 404 returned error can't find the container with id 00fbb1f33a5a0fedb1de42cd187f288fa0be4683e5019fed8847625444928af4 Feb 16 22:48:21 crc kubenswrapper[4865]: W0216 22:48:21.361763 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4592f72_2b39_47bf_beed_e53bf3865b22.slice/crio-a54dffae0eb7c5445cf204bbc07db0bad25baa8271ddb6f38573c29c2509dfb3 WatchSource:0}: Error finding container a54dffae0eb7c5445cf204bbc07db0bad25baa8271ddb6f38573c29c2509dfb3: Status 404 returned error can't find the container with id a54dffae0eb7c5445cf204bbc07db0bad25baa8271ddb6f38573c29c2509dfb3 Feb 16 22:48:21 crc kubenswrapper[4865]: I0216 22:48:21.363742 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t7gvl" Feb 16 22:48:21 crc kubenswrapper[4865]: I0216 22:48:21.367899 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-f6stv"] Feb 16 22:48:21 crc kubenswrapper[4865]: I0216 22:48:21.373992 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 16 22:48:21 crc kubenswrapper[4865]: I0216 22:48:21.394418 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 16 22:48:21 crc kubenswrapper[4865]: W0216 22:48:21.396952 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9fbcd436_7bcc_4873_afac_24404c1acc95.slice/crio-488fd453906b9e4baabd93b2a2faa636aa13b3d3e2a46b27e6fe8444997bf0ad WatchSource:0}: Error finding container 488fd453906b9e4baabd93b2a2faa636aa13b3d3e2a46b27e6fe8444997bf0ad: Status 404 returned error can't find the container with id 488fd453906b9e4baabd93b2a2faa636aa13b3d3e2a46b27e6fe8444997bf0ad Feb 16 22:48:21 crc kubenswrapper[4865]: I0216 22:48:21.413200 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 16 22:48:21 crc kubenswrapper[4865]: I0216 22:48:21.423818 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-pzmrk"] Feb 16 22:48:21 crc kubenswrapper[4865]: I0216 22:48:21.435266 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 16 22:48:21 crc kubenswrapper[4865]: I0216 22:48:21.453026 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 16 22:48:21 
crc kubenswrapper[4865]: I0216 22:48:21.453973 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vgbjc"] Feb 16 22:48:21 crc kubenswrapper[4865]: I0216 22:48:21.478547 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 16 22:48:21 crc kubenswrapper[4865]: I0216 22:48:21.483636 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-89cf9"] Feb 16 22:48:21 crc kubenswrapper[4865]: I0216 22:48:21.503989 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 16 22:48:21 crc kubenswrapper[4865]: I0216 22:48:21.507981 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-b5kw7"] Feb 16 22:48:21 crc kubenswrapper[4865]: I0216 22:48:21.511765 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 16 22:48:21 crc kubenswrapper[4865]: I0216 22:48:21.526038 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-5m56v"] Feb 16 22:48:21 crc kubenswrapper[4865]: I0216 22:48:21.532768 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 16 22:48:21 crc kubenswrapper[4865]: I0216 22:48:21.545736 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bhtg8"] Feb 16 22:48:21 crc kubenswrapper[4865]: W0216 22:48:21.547269 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod566ba776_350d_4994_948d_bbbf37ae5ddc.slice/crio-99239deb099453e4a80ca1be970e47636af070cbe29f2f4d37eb93000d2d6952 
WatchSource:0}: Error finding container 99239deb099453e4a80ca1be970e47636af070cbe29f2f4d37eb93000d2d6952: Status 404 returned error can't find the container with id 99239deb099453e4a80ca1be970e47636af070cbe29f2f4d37eb93000d2d6952 Feb 16 22:48:21 crc kubenswrapper[4865]: I0216 22:48:21.551233 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 22:48:21 crc kubenswrapper[4865]: W0216 22:48:21.566512 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod532d752a_1a19_4674_97c3_d9030ad9537c.slice/crio-a73d5c9c09df5c339a58111331ed2aa30bbc2bdab22c00ecf84decc87ec7439e WatchSource:0}: Error finding container a73d5c9c09df5c339a58111331ed2aa30bbc2bdab22c00ecf84decc87ec7439e: Status 404 returned error can't find the container with id a73d5c9c09df5c339a58111331ed2aa30bbc2bdab22c00ecf84decc87ec7439e Feb 16 22:48:21 crc kubenswrapper[4865]: I0216 22:48:21.572093 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 16 22:48:21 crc kubenswrapper[4865]: I0216 22:48:21.593903 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 16 22:48:21 crc kubenswrapper[4865]: I0216 22:48:21.613014 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 16 22:48:21 crc kubenswrapper[4865]: I0216 22:48:21.615768 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-b8dbg"] Feb 16 22:48:21 crc kubenswrapper[4865]: I0216 22:48:21.631524 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t7gvl"] Feb 16 22:48:21 crc kubenswrapper[4865]: I0216 22:48:21.634635 4865 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 16 22:48:21 crc kubenswrapper[4865]: W0216 22:48:21.642931 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb00a412f_f2d5_4ad5_a9e5_67efc9e40682.slice/crio-c276b4e4083cda6693523a37d124689968e6c1f4407ce34a4250872bde84bf06 WatchSource:0}: Error finding container c276b4e4083cda6693523a37d124689968e6c1f4407ce34a4250872bde84bf06: Status 404 returned error can't find the container with id c276b4e4083cda6693523a37d124689968e6c1f4407ce34a4250872bde84bf06 Feb 16 22:48:21 crc kubenswrapper[4865]: I0216 22:48:21.660309 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 16 22:48:21 crc kubenswrapper[4865]: W0216 22:48:21.663693 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1971f3bb_0512_467d_b440_1330bcf97c59.slice/crio-091ea43e5db089e2292d4e85b4fc51ce40b8224c77d6f570d575e650de3809cf WatchSource:0}: Error finding container 091ea43e5db089e2292d4e85b4fc51ce40b8224c77d6f570d575e650de3809cf: Status 404 returned error can't find the container with id 091ea43e5db089e2292d4e85b4fc51ce40b8224c77d6f570d575e650de3809cf Feb 16 22:48:21 crc kubenswrapper[4865]: I0216 22:48:21.673736 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 16 22:48:21 crc kubenswrapper[4865]: I0216 22:48:21.691250 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 16 22:48:21 crc kubenswrapper[4865]: I0216 22:48:21.711892 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 16 22:48:21 crc kubenswrapper[4865]: I0216 22:48:21.732247 4865 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 16 22:48:21 crc kubenswrapper[4865]: I0216 22:48:21.752940 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 16 22:48:21 crc kubenswrapper[4865]: I0216 22:48:21.771995 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 16 22:48:21 crc kubenswrapper[4865]: I0216 22:48:21.795878 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 16 22:48:21 crc kubenswrapper[4865]: I0216 22:48:21.812766 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 16 22:48:21 crc kubenswrapper[4865]: I0216 22:48:21.835109 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 16 22:48:21 crc kubenswrapper[4865]: I0216 22:48:21.852877 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 16 22:48:21 crc kubenswrapper[4865]: I0216 22:48:21.874261 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 16 22:48:21 crc kubenswrapper[4865]: I0216 22:48:21.892571 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 16 22:48:21 crc kubenswrapper[4865]: I0216 22:48:21.911083 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 22:48:21 crc kubenswrapper[4865]: I0216 22:48:21.948871 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqclx\" (UniqueName: 
\"kubernetes.io/projected/fe97f069-4baa-4f14-a478-75fe5cccefa0-kube-api-access-lqclx\") pod \"authentication-operator-69f744f599-669dd\" (UID: \"fe97f069-4baa-4f14-a478-75fe5cccefa0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-669dd" Feb 16 22:48:21 crc kubenswrapper[4865]: I0216 22:48:21.967391 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7xsf\" (UniqueName: \"kubernetes.io/projected/679f6150-3ecb-437d-81bc-9877ad5c3cc4-kube-api-access-n7xsf\") pod \"route-controller-manager-6576b87f9c-tztwm\" (UID: \"679f6150-3ecb-437d-81bc-9877ad5c3cc4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tztwm" Feb 16 22:48:21 crc kubenswrapper[4865]: I0216 22:48:21.970840 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 16 22:48:21 crc kubenswrapper[4865]: I0216 22:48:21.993023 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.011808 4865 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.032973 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.050182 4865 request.go:700] Waited for 1.968893042s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.052682 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 16 
22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.073675 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.092036 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.129241 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2e7341df-56ba-4c50-8ddf-68a54b11340b-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8tpfm\" (UID: \"2e7341df-56ba-4c50-8ddf-68a54b11340b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8tpfm" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.147981 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mclbh\" (UniqueName: \"kubernetes.io/projected/046ad19a-b9d5-4b51-ab15-dd974861f490-kube-api-access-mclbh\") pod \"machine-config-operator-74547568cd-dqspk\" (UID: \"046ad19a-b9d5-4b51-ab15-dd974861f490\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dqspk" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.152808 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.170565 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-669dd" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.172504 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.182917 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tztwm" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.192537 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.241147 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/25c0299c-b4a2-4c82-881f-808b610fb325-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-675st\" (UID: \"25c0299c-b4a2-4c82-881f-808b610fb325\") " pod="openshift-authentication/oauth-openshift-558db77b4-675st" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.241231 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/25c0299c-b4a2-4c82-881f-808b610fb325-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-675st\" (UID: \"25c0299c-b4a2-4c82-881f-808b610fb325\") " pod="openshift-authentication/oauth-openshift-558db77b4-675st" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.241320 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0d54cdef-872b-4b15-ad66-92a5aa695143-registry-tls\") pod \"image-registry-697d97f7c8-d87nj\" (UID: \"0d54cdef-872b-4b15-ad66-92a5aa695143\") " pod="openshift-image-registry/image-registry-697d97f7c8-d87nj" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.241405 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/25c0299c-b4a2-4c82-881f-808b610fb325-v4-0-config-user-idp-0-file-data\") pod 
\"oauth-openshift-558db77b4-675st\" (UID: \"25c0299c-b4a2-4c82-881f-808b610fb325\") " pod="openshift-authentication/oauth-openshift-558db77b4-675st" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.241464 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/25c0299c-b4a2-4c82-881f-808b610fb325-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-675st\" (UID: \"25c0299c-b4a2-4c82-881f-808b610fb325\") " pod="openshift-authentication/oauth-openshift-558db77b4-675st" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.241539 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/5cfba6b6-3d1e-49d9-902e-b3493e1ffc97-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-qcl4t\" (UID: \"5cfba6b6-3d1e-49d9-902e-b3493e1ffc97\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qcl4t" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.241608 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d87nj\" (UID: \"0d54cdef-872b-4b15-ad66-92a5aa695143\") " pod="openshift-image-registry/image-registry-697d97f7c8-d87nj" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.241645 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47dbdc59-3676-456b-b077-8943ea216379-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-ls5tt\" (UID: \"47dbdc59-3676-456b-b077-8943ea216379\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ls5tt" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.241746 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z546m\" (UniqueName: \"kubernetes.io/projected/be88c107-f404-4d93-b59e-471220c045ec-kube-api-access-z546m\") pod \"console-operator-58897d9998-v8bqq\" (UID: \"be88c107-f404-4d93-b59e-471220c045ec\") " pod="openshift-console-operator/console-operator-58897d9998-v8bqq" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.241837 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0d54cdef-872b-4b15-ad66-92a5aa695143-bound-sa-token\") pod \"image-registry-697d97f7c8-d87nj\" (UID: \"0d54cdef-872b-4b15-ad66-92a5aa695143\") " pod="openshift-image-registry/image-registry-697d97f7c8-d87nj" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.241871 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/25c0299c-b4a2-4c82-881f-808b610fb325-audit-policies\") pod \"oauth-openshift-558db77b4-675st\" (UID: \"25c0299c-b4a2-4c82-881f-808b610fb325\") " pod="openshift-authentication/oauth-openshift-558db77b4-675st" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.241904 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lg9r5\" (UniqueName: \"kubernetes.io/projected/852722bc-cd18-4344-b8d1-a01be5c6ea33-kube-api-access-lg9r5\") pod \"downloads-7954f5f757-t624g\" (UID: \"852722bc-cd18-4344-b8d1-a01be5c6ea33\") " pod="openshift-console/downloads-7954f5f757-t624g" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.241961 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-x5b8f\" (UniqueName: \"kubernetes.io/projected/715cd16c-2512-4885-ae7b-437fd61fcea2-kube-api-access-x5b8f\") pod \"dns-operator-744455d44c-b76ph\" (UID: \"715cd16c-2512-4885-ae7b-437fd61fcea2\") " pod="openshift-dns-operator/dns-operator-744455d44c-b76ph" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.242048 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/25c0299c-b4a2-4c82-881f-808b610fb325-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-675st\" (UID: \"25c0299c-b4a2-4c82-881f-808b610fb325\") " pod="openshift-authentication/oauth-openshift-558db77b4-675st" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.242117 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h5d4\" (UniqueName: \"kubernetes.io/projected/5cfba6b6-3d1e-49d9-902e-b3493e1ffc97-kube-api-access-7h5d4\") pod \"control-plane-machine-set-operator-78cbb6b69f-qcl4t\" (UID: \"5cfba6b6-3d1e-49d9-902e-b3493e1ffc97\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qcl4t" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.242226 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/47dbdc59-3676-456b-b077-8943ea216379-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-ls5tt\" (UID: \"47dbdc59-3676-456b-b077-8943ea216379\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ls5tt" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.242332 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be88c107-f404-4d93-b59e-471220c045ec-config\") pod \"console-operator-58897d9998-v8bqq\" (UID: 
\"be88c107-f404-4d93-b59e-471220c045ec\") " pod="openshift-console-operator/console-operator-58897d9998-v8bqq" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.242377 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0d54cdef-872b-4b15-ad66-92a5aa695143-installation-pull-secrets\") pod \"image-registry-697d97f7c8-d87nj\" (UID: \"0d54cdef-872b-4b15-ad66-92a5aa695143\") " pod="openshift-image-registry/image-registry-697d97f7c8-d87nj" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.242508 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ae87db3d-c73d-4e96-8cda-ca3ac846f9b5-metrics-certs\") pod \"router-default-5444994796-d2vf7\" (UID: \"ae87db3d-c73d-4e96-8cda-ca3ac846f9b5\") " pod="openshift-ingress/router-default-5444994796-d2vf7" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.242569 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsbj7\" (UniqueName: \"kubernetes.io/projected/0d54cdef-872b-4b15-ad66-92a5aa695143-kube-api-access-jsbj7\") pod \"image-registry-697d97f7c8-d87nj\" (UID: \"0d54cdef-872b-4b15-ad66-92a5aa695143\") " pod="openshift-image-registry/image-registry-697d97f7c8-d87nj" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.242660 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/715cd16c-2512-4885-ae7b-437fd61fcea2-metrics-tls\") pod \"dns-operator-744455d44c-b76ph\" (UID: \"715cd16c-2512-4885-ae7b-437fd61fcea2\") " pod="openshift-dns-operator/dns-operator-744455d44c-b76ph" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.242763 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"audit-dir\" (UniqueName: \"kubernetes.io/host-path/25c0299c-b4a2-4c82-881f-808b610fb325-audit-dir\") pod \"oauth-openshift-558db77b4-675st\" (UID: \"25c0299c-b4a2-4c82-881f-808b610fb325\") " pod="openshift-authentication/oauth-openshift-558db77b4-675st" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.242834 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0d54cdef-872b-4b15-ad66-92a5aa695143-ca-trust-extracted\") pod \"image-registry-697d97f7c8-d87nj\" (UID: \"0d54cdef-872b-4b15-ad66-92a5aa695143\") " pod="openshift-image-registry/image-registry-697d97f7c8-d87nj" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.242870 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/25c0299c-b4a2-4c82-881f-808b610fb325-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-675st\" (UID: \"25c0299c-b4a2-4c82-881f-808b610fb325\") " pod="openshift-authentication/oauth-openshift-558db77b4-675st" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.242959 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/ae87db3d-c73d-4e96-8cda-ca3ac846f9b5-default-certificate\") pod \"router-default-5444994796-d2vf7\" (UID: \"ae87db3d-c73d-4e96-8cda-ca3ac846f9b5\") " pod="openshift-ingress/router-default-5444994796-d2vf7" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.243020 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/ae87db3d-c73d-4e96-8cda-ca3ac846f9b5-stats-auth\") pod \"router-default-5444994796-d2vf7\" (UID: \"ae87db3d-c73d-4e96-8cda-ca3ac846f9b5\") " pod="openshift-ingress/router-default-5444994796-d2vf7" Feb 16 
22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.243054 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/be88c107-f404-4d93-b59e-471220c045ec-trusted-ca\") pod \"console-operator-58897d9998-v8bqq\" (UID: \"be88c107-f404-4d93-b59e-471220c045ec\") " pod="openshift-console-operator/console-operator-58897d9998-v8bqq" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.247238 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/25c0299c-b4a2-4c82-881f-808b610fb325-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-675st\" (UID: \"25c0299c-b4a2-4c82-881f-808b610fb325\") " pod="openshift-authentication/oauth-openshift-558db77b4-675st" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.247387 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/25c0299c-b4a2-4c82-881f-808b610fb325-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-675st\" (UID: \"25c0299c-b4a2-4c82-881f-808b610fb325\") " pod="openshift-authentication/oauth-openshift-558db77b4-675st" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.247469 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/25c0299c-b4a2-4c82-881f-808b610fb325-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-675st\" (UID: \"25c0299c-b4a2-4c82-881f-808b610fb325\") " pod="openshift-authentication/oauth-openshift-558db77b4-675st" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.247654 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" 
(UniqueName: \"kubernetes.io/configmap/0d54cdef-872b-4b15-ad66-92a5aa695143-trusted-ca\") pod \"image-registry-697d97f7c8-d87nj\" (UID: \"0d54cdef-872b-4b15-ad66-92a5aa695143\") " pod="openshift-image-registry/image-registry-697d97f7c8-d87nj" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.247812 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae87db3d-c73d-4e96-8cda-ca3ac846f9b5-service-ca-bundle\") pod \"router-default-5444994796-d2vf7\" (UID: \"ae87db3d-c73d-4e96-8cda-ca3ac846f9b5\") " pod="openshift-ingress/router-default-5444994796-d2vf7" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.247932 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/25c0299c-b4a2-4c82-881f-808b610fb325-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-675st\" (UID: \"25c0299c-b4a2-4c82-881f-808b610fb325\") " pod="openshift-authentication/oauth-openshift-558db77b4-675st" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.248033 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxhkj\" (UniqueName: \"kubernetes.io/projected/25c0299c-b4a2-4c82-881f-808b610fb325-kube-api-access-hxhkj\") pod \"oauth-openshift-558db77b4-675st\" (UID: \"25c0299c-b4a2-4c82-881f-808b610fb325\") " pod="openshift-authentication/oauth-openshift-558db77b4-675st" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.248410 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/25c0299c-b4a2-4c82-881f-808b610fb325-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-675st\" (UID: \"25c0299c-b4a2-4c82-881f-808b610fb325\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-675st" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.253539 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5l4m\" (UniqueName: \"kubernetes.io/projected/ae87db3d-c73d-4e96-8cda-ca3ac846f9b5-kube-api-access-t5l4m\") pod \"router-default-5444994796-d2vf7\" (UID: \"ae87db3d-c73d-4e96-8cda-ca3ac846f9b5\") " pod="openshift-ingress/router-default-5444994796-d2vf7" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.253717 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be88c107-f404-4d93-b59e-471220c045ec-serving-cert\") pod \"console-operator-58897d9998-v8bqq\" (UID: \"be88c107-f404-4d93-b59e-471220c045ec\") " pod="openshift-console-operator/console-operator-58897d9998-v8bqq" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.253818 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0d54cdef-872b-4b15-ad66-92a5aa695143-registry-certificates\") pod \"image-registry-697d97f7c8-d87nj\" (UID: \"0d54cdef-872b-4b15-ad66-92a5aa695143\") " pod="openshift-image-registry/image-registry-697d97f7c8-d87nj" Feb 16 22:48:22 crc kubenswrapper[4865]: E0216 22:48:22.258157 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 22:48:22.758119344 +0000 UTC m=+143.081826325 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d87nj" (UID: "0d54cdef-872b-4b15-ad66-92a5aa695143") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.258901 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47dbdc59-3676-456b-b077-8943ea216379-config\") pod \"kube-apiserver-operator-766d6c64bb-ls5tt\" (UID: \"47dbdc59-3676-456b-b077-8943ea216379\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ls5tt" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.286052 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8tpfm" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.299298 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dqspk" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.344231 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-676jr" event={"ID":"c4592f72-2b39-47bf-beed-e53bf3865b22","Type":"ContainerStarted","Data":"458861664dc591d771b1fdab85354675d8e38e52ce6056a5eaa181c3e7411865"} Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.344300 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-676jr" event={"ID":"c4592f72-2b39-47bf-beed-e53bf3865b22","Type":"ContainerStarted","Data":"7c8997b23135e546063ba40887209138bcebbc2dd24d8f313a671b4bf3e5ca3e"} Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.344315 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-676jr" event={"ID":"c4592f72-2b39-47bf-beed-e53bf3865b22","Type":"ContainerStarted","Data":"a54dffae0eb7c5445cf204bbc07db0bad25baa8271ddb6f38573c29c2509dfb3"} Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.348765 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vgbjc" event={"ID":"3942bb88-5f12-4c66-b471-1e5ef2011792","Type":"ContainerStarted","Data":"48e5cb7322eeac918424fea1ecdd5f7ee46956839072c54d9edf4e1f2acaff94"} Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.348809 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vgbjc" event={"ID":"3942bb88-5f12-4c66-b471-1e5ef2011792","Type":"ContainerStarted","Data":"655702226f52bbdd582f2d5d61f720d1d4570c530d75e456c520e22bbd9fc613"} Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.348822 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vgbjc" event={"ID":"3942bb88-5f12-4c66-b471-1e5ef2011792","Type":"ContainerStarted","Data":"76409a24e0ee4493b86c3179fc2e6f462a1ce5bcec722a04a80eb666530516d1"} Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.354077 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kqnz8" event={"ID":"38a65176-2b3a-47d7-ae00-8d625d7e3686","Type":"ContainerStarted","Data":"5da43bcabde38a7707fc5c669abcd80cc2182e42dbfe4a9b488c2aa0c4ebe9b4"} Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.354151 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kqnz8" event={"ID":"38a65176-2b3a-47d7-ae00-8d625d7e3686","Type":"ContainerStarted","Data":"9dce84e212f8be038f6d604c050a54460c9cb1168218ac1404ed54fcb26adb6a"} Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.360221 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.360307 4865 generic.go:334] "Generic (PLEG): container finished" podID="d673e2eb-0082-4b86-91c5-5d83376c2275" containerID="9cc8a36d7ef51d084ed985e39cb1f588b96e110d717daaed98156f94606ac957" exitCode=0 Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.360533 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/25c0299c-b4a2-4c82-881f-808b610fb325-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-675st\" (UID: \"25c0299c-b4a2-4c82-881f-808b610fb325\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-675st" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.360568 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/25c0299c-b4a2-4c82-881f-808b610fb325-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-675st\" (UID: \"25c0299c-b4a2-4c82-881f-808b610fb325\") " pod="openshift-authentication/oauth-openshift-558db77b4-675st" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.360597 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrmgs\" (UniqueName: \"kubernetes.io/projected/572227eb-5d34-4fef-a0ea-f01abc14b398-kube-api-access-qrmgs\") pod \"ingress-canary-fz5qr\" (UID: \"572227eb-5d34-4fef-a0ea-f01abc14b398\") " pod="openshift-ingress-canary/ingress-canary-fz5qr" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.360619 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/29de9344-e059-4080-8c9f-9d07027204f7-plugins-dir\") pod \"csi-hostpathplugin-k58w4\" (UID: \"29de9344-e059-4080-8c9f-9d07027204f7\") " pod="hostpath-provisioner/csi-hostpathplugin-k58w4" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.360643 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ca74042e-8e33-4ee4-aaa9-57fe06d4c710-metrics-tls\") pod \"dns-default-8sd85\" (UID: \"ca74042e-8e33-4ee4-aaa9-57fe06d4c710\") " pod="openshift-dns/dns-default-8sd85" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.360669 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0d54cdef-872b-4b15-ad66-92a5aa695143-trusted-ca\") pod 
\"image-registry-697d97f7c8-d87nj\" (UID: \"0d54cdef-872b-4b15-ad66-92a5aa695143\") " pod="openshift-image-registry/image-registry-697d97f7c8-d87nj" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.360692 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae87db3d-c73d-4e96-8cda-ca3ac846f9b5-service-ca-bundle\") pod \"router-default-5444994796-d2vf7\" (UID: \"ae87db3d-c73d-4e96-8cda-ca3ac846f9b5\") " pod="openshift-ingress/router-default-5444994796-d2vf7" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.360717 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fcbe8037-e83d-4581-9759-a7accf45937a-apiservice-cert\") pod \"packageserver-d55dfcdfc-6zkrs\" (UID: \"fcbe8037-e83d-4581-9759-a7accf45937a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6zkrs" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.360747 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/25c0299c-b4a2-4c82-881f-808b610fb325-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-675st\" (UID: \"25c0299c-b4a2-4c82-881f-808b610fb325\") " pod="openshift-authentication/oauth-openshift-558db77b4-675st" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.360768 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxhkj\" (UniqueName: \"kubernetes.io/projected/25c0299c-b4a2-4c82-881f-808b610fb325-kube-api-access-hxhkj\") pod \"oauth-openshift-558db77b4-675st\" (UID: \"25c0299c-b4a2-4c82-881f-808b610fb325\") " pod="openshift-authentication/oauth-openshift-558db77b4-675st" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.360792 4865 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/25c0299c-b4a2-4c82-881f-808b610fb325-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-675st\" (UID: \"25c0299c-b4a2-4c82-881f-808b610fb325\") " pod="openshift-authentication/oauth-openshift-558db77b4-675st" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.360816 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/09255158-b7b9-4a34-8fe3-0b7864ec5f11-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-qq9vk\" (UID: \"09255158-b7b9-4a34-8fe3-0b7864ec5f11\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qq9vk" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.360845 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be88c107-f404-4d93-b59e-471220c045ec-serving-cert\") pod \"console-operator-58897d9998-v8bqq\" (UID: \"be88c107-f404-4d93-b59e-471220c045ec\") " pod="openshift-console-operator/console-operator-58897d9998-v8bqq" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.360869 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ba42cc5b-5809-49fb-a5e9-d9aaa39fe272-certs\") pod \"machine-config-server-bvddw\" (UID: \"ba42cc5b-5809-49fb-a5e9-d9aaa39fe272\") " pod="openshift-machine-config-operator/machine-config-server-bvddw" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.360889 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxm52\" (UniqueName: \"kubernetes.io/projected/f6b57ccb-043a-40ad-b510-2ed6b3683c97-kube-api-access-dxm52\") pod \"olm-operator-6b444d44fb-zfxvb\" (UID: 
\"f6b57ccb-043a-40ad-b510-2ed6b3683c97\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zfxvb" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.360912 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/29de9344-e059-4080-8c9f-9d07027204f7-socket-dir\") pod \"csi-hostpathplugin-k58w4\" (UID: \"29de9344-e059-4080-8c9f-9d07027204f7\") " pod="hostpath-provisioner/csi-hostpathplugin-k58w4" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.360935 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/29de9344-e059-4080-8c9f-9d07027204f7-csi-data-dir\") pod \"csi-hostpathplugin-k58w4\" (UID: \"29de9344-e059-4080-8c9f-9d07027204f7\") " pod="hostpath-provisioner/csi-hostpathplugin-k58w4" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.360957 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lp6bz\" (UniqueName: \"kubernetes.io/projected/568e629f-a067-47f4-a6e9-ca18ca03e582-kube-api-access-lp6bz\") pod \"kube-storage-version-migrator-operator-b67b599dd-m696v\" (UID: \"568e629f-a067-47f4-a6e9-ca18ca03e582\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-m696v" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.360983 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/46a7e06e-2a57-4c98-9cf9-36c417623bb7-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-8lph5\" (UID: \"46a7e06e-2a57-4c98-9cf9-36c417623bb7\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-8lph5" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.361291 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-xcchk" event={"ID":"d673e2eb-0082-4b86-91c5-5d83376c2275","Type":"ContainerDied","Data":"9cc8a36d7ef51d084ed985e39cb1f588b96e110d717daaed98156f94606ac957"} Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.361323 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-xcchk" event={"ID":"d673e2eb-0082-4b86-91c5-5d83376c2275","Type":"ContainerStarted","Data":"00fbb1f33a5a0fedb1de42cd187f288fa0be4683e5019fed8847625444928af4"} Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.361556 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47dbdc59-3676-456b-b077-8943ea216379-config\") pod \"kube-apiserver-operator-766d6c64bb-ls5tt\" (UID: \"47dbdc59-3676-456b-b077-8943ea216379\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ls5tt" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.361617 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/25c0299c-b4a2-4c82-881f-808b610fb325-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-675st\" (UID: \"25c0299c-b4a2-4c82-881f-808b610fb325\") " pod="openshift-authentication/oauth-openshift-558db77b4-675st" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.361644 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7d625868-f11c-4bcd-b1f0-dbe1d50d24ba-proxy-tls\") pod \"machine-config-controller-84d6567774-f7dqg\" (UID: \"7d625868-f11c-4bcd-b1f0-dbe1d50d24ba\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f7dqg" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.361691 4865 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxp9m\" (UniqueName: \"kubernetes.io/projected/526bdcb9-a06a-4d6a-9793-078e20245455-kube-api-access-vxp9m\") pod \"catalog-operator-68c6474976-z5w9k\" (UID: \"526bdcb9-a06a-4d6a-9793-078e20245455\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-z5w9k" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.361714 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/25c0299c-b4a2-4c82-881f-808b610fb325-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-675st\" (UID: \"25c0299c-b4a2-4c82-881f-808b610fb325\") " pod="openshift-authentication/oauth-openshift-558db77b4-675st" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.361739 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f6b57ccb-043a-40ad-b510-2ed6b3683c97-profile-collector-cert\") pod \"olm-operator-6b444d44fb-zfxvb\" (UID: \"f6b57ccb-043a-40ad-b510-2ed6b3683c97\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zfxvb" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.361783 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/5cfba6b6-3d1e-49d9-902e-b3493e1ffc97-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-qcl4t\" (UID: \"5cfba6b6-3d1e-49d9-902e-b3493e1ffc97\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qcl4t" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.361810 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/09255158-b7b9-4a34-8fe3-0b7864ec5f11-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-qq9vk\" (UID: \"09255158-b7b9-4a34-8fe3-0b7864ec5f11\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qq9vk" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.361861 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snlxn\" (UniqueName: \"kubernetes.io/projected/29de9344-e059-4080-8c9f-9d07027204f7-kube-api-access-snlxn\") pod \"csi-hostpathplugin-k58w4\" (UID: \"29de9344-e059-4080-8c9f-9d07027204f7\") " pod="hostpath-provisioner/csi-hostpathplugin-k58w4" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.361886 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prfpt\" (UniqueName: \"kubernetes.io/projected/48530ee9-1daf-46ac-96d9-3439330122c7-kube-api-access-prfpt\") pod \"migrator-59844c95c7-8l2mn\" (UID: \"48530ee9-1daf-46ac-96d9-3439330122c7\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8l2mn" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.361908 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lg9r5\" (UniqueName: \"kubernetes.io/projected/852722bc-cd18-4344-b8d1-a01be5c6ea33-kube-api-access-lg9r5\") pod \"downloads-7954f5f757-t624g\" (UID: \"852722bc-cd18-4344-b8d1-a01be5c6ea33\") " pod="openshift-console/downloads-7954f5f757-t624g" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.361947 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/25c0299c-b4a2-4c82-881f-808b610fb325-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-675st\" (UID: \"25c0299c-b4a2-4c82-881f-808b610fb325\") " pod="openshift-authentication/oauth-openshift-558db77b4-675st" 
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.361973 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9e1b6e4-4f56-4dc1-a1fb-6040e6e34b12-trusted-ca\") pod \"ingress-operator-5b745b69d9-8mkpj\" (UID: \"f9e1b6e4-4f56-4dc1-a1fb-6040e6e34b12\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8mkpj" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.362002 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7h5d4\" (UniqueName: \"kubernetes.io/projected/5cfba6b6-3d1e-49d9-902e-b3493e1ffc97-kube-api-access-7h5d4\") pod \"control-plane-machine-set-operator-78cbb6b69f-qcl4t\" (UID: \"5cfba6b6-3d1e-49d9-902e-b3493e1ffc97\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qcl4t" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.362043 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f6b57ccb-043a-40ad-b510-2ed6b3683c97-srv-cert\") pod \"olm-operator-6b444d44fb-zfxvb\" (UID: \"f6b57ccb-043a-40ad-b510-2ed6b3683c97\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zfxvb" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.362068 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/47dbdc59-3676-456b-b077-8943ea216379-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-ls5tt\" (UID: \"47dbdc59-3676-456b-b077-8943ea216379\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ls5tt" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.362091 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: 
\"kubernetes.io/secret/7305176e-a416-41e7-8a1c-a92169a8a882-signing-key\") pod \"service-ca-9c57cc56f-2h8wl\" (UID: \"7305176e-a416-41e7-8a1c-a92169a8a882\") " pod="openshift-service-ca/service-ca-9c57cc56f-2h8wl" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.362150 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/29de9344-e059-4080-8c9f-9d07027204f7-mountpoint-dir\") pod \"csi-hostpathplugin-k58w4\" (UID: \"29de9344-e059-4080-8c9f-9d07027204f7\") " pod="hostpath-provisioner/csi-hostpathplugin-k58w4" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.362181 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2gwt\" (UniqueName: \"kubernetes.io/projected/67ee4300-52d2-45bc-8420-9045db672f41-kube-api-access-s2gwt\") pod \"collect-profiles-29521365-bs68x\" (UID: \"67ee4300-52d2-45bc-8420-9045db672f41\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521365-bs68x" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.362207 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsbj7\" (UniqueName: \"kubernetes.io/projected/0d54cdef-872b-4b15-ad66-92a5aa695143-kube-api-access-jsbj7\") pod \"image-registry-697d97f7c8-d87nj\" (UID: \"0d54cdef-872b-4b15-ad66-92a5aa695143\") " pod="openshift-image-registry/image-registry-697d97f7c8-d87nj" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.362235 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/25c0299c-b4a2-4c82-881f-808b610fb325-audit-dir\") pod \"oauth-openshift-558db77b4-675st\" (UID: \"25c0299c-b4a2-4c82-881f-808b610fb325\") " pod="openshift-authentication/oauth-openshift-558db77b4-675st" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.362258 4865 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5jq9\" (UniqueName: \"kubernetes.io/projected/ea816f0c-3205-4862-ace4-8caf8bc8d7a9-kube-api-access-t5jq9\") pod \"package-server-manager-789f6589d5-lwxck\" (UID: \"ea816f0c-3205-4862-ace4-8caf8bc8d7a9\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lwxck" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.362310 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f9e1b6e4-4f56-4dc1-a1fb-6040e6e34b12-metrics-tls\") pod \"ingress-operator-5b745b69d9-8mkpj\" (UID: \"f9e1b6e4-4f56-4dc1-a1fb-6040e6e34b12\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8mkpj" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.362346 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/ae87db3d-c73d-4e96-8cda-ca3ac846f9b5-stats-auth\") pod \"router-default-5444994796-d2vf7\" (UID: \"ae87db3d-c73d-4e96-8cda-ca3ac846f9b5\") " pod="openshift-ingress/router-default-5444994796-d2vf7" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.362374 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/be88c107-f404-4d93-b59e-471220c045ec-trusted-ca\") pod \"console-operator-58897d9998-v8bqq\" (UID: \"be88c107-f404-4d93-b59e-471220c045ec\") " pod="openshift-console-operator/console-operator-58897d9998-v8bqq" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.362400 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/99f280f3-e7be-4a87-b8a9-b097ab14d671-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-xrmqm\" (UID: \"99f280f3-e7be-4a87-b8a9-b097ab14d671\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-xrmqm" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.362424 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/25c0299c-b4a2-4c82-881f-808b610fb325-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-675st\" (UID: \"25c0299c-b4a2-4c82-881f-808b610fb325\") " pod="openshift-authentication/oauth-openshift-558db77b4-675st" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.362500 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pz2s8\" (UniqueName: \"kubernetes.io/projected/f9e1b6e4-4f56-4dc1-a1fb-6040e6e34b12-kube-api-access-pz2s8\") pod \"ingress-operator-5b745b69d9-8mkpj\" (UID: \"f9e1b6e4-4f56-4dc1-a1fb-6040e6e34b12\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8mkpj" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.362523 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7d625868-f11c-4bcd-b1f0-dbe1d50d24ba-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-f7dqg\" (UID: \"7d625868-f11c-4bcd-b1f0-dbe1d50d24ba\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f7dqg" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.362552 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pm4jz\" (UniqueName: \"kubernetes.io/projected/46a7e06e-2a57-4c98-9cf9-36c417623bb7-kube-api-access-pm4jz\") pod \"multus-admission-controller-857f4d67dd-8lph5\" (UID: \"46a7e06e-2a57-4c98-9cf9-36c417623bb7\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-8lph5" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.362579 4865 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7bfl\" (UniqueName: \"kubernetes.io/projected/596fa99e-76fb-4442-ae6d-7dc7e632f377-kube-api-access-f7bfl\") pod \"service-ca-operator-777779d784-hmgzd\" (UID: \"596fa99e-76fb-4442-ae6d-7dc7e632f377\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hmgzd" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.362611 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fcbe8037-e83d-4581-9759-a7accf45937a-webhook-cert\") pod \"packageserver-d55dfcdfc-6zkrs\" (UID: \"fcbe8037-e83d-4581-9759-a7accf45937a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6zkrs" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.362650 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5l4m\" (UniqueName: \"kubernetes.io/projected/ae87db3d-c73d-4e96-8cda-ca3ac846f9b5-kube-api-access-t5l4m\") pod \"router-default-5444994796-d2vf7\" (UID: \"ae87db3d-c73d-4e96-8cda-ca3ac846f9b5\") " pod="openshift-ingress/router-default-5444994796-d2vf7" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.362674 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ca74042e-8e33-4ee4-aaa9-57fe06d4c710-config-volume\") pod \"dns-default-8sd85\" (UID: \"ca74042e-8e33-4ee4-aaa9-57fe06d4c710\") " pod="openshift-dns/dns-default-8sd85" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.362702 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zv6gg\" (UniqueName: \"kubernetes.io/projected/99f280f3-e7be-4a87-b8a9-b097ab14d671-kube-api-access-zv6gg\") pod \"marketplace-operator-79b997595-xrmqm\" (UID: \"99f280f3-e7be-4a87-b8a9-b097ab14d671\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-xrmqm" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.362730 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/99f280f3-e7be-4a87-b8a9-b097ab14d671-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-xrmqm\" (UID: \"99f280f3-e7be-4a87-b8a9-b097ab14d671\") " pod="openshift-marketplace/marketplace-operator-79b997595-xrmqm" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.362756 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0d54cdef-872b-4b15-ad66-92a5aa695143-registry-certificates\") pod \"image-registry-697d97f7c8-d87nj\" (UID: \"0d54cdef-872b-4b15-ad66-92a5aa695143\") " pod="openshift-image-registry/image-registry-697d97f7c8-d87nj" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.362781 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xw6tm\" (UniqueName: \"kubernetes.io/projected/fcbe8037-e83d-4581-9759-a7accf45937a-kube-api-access-xw6tm\") pod \"packageserver-d55dfcdfc-6zkrs\" (UID: \"fcbe8037-e83d-4581-9759-a7accf45937a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6zkrs" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.362802 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f9e1b6e4-4f56-4dc1-a1fb-6040e6e34b12-bound-sa-token\") pod \"ingress-operator-5b745b69d9-8mkpj\" (UID: \"f9e1b6e4-4f56-4dc1-a1fb-6040e6e34b12\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8mkpj" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.362822 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/568e629f-a067-47f4-a6e9-ca18ca03e582-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-m696v\" (UID: \"568e629f-a067-47f4-a6e9-ca18ca03e582\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-m696v" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.362860 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/25c0299c-b4a2-4c82-881f-808b610fb325-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-675st\" (UID: \"25c0299c-b4a2-4c82-881f-808b610fb325\") " pod="openshift-authentication/oauth-openshift-558db77b4-675st" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.362882 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09255158-b7b9-4a34-8fe3-0b7864ec5f11-config\") pod \"kube-controller-manager-operator-78b949d7b-qq9vk\" (UID: \"09255158-b7b9-4a34-8fe3-0b7864ec5f11\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qq9vk" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.362918 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0d54cdef-872b-4b15-ad66-92a5aa695143-registry-tls\") pod \"image-registry-697d97f7c8-d87nj\" (UID: \"0d54cdef-872b-4b15-ad66-92a5aa695143\") " pod="openshift-image-registry/image-registry-697d97f7c8-d87nj" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.370139 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/25c0299c-b4a2-4c82-881f-808b610fb325-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-675st\" (UID: 
\"25c0299c-b4a2-4c82-881f-808b610fb325\") " pod="openshift-authentication/oauth-openshift-558db77b4-675st" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.370749 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/25c0299c-b4a2-4c82-881f-808b610fb325-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-675st\" (UID: \"25c0299c-b4a2-4c82-881f-808b610fb325\") " pod="openshift-authentication/oauth-openshift-558db77b4-675st" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.370794 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/596fa99e-76fb-4442-ae6d-7dc7e632f377-serving-cert\") pod \"service-ca-operator-777779d784-hmgzd\" (UID: \"596fa99e-76fb-4442-ae6d-7dc7e632f377\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hmgzd" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.370827 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rk8kp\" (UniqueName: \"kubernetes.io/projected/7d625868-f11c-4bcd-b1f0-dbe1d50d24ba-kube-api-access-rk8kp\") pod \"machine-config-controller-84d6567774-f7dqg\" (UID: \"7d625868-f11c-4bcd-b1f0-dbe1d50d24ba\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f7dqg" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.370862 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47dbdc59-3676-456b-b077-8943ea216379-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-ls5tt\" (UID: \"47dbdc59-3676-456b-b077-8943ea216379\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ls5tt" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.370905 4865 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/572227eb-5d34-4fef-a0ea-f01abc14b398-cert\") pod \"ingress-canary-fz5qr\" (UID: \"572227eb-5d34-4fef-a0ea-f01abc14b398\") " pod="openshift-ingress-canary/ingress-canary-fz5qr" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.370943 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z546m\" (UniqueName: \"kubernetes.io/projected/be88c107-f404-4d93-b59e-471220c045ec-kube-api-access-z546m\") pod \"console-operator-58897d9998-v8bqq\" (UID: \"be88c107-f404-4d93-b59e-471220c045ec\") " pod="openshift-console-operator/console-operator-58897d9998-v8bqq" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.370970 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/67ee4300-52d2-45bc-8420-9045db672f41-secret-volume\") pod \"collect-profiles-29521365-bs68x\" (UID: \"67ee4300-52d2-45bc-8420-9045db672f41\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521365-bs68x" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.371001 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/596fa99e-76fb-4442-ae6d-7dc7e632f377-config\") pod \"service-ca-operator-777779d784-hmgzd\" (UID: \"596fa99e-76fb-4442-ae6d-7dc7e632f377\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hmgzd" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.371036 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/25c0299c-b4a2-4c82-881f-808b610fb325-audit-policies\") pod \"oauth-openshift-558db77b4-675st\" (UID: \"25c0299c-b4a2-4c82-881f-808b610fb325\") " pod="openshift-authentication/oauth-openshift-558db77b4-675st" Feb 
16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.371073 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5b8f\" (UniqueName: \"kubernetes.io/projected/715cd16c-2512-4885-ae7b-437fd61fcea2-kube-api-access-x5b8f\") pod \"dns-operator-744455d44c-b76ph\" (UID: \"715cd16c-2512-4885-ae7b-437fd61fcea2\") " pod="openshift-dns-operator/dns-operator-744455d44c-b76ph" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.371096 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/fcbe8037-e83d-4581-9759-a7accf45937a-tmpfs\") pod \"packageserver-d55dfcdfc-6zkrs\" (UID: \"fcbe8037-e83d-4581-9759-a7accf45937a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6zkrs" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.371122 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0d54cdef-872b-4b15-ad66-92a5aa695143-bound-sa-token\") pod \"image-registry-697d97f7c8-d87nj\" (UID: \"0d54cdef-872b-4b15-ad66-92a5aa695143\") " pod="openshift-image-registry/image-registry-697d97f7c8-d87nj" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.372298 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/be88c107-f404-4d93-b59e-471220c045ec-trusted-ca\") pod \"console-operator-58897d9998-v8bqq\" (UID: \"be88c107-f404-4d93-b59e-471220c045ec\") " pod="openshift-console-operator/console-operator-58897d9998-v8bqq" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.372400 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/25c0299c-b4a2-4c82-881f-808b610fb325-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-675st\" (UID: 
\"25c0299c-b4a2-4c82-881f-808b610fb325\") " pod="openshift-authentication/oauth-openshift-558db77b4-675st" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.372937 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/25c0299c-b4a2-4c82-881f-808b610fb325-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-675st\" (UID: \"25c0299c-b4a2-4c82-881f-808b610fb325\") " pod="openshift-authentication/oauth-openshift-558db77b4-675st" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.373212 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/25c0299c-b4a2-4c82-881f-808b610fb325-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-675st\" (UID: \"25c0299c-b4a2-4c82-881f-808b610fb325\") " pod="openshift-authentication/oauth-openshift-558db77b4-675st" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.373521 4865 generic.go:334] "Generic (PLEG): container finished" podID="a0e99571-69de-43a8-9136-94d455e348c7" containerID="bbab024fc2e64f0eae307e023257804582e6ec2c5a07c3f1c3910244e8cdd571" exitCode=0 Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.375184 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be88c107-f404-4d93-b59e-471220c045ec-serving-cert\") pod \"console-operator-58897d9998-v8bqq\" (UID: \"be88c107-f404-4d93-b59e-471220c045ec\") " pod="openshift-console-operator/console-operator-58897d9998-v8bqq" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.376179 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0d54cdef-872b-4b15-ad66-92a5aa695143-registry-certificates\") pod \"image-registry-697d97f7c8-d87nj\" (UID: \"0d54cdef-872b-4b15-ad66-92a5aa695143\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-d87nj" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.376186 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/25c0299c-b4a2-4c82-881f-808b610fb325-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-675st\" (UID: \"25c0299c-b4a2-4c82-881f-808b610fb325\") " pod="openshift-authentication/oauth-openshift-558db77b4-675st" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.376794 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47dbdc59-3676-456b-b077-8943ea216379-config\") pod \"kube-apiserver-operator-766d6c64bb-ls5tt\" (UID: \"47dbdc59-3676-456b-b077-8943ea216379\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ls5tt" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.377453 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/25c0299c-b4a2-4c82-881f-808b610fb325-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-675st\" (UID: \"25c0299c-b4a2-4c82-881f-808b610fb325\") " pod="openshift-authentication/oauth-openshift-558db77b4-675st" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.378026 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0d54cdef-872b-4b15-ad66-92a5aa695143-trusted-ca\") pod \"image-registry-697d97f7c8-d87nj\" (UID: \"0d54cdef-872b-4b15-ad66-92a5aa695143\") " pod="openshift-image-registry/image-registry-697d97f7c8-d87nj" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.378483 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-89cf9" 
event={"ID":"a0e99571-69de-43a8-9136-94d455e348c7","Type":"ContainerDied","Data":"bbab024fc2e64f0eae307e023257804582e6ec2c5a07c3f1c3910244e8cdd571"} Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.378543 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-89cf9" event={"ID":"a0e99571-69de-43a8-9136-94d455e348c7","Type":"ContainerStarted","Data":"7c5167cf897263c5715d58ec7ab505128dc78a480efa09f49239c41d60ff333a"} Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.379443 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/25c0299c-b4a2-4c82-881f-808b610fb325-audit-dir\") pod \"oauth-openshift-558db77b4-675st\" (UID: \"25c0299c-b4a2-4c82-881f-808b610fb325\") " pod="openshift-authentication/oauth-openshift-558db77b4-675st" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.379726 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bhtg8" event={"ID":"532d752a-1a19-4674-97c3-d9030ad9537c","Type":"ContainerStarted","Data":"7f540677cee517af4c67e2736e40b67815446275293223547960520027d8bc73"} Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.379775 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bhtg8" event={"ID":"532d752a-1a19-4674-97c3-d9030ad9537c","Type":"ContainerStarted","Data":"a73d5c9c09df5c339a58111331ed2aa30bbc2bdab22c00ecf84decc87ec7439e"} Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.380064 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae87db3d-c73d-4e96-8cda-ca3ac846f9b5-service-ca-bundle\") pod \"router-default-5444994796-d2vf7\" (UID: \"ae87db3d-c73d-4e96-8cda-ca3ac846f9b5\") " pod="openshift-ingress/router-default-5444994796-d2vf7" Feb 16 22:48:22 crc 
kubenswrapper[4865]: I0216 22:48:22.380240 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmk2h\" (UniqueName: \"kubernetes.io/projected/ca74042e-8e33-4ee4-aaa9-57fe06d4c710-kube-api-access-tmk2h\") pod \"dns-default-8sd85\" (UID: \"ca74042e-8e33-4ee4-aaa9-57fe06d4c710\") " pod="openshift-dns/dns-default-8sd85"
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.380300 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/7305176e-a416-41e7-8a1c-a92169a8a882-signing-cabundle\") pod \"service-ca-9c57cc56f-2h8wl\" (UID: \"7305176e-a416-41e7-8a1c-a92169a8a882\") " pod="openshift-service-ca/service-ca-9c57cc56f-2h8wl"
Feb 16 22:48:22 crc kubenswrapper[4865]: E0216 22:48:22.380703 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 22:48:22.880349611 +0000 UTC m=+143.204056572 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.380737 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/29de9344-e059-4080-8c9f-9d07027204f7-registration-dir\") pod \"csi-hostpathplugin-k58w4\" (UID: \"29de9344-e059-4080-8c9f-9d07027204f7\") " pod="hostpath-provisioner/csi-hostpathplugin-k58w4"
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.380786 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ba42cc5b-5809-49fb-a5e9-d9aaa39fe272-node-bootstrap-token\") pod \"machine-config-server-bvddw\" (UID: \"ba42cc5b-5809-49fb-a5e9-d9aaa39fe272\") " pod="openshift-machine-config-operator/machine-config-server-bvddw"
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.381557 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be88c107-f404-4d93-b59e-471220c045ec-config\") pod \"console-operator-58897d9998-v8bqq\" (UID: \"be88c107-f404-4d93-b59e-471220c045ec\") " pod="openshift-console-operator/console-operator-58897d9998-v8bqq"
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.381608 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/526bdcb9-a06a-4d6a-9793-078e20245455-profile-collector-cert\") pod \"catalog-operator-68c6474976-z5w9k\" (UID: \"526bdcb9-a06a-4d6a-9793-078e20245455\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-z5w9k"
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.382098 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0d54cdef-872b-4b15-ad66-92a5aa695143-installation-pull-secrets\") pod \"image-registry-697d97f7c8-d87nj\" (UID: \"0d54cdef-872b-4b15-ad66-92a5aa695143\") " pod="openshift-image-registry/image-registry-697d97f7c8-d87nj"
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.382610 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be88c107-f404-4d93-b59e-471220c045ec-config\") pod \"console-operator-58897d9998-v8bqq\" (UID: \"be88c107-f404-4d93-b59e-471220c045ec\") " pod="openshift-console-operator/console-operator-58897d9998-v8bqq"
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.384173 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0d54cdef-872b-4b15-ad66-92a5aa695143-registry-tls\") pod \"image-registry-697d97f7c8-d87nj\" (UID: \"0d54cdef-872b-4b15-ad66-92a5aa695143\") " pod="openshift-image-registry/image-registry-697d97f7c8-d87nj"
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.384505 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ae87db3d-c73d-4e96-8cda-ca3ac846f9b5-metrics-certs\") pod \"router-default-5444994796-d2vf7\" (UID: \"ae87db3d-c73d-4e96-8cda-ca3ac846f9b5\") " pod="openshift-ingress/router-default-5444994796-d2vf7"
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.384518 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-5m56v" event={"ID":"566ba776-350d-4994-948d-bbbf37ae5ddc","Type":"ContainerStarted","Data":"2cd2a109efba186098d7066442e3df952fc51c2a2033c02c734f8dd5c8bd698e"}
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.384569 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-5m56v" event={"ID":"566ba776-350d-4994-948d-bbbf37ae5ddc","Type":"ContainerStarted","Data":"99239deb099453e4a80ca1be970e47636af070cbe29f2f4d37eb93000d2d6952"}
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.384590 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/67ee4300-52d2-45bc-8420-9045db672f41-config-volume\") pod \"collect-profiles-29521365-bs68x\" (UID: \"67ee4300-52d2-45bc-8420-9045db672f41\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521365-bs68x"
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.384890 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/715cd16c-2512-4885-ae7b-437fd61fcea2-metrics-tls\") pod \"dns-operator-744455d44c-b76ph\" (UID: \"715cd16c-2512-4885-ae7b-437fd61fcea2\") " pod="openshift-dns-operator/dns-operator-744455d44c-b76ph"
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.385392 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47dbdc59-3676-456b-b077-8943ea216379-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-ls5tt\" (UID: \"47dbdc59-3676-456b-b077-8943ea216379\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ls5tt"
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.386919 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/568e629f-a067-47f4-a6e9-ca18ca03e582-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-m696v\" (UID: \"568e629f-a067-47f4-a6e9-ca18ca03e582\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-m696v"
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.387030 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0d54cdef-872b-4b15-ad66-92a5aa695143-ca-trust-extracted\") pod \"image-registry-697d97f7c8-d87nj\" (UID: \"0d54cdef-872b-4b15-ad66-92a5aa695143\") " pod="openshift-image-registry/image-registry-697d97f7c8-d87nj"
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.387347 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0d54cdef-872b-4b15-ad66-92a5aa695143-ca-trust-extracted\") pod \"image-registry-697d97f7c8-d87nj\" (UID: \"0d54cdef-872b-4b15-ad66-92a5aa695143\") " pod="openshift-image-registry/image-registry-697d97f7c8-d87nj"
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.387413 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/ea816f0c-3205-4862-ace4-8caf8bc8d7a9-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-lwxck\" (UID: \"ea816f0c-3205-4862-ace4-8caf8bc8d7a9\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lwxck"
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.387448 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/25c0299c-b4a2-4c82-881f-808b610fb325-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-675st\" (UID: \"25c0299c-b4a2-4c82-881f-808b610fb325\") " pod="openshift-authentication/oauth-openshift-558db77b4-675st"
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.387490 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/526bdcb9-a06a-4d6a-9793-078e20245455-srv-cert\") pod \"catalog-operator-68c6474976-z5w9k\" (UID: \"526bdcb9-a06a-4d6a-9793-078e20245455\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-z5w9k"
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.387595 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/ae87db3d-c73d-4e96-8cda-ca3ac846f9b5-default-certificate\") pod \"router-default-5444994796-d2vf7\" (UID: \"ae87db3d-c73d-4e96-8cda-ca3ac846f9b5\") " pod="openshift-ingress/router-default-5444994796-d2vf7"
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.387636 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqvw5\" (UniqueName: \"kubernetes.io/projected/7305176e-a416-41e7-8a1c-a92169a8a882-kube-api-access-qqvw5\") pod \"service-ca-9c57cc56f-2h8wl\" (UID: \"7305176e-a416-41e7-8a1c-a92169a8a882\") " pod="openshift-service-ca/service-ca-9c57cc56f-2h8wl"
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.387849 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8bcf\" (UniqueName: \"kubernetes.io/projected/ba42cc5b-5809-49fb-a5e9-d9aaa39fe272-kube-api-access-z8bcf\") pod \"machine-config-server-bvddw\" (UID: \"ba42cc5b-5809-49fb-a5e9-d9aaa39fe272\") " pod="openshift-machine-config-operator/machine-config-server-bvddw"
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.390094 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/25c0299c-b4a2-4c82-881f-808b610fb325-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-675st\" (UID: \"25c0299c-b4a2-4c82-881f-808b610fb325\") " pod="openshift-authentication/oauth-openshift-558db77b4-675st"
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.390609 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/ae87db3d-c73d-4e96-8cda-ca3ac846f9b5-default-certificate\") pod \"router-default-5444994796-d2vf7\" (UID: \"ae87db3d-c73d-4e96-8cda-ca3ac846f9b5\") " pod="openshift-ingress/router-default-5444994796-d2vf7"
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.390738 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/25c0299c-b4a2-4c82-881f-808b610fb325-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-675st\" (UID: \"25c0299c-b4a2-4c82-881f-808b610fb325\") " pod="openshift-authentication/oauth-openshift-558db77b4-675st"
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.390775 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/25c0299c-b4a2-4c82-881f-808b610fb325-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-675st\" (UID: \"25c0299c-b4a2-4c82-881f-808b610fb325\") " pod="openshift-authentication/oauth-openshift-558db77b4-675st"
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.390776 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/25c0299c-b4a2-4c82-881f-808b610fb325-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-675st\" (UID: \"25c0299c-b4a2-4c82-881f-808b610fb325\") " pod="openshift-authentication/oauth-openshift-558db77b4-675st"
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.391372 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/ae87db3d-c73d-4e96-8cda-ca3ac846f9b5-stats-auth\") pod \"router-default-5444994796-d2vf7\" (UID: \"ae87db3d-c73d-4e96-8cda-ca3ac846f9b5\") " pod="openshift-ingress/router-default-5444994796-d2vf7"
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.395387 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ae87db3d-c73d-4e96-8cda-ca3ac846f9b5-metrics-certs\") pod \"router-default-5444994796-d2vf7\" (UID: \"ae87db3d-c73d-4e96-8cda-ca3ac846f9b5\") " pod="openshift-ingress/router-default-5444994796-d2vf7"
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.395901 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/715cd16c-2512-4885-ae7b-437fd61fcea2-metrics-tls\") pod \"dns-operator-744455d44c-b76ph\" (UID: \"715cd16c-2512-4885-ae7b-437fd61fcea2\") " pod="openshift-dns-operator/dns-operator-744455d44c-b76ph"
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.396087 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/25c0299c-b4a2-4c82-881f-808b610fb325-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-675st\" (UID: \"25c0299c-b4a2-4c82-881f-808b610fb325\") " pod="openshift-authentication/oauth-openshift-558db77b4-675st"
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.396600 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/5cfba6b6-3d1e-49d9-902e-b3493e1ffc97-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-qcl4t\" (UID: \"5cfba6b6-3d1e-49d9-902e-b3493e1ffc97\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qcl4t"
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.400127 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/25c0299c-b4a2-4c82-881f-808b610fb325-audit-policies\") pod \"oauth-openshift-558db77b4-675st\" (UID: \"25c0299c-b4a2-4c82-881f-808b610fb325\") " pod="openshift-authentication/oauth-openshift-558db77b4-675st"
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.400679 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0d54cdef-872b-4b15-ad66-92a5aa695143-installation-pull-secrets\") pod \"image-registry-697d97f7c8-d87nj\" (UID: \"0d54cdef-872b-4b15-ad66-92a5aa695143\") " pod="openshift-image-registry/image-registry-697d97f7c8-d87nj"
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.409375 4865 generic.go:334] "Generic (PLEG): container finished" podID="b00a412f-f2d5-4ad5-a9e5-67efc9e40682" containerID="7b3de931ba38e15dc525a3af233e74a94c4847336cfe2eb5193e53e702eb064a" exitCode=0
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.409501 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8dbg" event={"ID":"b00a412f-f2d5-4ad5-a9e5-67efc9e40682","Type":"ContainerDied","Data":"7b3de931ba38e15dc525a3af233e74a94c4847336cfe2eb5193e53e702eb064a"}
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.409533 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8dbg" event={"ID":"b00a412f-f2d5-4ad5-a9e5-67efc9e40682","Type":"ContainerStarted","Data":"c276b4e4083cda6693523a37d124689968e6c1f4407ce34a4250872bde84bf06"}
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.410686 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/47dbdc59-3676-456b-b077-8943ea216379-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-ls5tt\" (UID: \"47dbdc59-3676-456b-b077-8943ea216379\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ls5tt"
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.413621 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-pzmrk" event={"ID":"4e04fc9e-f926-473e-bdf6-f59166ab52f0","Type":"ContainerStarted","Data":"d4da7e037d0f8edbc19f1d658169fa06e4e45fc0e7c6e3882cae996325b6a4f8"}
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.413691 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-pzmrk" event={"ID":"4e04fc9e-f926-473e-bdf6-f59166ab52f0","Type":"ContainerStarted","Data":"abd5767beec1b839d22b2e4166262ac6416520be79a4317285a9f3f5d2187417"}
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.413710 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-pzmrk"
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.414112 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxhkj\" (UniqueName: \"kubernetes.io/projected/25c0299c-b4a2-4c82-881f-808b610fb325-kube-api-access-hxhkj\") pod \"oauth-openshift-558db77b4-675st\" (UID: \"25c0299c-b4a2-4c82-881f-808b610fb325\") " pod="openshift-authentication/oauth-openshift-558db77b4-675st"
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.420633 4865 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-pzmrk container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body=
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.420703 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-pzmrk" podUID="4e04fc9e-f926-473e-bdf6-f59166ab52f0" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused"
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.438140 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5l4m\" (UniqueName: \"kubernetes.io/projected/ae87db3d-c73d-4e96-8cda-ca3ac846f9b5-kube-api-access-t5l4m\") pod \"router-default-5444994796-d2vf7\" (UID: \"ae87db3d-c73d-4e96-8cda-ca3ac846f9b5\") " pod="openshift-ingress/router-default-5444994796-d2vf7"
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.461839 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-b5kw7" event={"ID":"8ae2e7ca-329c-4bd8-b003-05fa0d7f60fd","Type":"ContainerStarted","Data":"13a1dd5ede00dce29e8859a64bec5038a3c47d0525b8c70f85d65d63695de3fb"}
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.461902 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-669dd"]
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.461925 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-b5kw7" event={"ID":"8ae2e7ca-329c-4bd8-b003-05fa0d7f60fd","Type":"ContainerStarted","Data":"95cee9ae3be72a5a384f29a33003340e6b2213a60d1552a869c733fab5dadf79"}
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.461937 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-f6stv" event={"ID":"9fbcd436-7bcc-4873-afac-24404c1acc95","Type":"ContainerStarted","Data":"4a34a4fe29b0c8114ff004bedf6059d5b2ccd34def1e727c5ffe532ae32527bb"}
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.461950 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-f6stv" event={"ID":"9fbcd436-7bcc-4873-afac-24404c1acc95","Type":"ContainerStarted","Data":"488fd453906b9e4baabd93b2a2faa636aa13b3d3e2a46b27e6fe8444997bf0ad"}
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.461961 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t7gvl" event={"ID":"1971f3bb-0512-467d-b440-1330bcf97c59","Type":"ContainerStarted","Data":"933cfa2348cbf6af715f50fe68bc761bca11549d4693d3f5c578c7734ac6040f"}
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.461974 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t7gvl" event={"ID":"1971f3bb-0512-467d-b440-1330bcf97c59","Type":"ContainerStarted","Data":"091ea43e5db089e2292d4e85b4fc51ce40b8224c77d6f570d575e650de3809cf"}
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.465020 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsbj7\" (UniqueName: \"kubernetes.io/projected/0d54cdef-872b-4b15-ad66-92a5aa695143-kube-api-access-jsbj7\") pod \"image-registry-697d97f7c8-d87nj\" (UID: \"0d54cdef-872b-4b15-ad66-92a5aa695143\") " pod="openshift-image-registry/image-registry-697d97f7c8-d87nj"
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.491565 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fcbe8037-e83d-4581-9759-a7accf45937a-webhook-cert\") pod \"packageserver-d55dfcdfc-6zkrs\" (UID: \"fcbe8037-e83d-4581-9759-a7accf45937a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6zkrs"
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.491632 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ca74042e-8e33-4ee4-aaa9-57fe06d4c710-config-volume\") pod \"dns-default-8sd85\" (UID: \"ca74042e-8e33-4ee4-aaa9-57fe06d4c710\") " pod="openshift-dns/dns-default-8sd85"
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.491658 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zv6gg\" (UniqueName: \"kubernetes.io/projected/99f280f3-e7be-4a87-b8a9-b097ab14d671-kube-api-access-zv6gg\") pod \"marketplace-operator-79b997595-xrmqm\" (UID: \"99f280f3-e7be-4a87-b8a9-b097ab14d671\") " pod="openshift-marketplace/marketplace-operator-79b997595-xrmqm"
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.491676 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xw6tm\" (UniqueName: \"kubernetes.io/projected/fcbe8037-e83d-4581-9759-a7accf45937a-kube-api-access-xw6tm\") pod \"packageserver-d55dfcdfc-6zkrs\" (UID: \"fcbe8037-e83d-4581-9759-a7accf45937a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6zkrs"
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.491696 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/99f280f3-e7be-4a87-b8a9-b097ab14d671-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-xrmqm\" (UID: \"99f280f3-e7be-4a87-b8a9-b097ab14d671\") " pod="openshift-marketplace/marketplace-operator-79b997595-xrmqm"
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.491730 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f9e1b6e4-4f56-4dc1-a1fb-6040e6e34b12-bound-sa-token\") pod \"ingress-operator-5b745b69d9-8mkpj\" (UID: \"f9e1b6e4-4f56-4dc1-a1fb-6040e6e34b12\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8mkpj"
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.491748 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/568e629f-a067-47f4-a6e9-ca18ca03e582-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-m696v\" (UID: \"568e629f-a067-47f4-a6e9-ca18ca03e582\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-m696v"
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.491791 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09255158-b7b9-4a34-8fe3-0b7864ec5f11-config\") pod \"kube-controller-manager-operator-78b949d7b-qq9vk\" (UID: \"09255158-b7b9-4a34-8fe3-0b7864ec5f11\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qq9vk"
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.491824 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/596fa99e-76fb-4442-ae6d-7dc7e632f377-serving-cert\") pod \"service-ca-operator-777779d784-hmgzd\" (UID: \"596fa99e-76fb-4442-ae6d-7dc7e632f377\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hmgzd"
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.491843 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rk8kp\" (UniqueName: \"kubernetes.io/projected/7d625868-f11c-4bcd-b1f0-dbe1d50d24ba-kube-api-access-rk8kp\") pod \"machine-config-controller-84d6567774-f7dqg\" (UID: \"7d625868-f11c-4bcd-b1f0-dbe1d50d24ba\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f7dqg"
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.491869 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/572227eb-5d34-4fef-a0ea-f01abc14b398-cert\") pod \"ingress-canary-fz5qr\" (UID: \"572227eb-5d34-4fef-a0ea-f01abc14b398\") " pod="openshift-ingress-canary/ingress-canary-fz5qr"
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.491910 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/67ee4300-52d2-45bc-8420-9045db672f41-secret-volume\") pod \"collect-profiles-29521365-bs68x\" (UID: \"67ee4300-52d2-45bc-8420-9045db672f41\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521365-bs68x"
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.491935 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/596fa99e-76fb-4442-ae6d-7dc7e632f377-config\") pod \"service-ca-operator-777779d784-hmgzd\" (UID: \"596fa99e-76fb-4442-ae6d-7dc7e632f377\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hmgzd"
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.491975 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/fcbe8037-e83d-4581-9759-a7accf45937a-tmpfs\") pod \"packageserver-d55dfcdfc-6zkrs\" (UID: \"fcbe8037-e83d-4581-9759-a7accf45937a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6zkrs"
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.492008 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmk2h\" (UniqueName: \"kubernetes.io/projected/ca74042e-8e33-4ee4-aaa9-57fe06d4c710-kube-api-access-tmk2h\") pod \"dns-default-8sd85\" (UID: \"ca74042e-8e33-4ee4-aaa9-57fe06d4c710\") " pod="openshift-dns/dns-default-8sd85"
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.492035 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/7305176e-a416-41e7-8a1c-a92169a8a882-signing-cabundle\") pod \"service-ca-9c57cc56f-2h8wl\" (UID: \"7305176e-a416-41e7-8a1c-a92169a8a882\") " pod="openshift-service-ca/service-ca-9c57cc56f-2h8wl"
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.492051 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/29de9344-e059-4080-8c9f-9d07027204f7-registration-dir\") pod \"csi-hostpathplugin-k58w4\" (UID: \"29de9344-e059-4080-8c9f-9d07027204f7\") " pod="hostpath-provisioner/csi-hostpathplugin-k58w4"
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.492080 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ba42cc5b-5809-49fb-a5e9-d9aaa39fe272-node-bootstrap-token\") pod \"machine-config-server-bvddw\" (UID: \"ba42cc5b-5809-49fb-a5e9-d9aaa39fe272\") " pod="openshift-machine-config-operator/machine-config-server-bvddw"
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.492099 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/526bdcb9-a06a-4d6a-9793-078e20245455-profile-collector-cert\") pod \"catalog-operator-68c6474976-z5w9k\" (UID: \"526bdcb9-a06a-4d6a-9793-078e20245455\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-z5w9k"
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.492142 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/67ee4300-52d2-45bc-8420-9045db672f41-config-volume\") pod \"collect-profiles-29521365-bs68x\" (UID: \"67ee4300-52d2-45bc-8420-9045db672f41\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521365-bs68x"
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.492162 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/568e629f-a067-47f4-a6e9-ca18ca03e582-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-m696v\" (UID: \"568e629f-a067-47f4-a6e9-ca18ca03e582\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-m696v"
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.492208 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/526bdcb9-a06a-4d6a-9793-078e20245455-srv-cert\") pod \"catalog-operator-68c6474976-z5w9k\" (UID: \"526bdcb9-a06a-4d6a-9793-078e20245455\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-z5w9k"
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.492228 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/ea816f0c-3205-4862-ace4-8caf8bc8d7a9-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-lwxck\" (UID: \"ea816f0c-3205-4862-ace4-8caf8bc8d7a9\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lwxck"
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.492256 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqvw5\" (UniqueName: \"kubernetes.io/projected/7305176e-a416-41e7-8a1c-a92169a8a882-kube-api-access-qqvw5\") pod \"service-ca-9c57cc56f-2h8wl\" (UID: \"7305176e-a416-41e7-8a1c-a92169a8a882\") " pod="openshift-service-ca/service-ca-9c57cc56f-2h8wl"
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.492288 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8bcf\" (UniqueName: \"kubernetes.io/projected/ba42cc5b-5809-49fb-a5e9-d9aaa39fe272-kube-api-access-z8bcf\") pod \"machine-config-server-bvddw\" (UID: \"ba42cc5b-5809-49fb-a5e9-d9aaa39fe272\") " pod="openshift-machine-config-operator/machine-config-server-bvddw"
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.492360 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/29de9344-e059-4080-8c9f-9d07027204f7-plugins-dir\") pod \"csi-hostpathplugin-k58w4\" (UID: \"29de9344-e059-4080-8c9f-9d07027204f7\") " pod="hostpath-provisioner/csi-hostpathplugin-k58w4"
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.492376 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrmgs\" (UniqueName: \"kubernetes.io/projected/572227eb-5d34-4fef-a0ea-f01abc14b398-kube-api-access-qrmgs\") pod \"ingress-canary-fz5qr\" (UID: \"572227eb-5d34-4fef-a0ea-f01abc14b398\") " pod="openshift-ingress-canary/ingress-canary-fz5qr"
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.492391 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ca74042e-8e33-4ee4-aaa9-57fe06d4c710-metrics-tls\") pod \"dns-default-8sd85\" (UID: \"ca74042e-8e33-4ee4-aaa9-57fe06d4c710\") " pod="openshift-dns/dns-default-8sd85"
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.492427 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fcbe8037-e83d-4581-9759-a7accf45937a-apiservice-cert\") pod \"packageserver-d55dfcdfc-6zkrs\" (UID: \"fcbe8037-e83d-4581-9759-a7accf45937a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6zkrs"
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.492450 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/09255158-b7b9-4a34-8fe3-0b7864ec5f11-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-qq9vk\" (UID: \"09255158-b7b9-4a34-8fe3-0b7864ec5f11\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qq9vk"
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.492475 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ba42cc5b-5809-49fb-a5e9-d9aaa39fe272-certs\") pod \"machine-config-server-bvddw\" (UID: \"ba42cc5b-5809-49fb-a5e9-d9aaa39fe272\") " pod="openshift-machine-config-operator/machine-config-server-bvddw"
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.492492 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxm52\" (UniqueName: \"kubernetes.io/projected/f6b57ccb-043a-40ad-b510-2ed6b3683c97-kube-api-access-dxm52\") pod \"olm-operator-6b444d44fb-zfxvb\" (UID: \"f6b57ccb-043a-40ad-b510-2ed6b3683c97\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zfxvb"
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.492511 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lp6bz\" (UniqueName: \"kubernetes.io/projected/568e629f-a067-47f4-a6e9-ca18ca03e582-kube-api-access-lp6bz\") pod \"kube-storage-version-migrator-operator-b67b599dd-m696v\" (UID: \"568e629f-a067-47f4-a6e9-ca18ca03e582\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-m696v"
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.492529 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/46a7e06e-2a57-4c98-9cf9-36c417623bb7-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-8lph5\" (UID: \"46a7e06e-2a57-4c98-9cf9-36c417623bb7\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-8lph5"
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.492546 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/29de9344-e059-4080-8c9f-9d07027204f7-socket-dir\") pod \"csi-hostpathplugin-k58w4\" (UID: \"29de9344-e059-4080-8c9f-9d07027204f7\") " pod="hostpath-provisioner/csi-hostpathplugin-k58w4"
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.492562 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/29de9344-e059-4080-8c9f-9d07027204f7-csi-data-dir\") pod \"csi-hostpathplugin-k58w4\" (UID: \"29de9344-e059-4080-8c9f-9d07027204f7\") " pod="hostpath-provisioner/csi-hostpathplugin-k58w4"
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.492606 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7d625868-f11c-4bcd-b1f0-dbe1d50d24ba-proxy-tls\") pod \"machine-config-controller-84d6567774-f7dqg\" (UID: \"7d625868-f11c-4bcd-b1f0-dbe1d50d24ba\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f7dqg"
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.492622 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxp9m\" (UniqueName: \"kubernetes.io/projected/526bdcb9-a06a-4d6a-9793-078e20245455-kube-api-access-vxp9m\") pod \"catalog-operator-68c6474976-z5w9k\" (UID: \"526bdcb9-a06a-4d6a-9793-078e20245455\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-z5w9k"
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.505725 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/29de9344-e059-4080-8c9f-9d07027204f7-registration-dir\") pod \"csi-hostpathplugin-k58w4\" (UID: \"29de9344-e059-4080-8c9f-9d07027204f7\") " pod="hostpath-provisioner/csi-hostpathplugin-k58w4"
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.507753 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/7305176e-a416-41e7-8a1c-a92169a8a882-signing-cabundle\") pod \"service-ca-9c57cc56f-2h8wl\" (UID: \"7305176e-a416-41e7-8a1c-a92169a8a882\") " pod="openshift-service-ca/service-ca-9c57cc56f-2h8wl"
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.509913 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/29de9344-e059-4080-8c9f-9d07027204f7-csi-data-dir\") pod \"csi-hostpathplugin-k58w4\" (UID: \"29de9344-e059-4080-8c9f-9d07027204f7\") " pod="hostpath-provisioner/csi-hostpathplugin-k58w4"
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.509939 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/596fa99e-76fb-4442-ae6d-7dc7e632f377-config\") pod \"service-ca-operator-777779d784-hmgzd\" (UID: \"596fa99e-76fb-4442-ae6d-7dc7e632f377\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hmgzd"
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.510584 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-tztwm"]
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.511553 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/fcbe8037-e83d-4581-9759-a7accf45937a-tmpfs\") pod \"packageserver-d55dfcdfc-6zkrs\" (UID: \"fcbe8037-e83d-4581-9759-a7accf45937a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6zkrs"
Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.513109 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/67ee4300-52d2-45bc-8420-9045db672f41-config-volume\") pod \"collect-profiles-29521365-bs68x\" (UID: \"67ee4300-52d2-45bc-8420-9045db672f41\") "
pod="openshift-operator-lifecycle-manager/collect-profiles-29521365-bs68x" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.514971 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/29de9344-e059-4080-8c9f-9d07027204f7-socket-dir\") pod \"csi-hostpathplugin-k58w4\" (UID: \"29de9344-e059-4080-8c9f-9d07027204f7\") " pod="hostpath-provisioner/csi-hostpathplugin-k58w4" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.492663 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f6b57ccb-043a-40ad-b510-2ed6b3683c97-profile-collector-cert\") pod \"olm-operator-6b444d44fb-zfxvb\" (UID: \"f6b57ccb-043a-40ad-b510-2ed6b3683c97\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zfxvb" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.515917 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09255158-b7b9-4a34-8fe3-0b7864ec5f11-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-qq9vk\" (UID: \"09255158-b7b9-4a34-8fe3-0b7864ec5f11\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qq9vk" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.515955 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d87nj\" (UID: \"0d54cdef-872b-4b15-ad66-92a5aa695143\") " pod="openshift-image-registry/image-registry-697d97f7c8-d87nj" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.515984 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prfpt\" (UniqueName: 
\"kubernetes.io/projected/48530ee9-1daf-46ac-96d9-3439330122c7-kube-api-access-prfpt\") pod \"migrator-59844c95c7-8l2mn\" (UID: \"48530ee9-1daf-46ac-96d9-3439330122c7\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8l2mn" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.516005 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snlxn\" (UniqueName: \"kubernetes.io/projected/29de9344-e059-4080-8c9f-9d07027204f7-kube-api-access-snlxn\") pod \"csi-hostpathplugin-k58w4\" (UID: \"29de9344-e059-4080-8c9f-9d07027204f7\") " pod="hostpath-provisioner/csi-hostpathplugin-k58w4" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.516085 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9e1b6e4-4f56-4dc1-a1fb-6040e6e34b12-trusted-ca\") pod \"ingress-operator-5b745b69d9-8mkpj\" (UID: \"f9e1b6e4-4f56-4dc1-a1fb-6040e6e34b12\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8mkpj" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.516105 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f6b57ccb-043a-40ad-b510-2ed6b3683c97-srv-cert\") pod \"olm-operator-6b444d44fb-zfxvb\" (UID: \"f6b57ccb-043a-40ad-b510-2ed6b3683c97\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zfxvb" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.516138 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/7305176e-a416-41e7-8a1c-a92169a8a882-signing-key\") pod \"service-ca-9c57cc56f-2h8wl\" (UID: \"7305176e-a416-41e7-8a1c-a92169a8a882\") " pod="openshift-service-ca/service-ca-9c57cc56f-2h8wl" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.516184 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-s2gwt\" (UniqueName: \"kubernetes.io/projected/67ee4300-52d2-45bc-8420-9045db672f41-kube-api-access-s2gwt\") pod \"collect-profiles-29521365-bs68x\" (UID: \"67ee4300-52d2-45bc-8420-9045db672f41\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521365-bs68x" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.516204 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/29de9344-e059-4080-8c9f-9d07027204f7-mountpoint-dir\") pod \"csi-hostpathplugin-k58w4\" (UID: \"29de9344-e059-4080-8c9f-9d07027204f7\") " pod="hostpath-provisioner/csi-hostpathplugin-k58w4" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.516301 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5jq9\" (UniqueName: \"kubernetes.io/projected/ea816f0c-3205-4862-ace4-8caf8bc8d7a9-kube-api-access-t5jq9\") pod \"package-server-manager-789f6589d5-lwxck\" (UID: \"ea816f0c-3205-4862-ace4-8caf8bc8d7a9\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lwxck" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.516327 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f9e1b6e4-4f56-4dc1-a1fb-6040e6e34b12-metrics-tls\") pod \"ingress-operator-5b745b69d9-8mkpj\" (UID: \"f9e1b6e4-4f56-4dc1-a1fb-6040e6e34b12\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8mkpj" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.516379 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/99f280f3-e7be-4a87-b8a9-b097ab14d671-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-xrmqm\" (UID: \"99f280f3-e7be-4a87-b8a9-b097ab14d671\") " pod="openshift-marketplace/marketplace-operator-79b997595-xrmqm" Feb 16 
22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.516464 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pz2s8\" (UniqueName: \"kubernetes.io/projected/f9e1b6e4-4f56-4dc1-a1fb-6040e6e34b12-kube-api-access-pz2s8\") pod \"ingress-operator-5b745b69d9-8mkpj\" (UID: \"f9e1b6e4-4f56-4dc1-a1fb-6040e6e34b12\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8mkpj" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.516484 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7d625868-f11c-4bcd-b1f0-dbe1d50d24ba-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-f7dqg\" (UID: \"7d625868-f11c-4bcd-b1f0-dbe1d50d24ba\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f7dqg" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.516510 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7bfl\" (UniqueName: \"kubernetes.io/projected/596fa99e-76fb-4442-ae6d-7dc7e632f377-kube-api-access-f7bfl\") pod \"service-ca-operator-777779d784-hmgzd\" (UID: \"596fa99e-76fb-4442-ae6d-7dc7e632f377\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hmgzd" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.516530 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pm4jz\" (UniqueName: \"kubernetes.io/projected/46a7e06e-2a57-4c98-9cf9-36c417623bb7-kube-api-access-pm4jz\") pod \"multus-admission-controller-857f4d67dd-8lph5\" (UID: \"46a7e06e-2a57-4c98-9cf9-36c417623bb7\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-8lph5" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.526239 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/ca74042e-8e33-4ee4-aaa9-57fe06d4c710-config-volume\") pod \"dns-default-8sd85\" (UID: \"ca74042e-8e33-4ee4-aaa9-57fe06d4c710\") " pod="openshift-dns/dns-default-8sd85" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.527843 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/568e629f-a067-47f4-a6e9-ca18ca03e582-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-m696v\" (UID: \"568e629f-a067-47f4-a6e9-ca18ca03e582\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-m696v" Feb 16 22:48:22 crc kubenswrapper[4865]: E0216 22:48:22.533621 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 22:48:23.033596552 +0000 UTC m=+143.357303513 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d87nj" (UID: "0d54cdef-872b-4b15-ad66-92a5aa695143") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.539999 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ba42cc5b-5809-49fb-a5e9-d9aaa39fe272-certs\") pod \"machine-config-server-bvddw\" (UID: \"ba42cc5b-5809-49fb-a5e9-d9aaa39fe272\") " pod="openshift-machine-config-operator/machine-config-server-bvddw" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.540440 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/67ee4300-52d2-45bc-8420-9045db672f41-secret-volume\") pod \"collect-profiles-29521365-bs68x\" (UID: \"67ee4300-52d2-45bc-8420-9045db672f41\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521365-bs68x" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.545676 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/29de9344-e059-4080-8c9f-9d07027204f7-mountpoint-dir\") pod \"csi-hostpathplugin-k58w4\" (UID: \"29de9344-e059-4080-8c9f-9d07027204f7\") " pod="hostpath-provisioner/csi-hostpathplugin-k58w4" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.550372 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7d625868-f11c-4bcd-b1f0-dbe1d50d24ba-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-f7dqg\" (UID: \"7d625868-f11c-4bcd-b1f0-dbe1d50d24ba\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f7dqg" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.557046 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09255158-b7b9-4a34-8fe3-0b7864ec5f11-config\") pod \"kube-controller-manager-operator-78b949d7b-qq9vk\" (UID: \"09255158-b7b9-4a34-8fe3-0b7864ec5f11\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qq9vk" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.558898 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/29de9344-e059-4080-8c9f-9d07027204f7-plugins-dir\") pod \"csi-hostpathplugin-k58w4\" (UID: \"29de9344-e059-4080-8c9f-9d07027204f7\") " pod="hostpath-provisioner/csi-hostpathplugin-k58w4" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.563251 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/99f280f3-e7be-4a87-b8a9-b097ab14d671-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-xrmqm\" (UID: \"99f280f3-e7be-4a87-b8a9-b097ab14d671\") " pod="openshift-marketplace/marketplace-operator-79b997595-xrmqm" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.564011 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lg9r5\" (UniqueName: \"kubernetes.io/projected/852722bc-cd18-4344-b8d1-a01be5c6ea33-kube-api-access-lg9r5\") pod \"downloads-7954f5f757-t624g\" (UID: \"852722bc-cd18-4344-b8d1-a01be5c6ea33\") " pod="openshift-console/downloads-7954f5f757-t624g" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.569061 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9e1b6e4-4f56-4dc1-a1fb-6040e6e34b12-trusted-ca\") pod 
\"ingress-operator-5b745b69d9-8mkpj\" (UID: \"f9e1b6e4-4f56-4dc1-a1fb-6040e6e34b12\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8mkpj" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.570828 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5b8f\" (UniqueName: \"kubernetes.io/projected/715cd16c-2512-4885-ae7b-437fd61fcea2-kube-api-access-x5b8f\") pod \"dns-operator-744455d44c-b76ph\" (UID: \"715cd16c-2512-4885-ae7b-437fd61fcea2\") " pod="openshift-dns-operator/dns-operator-744455d44c-b76ph" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.570994 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-675st" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.572125 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/7305176e-a416-41e7-8a1c-a92169a8a882-signing-key\") pod \"service-ca-9c57cc56f-2h8wl\" (UID: \"7305176e-a416-41e7-8a1c-a92169a8a882\") " pod="openshift-service-ca/service-ca-9c57cc56f-2h8wl" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.572579 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/526bdcb9-a06a-4d6a-9793-078e20245455-profile-collector-cert\") pod \"catalog-operator-68c6474976-z5w9k\" (UID: \"526bdcb9-a06a-4d6a-9793-078e20245455\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-z5w9k" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.573222 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f9e1b6e4-4f56-4dc1-a1fb-6040e6e34b12-metrics-tls\") pod \"ingress-operator-5b745b69d9-8mkpj\" (UID: \"f9e1b6e4-4f56-4dc1-a1fb-6040e6e34b12\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8mkpj" Feb 16 22:48:22 crc 
kubenswrapper[4865]: I0216 22:48:22.574621 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/ea816f0c-3205-4862-ace4-8caf8bc8d7a9-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-lwxck\" (UID: \"ea816f0c-3205-4862-ace4-8caf8bc8d7a9\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lwxck" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.575964 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0d54cdef-872b-4b15-ad66-92a5aa695143-bound-sa-token\") pod \"image-registry-697d97f7c8-d87nj\" (UID: \"0d54cdef-872b-4b15-ad66-92a5aa695143\") " pod="openshift-image-registry/image-registry-697d97f7c8-d87nj" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.577060 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ls5tt" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.580550 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/526bdcb9-a06a-4d6a-9793-078e20245455-srv-cert\") pod \"catalog-operator-68c6474976-z5w9k\" (UID: \"526bdcb9-a06a-4d6a-9793-078e20245455\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-z5w9k" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.586650 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/596fa99e-76fb-4442-ae6d-7dc7e632f377-serving-cert\") pod \"service-ca-operator-777779d784-hmgzd\" (UID: \"596fa99e-76fb-4442-ae6d-7dc7e632f377\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hmgzd" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.586873 4865 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f6b57ccb-043a-40ad-b510-2ed6b3683c97-srv-cert\") pod \"olm-operator-6b444d44fb-zfxvb\" (UID: \"f6b57ccb-043a-40ad-b510-2ed6b3683c97\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zfxvb" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.587473 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/99f280f3-e7be-4a87-b8a9-b097ab14d671-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-xrmqm\" (UID: \"99f280f3-e7be-4a87-b8a9-b097ab14d671\") " pod="openshift-marketplace/marketplace-operator-79b997595-xrmqm" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.588454 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/46a7e06e-2a57-4c98-9cf9-36c417623bb7-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-8lph5\" (UID: \"46a7e06e-2a57-4c98-9cf9-36c417623bb7\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-8lph5" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.588912 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7d625868-f11c-4bcd-b1f0-dbe1d50d24ba-proxy-tls\") pod \"machine-config-controller-84d6567774-f7dqg\" (UID: \"7d625868-f11c-4bcd-b1f0-dbe1d50d24ba\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f7dqg" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.589263 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/568e629f-a067-47f4-a6e9-ca18ca03e582-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-m696v\" (UID: \"568e629f-a067-47f4-a6e9-ca18ca03e582\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-m696v" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.589412 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ba42cc5b-5809-49fb-a5e9-d9aaa39fe272-node-bootstrap-token\") pod \"machine-config-server-bvddw\" (UID: \"ba42cc5b-5809-49fb-a5e9-d9aaa39fe272\") " pod="openshift-machine-config-operator/machine-config-server-bvddw" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.589722 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/572227eb-5d34-4fef-a0ea-f01abc14b398-cert\") pod \"ingress-canary-fz5qr\" (UID: \"572227eb-5d34-4fef-a0ea-f01abc14b398\") " pod="openshift-ingress-canary/ingress-canary-fz5qr" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.589934 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7h5d4\" (UniqueName: \"kubernetes.io/projected/5cfba6b6-3d1e-49d9-902e-b3493e1ffc97-kube-api-access-7h5d4\") pod \"control-plane-machine-set-operator-78cbb6b69f-qcl4t\" (UID: \"5cfba6b6-3d1e-49d9-902e-b3493e1ffc97\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qcl4t" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.590523 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fcbe8037-e83d-4581-9759-a7accf45937a-webhook-cert\") pod \"packageserver-d55dfcdfc-6zkrs\" (UID: \"fcbe8037-e83d-4581-9759-a7accf45937a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6zkrs" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.591108 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-d2vf7" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.591316 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09255158-b7b9-4a34-8fe3-0b7864ec5f11-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-qq9vk\" (UID: \"09255158-b7b9-4a34-8fe3-0b7864ec5f11\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qq9vk" Feb 16 22:48:22 crc kubenswrapper[4865]: W0216 22:48:22.593388 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod679f6150_3ecb_437d_81bc_9877ad5c3cc4.slice/crio-79362843e111cb41fa3710e0f9a933b24630d83360b4477377be3d7f8516177e WatchSource:0}: Error finding container 79362843e111cb41fa3710e0f9a933b24630d83360b4477377be3d7f8516177e: Status 404 returned error can't find the container with id 79362843e111cb41fa3710e0f9a933b24630d83360b4477377be3d7f8516177e Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.593994 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ca74042e-8e33-4ee4-aaa9-57fe06d4c710-metrics-tls\") pod \"dns-default-8sd85\" (UID: \"ca74042e-8e33-4ee4-aaa9-57fe06d4c710\") " pod="openshift-dns/dns-default-8sd85" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.595977 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f6b57ccb-043a-40ad-b510-2ed6b3683c97-profile-collector-cert\") pod \"olm-operator-6b444d44fb-zfxvb\" (UID: \"f6b57ccb-043a-40ad-b510-2ed6b3683c97\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zfxvb" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.602551 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-z546m\" (UniqueName: \"kubernetes.io/projected/be88c107-f404-4d93-b59e-471220c045ec-kube-api-access-z546m\") pod \"console-operator-58897d9998-v8bqq\" (UID: \"be88c107-f404-4d93-b59e-471220c045ec\") " pod="openshift-console-operator/console-operator-58897d9998-v8bqq" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.603117 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fcbe8037-e83d-4581-9759-a7accf45937a-apiservice-cert\") pod \"packageserver-d55dfcdfc-6zkrs\" (UID: \"fcbe8037-e83d-4581-9759-a7accf45937a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6zkrs" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.613741 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qcl4t" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.617611 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 22:48:22 crc kubenswrapper[4865]: E0216 22:48:22.620779 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 22:48:23.12075604 +0000 UTC m=+143.444463001 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.621707 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrmgs\" (UniqueName: \"kubernetes.io/projected/572227eb-5d34-4fef-a0ea-f01abc14b398-kube-api-access-qrmgs\") pod \"ingress-canary-fz5qr\" (UID: \"572227eb-5d34-4fef-a0ea-f01abc14b398\") " pod="openshift-ingress-canary/ingress-canary-fz5qr" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.639696 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lp6bz\" (UniqueName: \"kubernetes.io/projected/568e629f-a067-47f4-a6e9-ca18ca03e582-kube-api-access-lp6bz\") pod \"kube-storage-version-migrator-operator-b67b599dd-m696v\" (UID: \"568e629f-a067-47f4-a6e9-ca18ca03e582\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-m696v" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.641297 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8tpfm"] Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.650191 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmk2h\" (UniqueName: \"kubernetes.io/projected/ca74042e-8e33-4ee4-aaa9-57fe06d4c710-kube-api-access-tmk2h\") pod \"dns-default-8sd85\" (UID: \"ca74042e-8e33-4ee4-aaa9-57fe06d4c710\") " pod="openshift-dns/dns-default-8sd85" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.662657 4865 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-8sd85" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.677887 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxp9m\" (UniqueName: \"kubernetes.io/projected/526bdcb9-a06a-4d6a-9793-078e20245455-kube-api-access-vxp9m\") pod \"catalog-operator-68c6474976-z5w9k\" (UID: \"526bdcb9-a06a-4d6a-9793-078e20245455\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-z5w9k" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.686461 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-dqspk"] Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.689003 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxm52\" (UniqueName: \"kubernetes.io/projected/f6b57ccb-043a-40ad-b510-2ed6b3683c97-kube-api-access-dxm52\") pod \"olm-operator-6b444d44fb-zfxvb\" (UID: \"f6b57ccb-043a-40ad-b510-2ed6b3683c97\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zfxvb" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.711519 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pm4jz\" (UniqueName: \"kubernetes.io/projected/46a7e06e-2a57-4c98-9cf9-36c417623bb7-kube-api-access-pm4jz\") pod \"multus-admission-controller-857f4d67dd-8lph5\" (UID: \"46a7e06e-2a57-4c98-9cf9-36c417623bb7\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-8lph5" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.729743 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zv6gg\" (UniqueName: \"kubernetes.io/projected/99f280f3-e7be-4a87-b8a9-b097ab14d671-kube-api-access-zv6gg\") pod \"marketplace-operator-79b997595-xrmqm\" (UID: \"99f280f3-e7be-4a87-b8a9-b097ab14d671\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-xrmqm" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.730968 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d87nj\" (UID: \"0d54cdef-872b-4b15-ad66-92a5aa695143\") " pod="openshift-image-registry/image-registry-697d97f7c8-d87nj" Feb 16 22:48:22 crc kubenswrapper[4865]: E0216 22:48:22.731592 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 22:48:23.231579606 +0000 UTC m=+143.555286567 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d87nj" (UID: "0d54cdef-872b-4b15-ad66-92a5aa695143") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.731913 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-z5w9k" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.754017 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xw6tm\" (UniqueName: \"kubernetes.io/projected/fcbe8037-e83d-4581-9759-a7accf45937a-kube-api-access-xw6tm\") pod \"packageserver-d55dfcdfc-6zkrs\" (UID: \"fcbe8037-e83d-4581-9759-a7accf45937a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6zkrs" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.778097 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prfpt\" (UniqueName: \"kubernetes.io/projected/48530ee9-1daf-46ac-96d9-3439330122c7-kube-api-access-prfpt\") pod \"migrator-59844c95c7-8l2mn\" (UID: \"48530ee9-1daf-46ac-96d9-3439330122c7\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8l2mn" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.785829 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-fz5qr" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.792271 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snlxn\" (UniqueName: \"kubernetes.io/projected/29de9344-e059-4080-8c9f-9d07027204f7-kube-api-access-snlxn\") pod \"csi-hostpathplugin-k58w4\" (UID: \"29de9344-e059-4080-8c9f-9d07027204f7\") " pod="hostpath-provisioner/csi-hostpathplugin-k58w4" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.832231 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 22:48:22 crc kubenswrapper[4865]: E0216 22:48:22.832732 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 22:48:23.332681497 +0000 UTC m=+143.656388458 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.833180 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d87nj\" (UID: \"0d54cdef-872b-4b15-ad66-92a5aa695143\") " pod="openshift-image-registry/image-registry-697d97f7c8-d87nj" Feb 16 22:48:22 crc kubenswrapper[4865]: E0216 22:48:22.834016 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 22:48:23.334005644 +0000 UTC m=+143.657712605 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d87nj" (UID: "0d54cdef-872b-4b15-ad66-92a5aa695143") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.835338 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rk8kp\" (UniqueName: \"kubernetes.io/projected/7d625868-f11c-4bcd-b1f0-dbe1d50d24ba-kube-api-access-rk8kp\") pod \"machine-config-controller-84d6567774-f7dqg\" (UID: \"7d625868-f11c-4bcd-b1f0-dbe1d50d24ba\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f7dqg" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.837455 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2gwt\" (UniqueName: \"kubernetes.io/projected/67ee4300-52d2-45bc-8420-9045db672f41-kube-api-access-s2gwt\") pod \"collect-profiles-29521365-bs68x\" (UID: \"67ee4300-52d2-45bc-8420-9045db672f41\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521365-bs68x" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.848118 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-t624g" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.860224 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pz2s8\" (UniqueName: \"kubernetes.io/projected/f9e1b6e4-4f56-4dc1-a1fb-6040e6e34b12-kube-api-access-pz2s8\") pod \"ingress-operator-5b745b69d9-8mkpj\" (UID: \"f9e1b6e4-4f56-4dc1-a1fb-6040e6e34b12\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8mkpj" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.862742 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-v8bqq" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.872653 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-b76ph" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.873604 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/09255158-b7b9-4a34-8fe3-0b7864ec5f11-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-qq9vk\" (UID: \"09255158-b7b9-4a34-8fe3-0b7864ec5f11\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qq9vk" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.897618 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ls5tt"] Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.906299 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f9e1b6e4-4f56-4dc1-a1fb-6040e6e34b12-bound-sa-token\") pod \"ingress-operator-5b745b69d9-8mkpj\" (UID: \"f9e1b6e4-4f56-4dc1-a1fb-6040e6e34b12\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8mkpj" Feb 16 22:48:22 crc 
kubenswrapper[4865]: I0216 22:48:22.916089 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5jq9\" (UniqueName: \"kubernetes.io/projected/ea816f0c-3205-4862-ace4-8caf8bc8d7a9-kube-api-access-t5jq9\") pod \"package-server-manager-789f6589d5-lwxck\" (UID: \"ea816f0c-3205-4862-ace4-8caf8bc8d7a9\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lwxck" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.920884 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f7dqg" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.929336 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-m696v" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.934849 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 22:48:22 crc kubenswrapper[4865]: E0216 22:48:22.935269 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 22:48:23.435251349 +0000 UTC m=+143.758958310 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.938926 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qq9vk" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.945739 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zfxvb" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.946030 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7bfl\" (UniqueName: \"kubernetes.io/projected/596fa99e-76fb-4442-ae6d-7dc7e632f377-kube-api-access-f7bfl\") pod \"service-ca-operator-777779d784-hmgzd\" (UID: \"596fa99e-76fb-4442-ae6d-7dc7e632f377\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hmgzd" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.953740 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8mkpj" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.966110 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqvw5\" (UniqueName: \"kubernetes.io/projected/7305176e-a416-41e7-8a1c-a92169a8a882-kube-api-access-qqvw5\") pod \"service-ca-9c57cc56f-2h8wl\" (UID: \"7305176e-a416-41e7-8a1c-a92169a8a882\") " pod="openshift-service-ca/service-ca-9c57cc56f-2h8wl" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.969518 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8bcf\" (UniqueName: \"kubernetes.io/projected/ba42cc5b-5809-49fb-a5e9-d9aaa39fe272-kube-api-access-z8bcf\") pod \"machine-config-server-bvddw\" (UID: \"ba42cc5b-5809-49fb-a5e9-d9aaa39fe272\") " pod="openshift-machine-config-operator/machine-config-server-bvddw" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.970102 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-xrmqm" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.979020 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6zkrs" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.994838 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-8lph5" Feb 16 22:48:22 crc kubenswrapper[4865]: I0216 22:48:22.996257 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8l2mn" Feb 16 22:48:23 crc kubenswrapper[4865]: I0216 22:48:23.003152 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-2h8wl" Feb 16 22:48:23 crc kubenswrapper[4865]: I0216 22:48:23.012451 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lwxck" Feb 16 22:48:23 crc kubenswrapper[4865]: I0216 22:48:23.024921 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-hmgzd" Feb 16 22:48:23 crc kubenswrapper[4865]: I0216 22:48:23.036740 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d87nj\" (UID: \"0d54cdef-872b-4b15-ad66-92a5aa695143\") " pod="openshift-image-registry/image-registry-697d97f7c8-d87nj" Feb 16 22:48:23 crc kubenswrapper[4865]: E0216 22:48:23.037157 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 22:48:23.537142512 +0000 UTC m=+143.860849473 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d87nj" (UID: "0d54cdef-872b-4b15-ad66-92a5aa695143") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:48:23 crc kubenswrapper[4865]: I0216 22:48:23.045076 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521365-bs68x" Feb 16 22:48:23 crc kubenswrapper[4865]: I0216 22:48:23.069137 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-k58w4" Feb 16 22:48:23 crc kubenswrapper[4865]: I0216 22:48:23.090430 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-bvddw" Feb 16 22:48:23 crc kubenswrapper[4865]: I0216 22:48:23.109032 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qcl4t"] Feb 16 22:48:23 crc kubenswrapper[4865]: I0216 22:48:23.137917 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 22:48:23 crc kubenswrapper[4865]: E0216 22:48:23.138247 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 22:48:23.638218743 +0000 UTC m=+143.961925704 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:48:23 crc kubenswrapper[4865]: I0216 22:48:23.139522 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d87nj\" (UID: \"0d54cdef-872b-4b15-ad66-92a5aa695143\") " pod="openshift-image-registry/image-registry-697d97f7c8-d87nj" Feb 16 22:48:23 crc kubenswrapper[4865]: E0216 22:48:23.140130 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 22:48:23.640113676 +0000 UTC m=+143.963820637 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d87nj" (UID: "0d54cdef-872b-4b15-ad66-92a5aa695143") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:48:23 crc kubenswrapper[4865]: W0216 22:48:23.164791 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod47dbdc59_3676_456b_b077_8943ea216379.slice/crio-b8f6633b8ab71be99430d43dc1217f73a54b9518ffcd3b3fedcfc3208798772d WatchSource:0}: Error finding container b8f6633b8ab71be99430d43dc1217f73a54b9518ffcd3b3fedcfc3208798772d: Status 404 returned error can't find the container with id b8f6633b8ab71be99430d43dc1217f73a54b9518ffcd3b3fedcfc3208798772d Feb 16 22:48:23 crc kubenswrapper[4865]: I0216 22:48:23.241107 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 22:48:23 crc kubenswrapper[4865]: E0216 22:48:23.241680 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 22:48:23.741646299 +0000 UTC m=+144.065353260 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:48:23 crc kubenswrapper[4865]: I0216 22:48:23.344099 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d87nj\" (UID: \"0d54cdef-872b-4b15-ad66-92a5aa695143\") " pod="openshift-image-registry/image-registry-697d97f7c8-d87nj" Feb 16 22:48:23 crc kubenswrapper[4865]: E0216 22:48:23.344580 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 22:48:23.844555301 +0000 UTC m=+144.168262262 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d87nj" (UID: "0d54cdef-872b-4b15-ad66-92a5aa695143") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:48:23 crc kubenswrapper[4865]: I0216 22:48:23.350785 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-z5w9k"] Feb 16 22:48:23 crc kubenswrapper[4865]: I0216 22:48:23.425812 4865 csr.go:261] certificate signing request csr-tw6ts is approved, waiting to be issued Feb 16 22:48:23 crc kubenswrapper[4865]: I0216 22:48:23.435226 4865 csr.go:257] certificate signing request csr-tw6ts is issued Feb 16 22:48:23 crc kubenswrapper[4865]: I0216 22:48:23.448894 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 22:48:23 crc kubenswrapper[4865]: E0216 22:48:23.449477 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 22:48:23.94945534 +0000 UTC m=+144.273162301 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:48:23 crc kubenswrapper[4865]: I0216 22:48:23.477074 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tztwm" event={"ID":"679f6150-3ecb-437d-81bc-9877ad5c3cc4","Type":"ContainerStarted","Data":"79362843e111cb41fa3710e0f9a933b24630d83360b4477377be3d7f8516177e"} Feb 16 22:48:23 crc kubenswrapper[4865]: I0216 22:48:23.491057 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dqspk" event={"ID":"046ad19a-b9d5-4b51-ab15-dd974861f490","Type":"ContainerStarted","Data":"ce4a0bb1c12b86b01b11961bf92871a3048c9eed788159ca35197d366907509f"} Feb 16 22:48:23 crc kubenswrapper[4865]: I0216 22:48:23.545711 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-669dd" event={"ID":"fe97f069-4baa-4f14-a478-75fe5cccefa0","Type":"ContainerStarted","Data":"c02b82b161a5f1a35a179cf8e8e7475bfafe166faf83236d5bd1116f6de18ad4"} Feb 16 22:48:23 crc kubenswrapper[4865]: I0216 22:48:23.545785 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-669dd" event={"ID":"fe97f069-4baa-4f14-a478-75fe5cccefa0","Type":"ContainerStarted","Data":"91d99cc8101c409a27beacf429ba40a60f0e4f8fe631d647590c10982e919401"} Feb 16 22:48:23 crc kubenswrapper[4865]: I0216 22:48:23.551091 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d87nj\" (UID: \"0d54cdef-872b-4b15-ad66-92a5aa695143\") " pod="openshift-image-registry/image-registry-697d97f7c8-d87nj" Feb 16 22:48:23 crc kubenswrapper[4865]: E0216 22:48:23.551469 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 22:48:24.051454856 +0000 UTC m=+144.375161817 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d87nj" (UID: "0d54cdef-872b-4b15-ad66-92a5aa695143") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:48:23 crc kubenswrapper[4865]: I0216 22:48:23.571191 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ls5tt" event={"ID":"47dbdc59-3676-456b-b077-8943ea216379","Type":"ContainerStarted","Data":"b8f6633b8ab71be99430d43dc1217f73a54b9518ffcd3b3fedcfc3208798772d"} Feb 16 22:48:23 crc kubenswrapper[4865]: I0216 22:48:23.618388 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-xcchk" event={"ID":"d673e2eb-0082-4b86-91c5-5d83376c2275","Type":"ContainerStarted","Data":"56f59914e7d91d079142a0215984e4981858647db25a3c950441217202ae8d69"} Feb 16 22:48:23 crc kubenswrapper[4865]: I0216 22:48:23.618577 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-xcchk" Feb 16 22:48:23 crc kubenswrapper[4865]: I0216 22:48:23.628867 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-89cf9" event={"ID":"a0e99571-69de-43a8-9136-94d455e348c7","Type":"ContainerStarted","Data":"0a8602747095f7c26fcd578a6cec23722842e69d880ff4411969c3cb26cac6f9"} Feb 16 22:48:23 crc kubenswrapper[4865]: I0216 22:48:23.633802 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-d2vf7" event={"ID":"ae87db3d-c73d-4e96-8cda-ca3ac846f9b5","Type":"ContainerStarted","Data":"5bb3371be517167746c8d841a9e33e80f39b5fce041183b8fc845dfbd6579695"} Feb 16 22:48:23 crc kubenswrapper[4865]: I0216 22:48:23.637551 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qcl4t" event={"ID":"5cfba6b6-3d1e-49d9-902e-b3493e1ffc97","Type":"ContainerStarted","Data":"ba60072914d39cadc8ce0f002fe717666c93444d5943c766c9893c54d8c23d4a"} Feb 16 22:48:23 crc kubenswrapper[4865]: I0216 22:48:23.643163 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8tpfm" event={"ID":"2e7341df-56ba-4c50-8ddf-68a54b11340b","Type":"ContainerStarted","Data":"1596ac55a87bb45815122d649f0af58a7b596be9f3204f7caeca1fc1d9222f69"} Feb 16 22:48:23 crc kubenswrapper[4865]: I0216 22:48:23.650458 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-pzmrk" Feb 16 22:48:23 crc kubenswrapper[4865]: I0216 22:48:23.651950 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 22:48:23 crc kubenswrapper[4865]: E0216 22:48:23.652259 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 22:48:24.152238348 +0000 UTC m=+144.475945309 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:48:23 crc kubenswrapper[4865]: I0216 22:48:23.755513 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d87nj\" (UID: \"0d54cdef-872b-4b15-ad66-92a5aa695143\") " pod="openshift-image-registry/image-registry-697d97f7c8-d87nj" Feb 16 22:48:23 crc kubenswrapper[4865]: E0216 22:48:23.759430 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 22:48:24.25941237 +0000 UTC m=+144.583119331 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d87nj" (UID: "0d54cdef-872b-4b15-ad66-92a5aa695143") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:48:23 crc kubenswrapper[4865]: I0216 22:48:23.787010 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-675st"] Feb 16 22:48:23 crc kubenswrapper[4865]: I0216 22:48:23.789648 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-fz5qr"] Feb 16 22:48:23 crc kubenswrapper[4865]: I0216 22:48:23.859799 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 22:48:23 crc kubenswrapper[4865]: E0216 22:48:23.862760 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 22:48:24.362736194 +0000 UTC m=+144.686443155 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:48:23 crc kubenswrapper[4865]: I0216 22:48:23.871137 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d87nj\" (UID: \"0d54cdef-872b-4b15-ad66-92a5aa695143\") " pod="openshift-image-registry/image-registry-697d97f7c8-d87nj" Feb 16 22:48:23 crc kubenswrapper[4865]: E0216 22:48:23.871584 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 22:48:24.371569913 +0000 UTC m=+144.695276864 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d87nj" (UID: "0d54cdef-872b-4b15-ad66-92a5aa695143") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:48:23 crc kubenswrapper[4865]: I0216 22:48:23.974852 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 22:48:23 crc kubenswrapper[4865]: E0216 22:48:23.975936 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 22:48:24.475912966 +0000 UTC m=+144.799619927 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:48:24 crc kubenswrapper[4865]: I0216 22:48:24.077188 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d87nj\" (UID: \"0d54cdef-872b-4b15-ad66-92a5aa695143\") " pod="openshift-image-registry/image-registry-697d97f7c8-d87nj" Feb 16 22:48:24 crc kubenswrapper[4865]: E0216 22:48:24.077610 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 22:48:24.577594513 +0000 UTC m=+144.901301464 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d87nj" (UID: "0d54cdef-872b-4b15-ad66-92a5aa695143") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:48:24 crc kubenswrapper[4865]: I0216 22:48:24.179943 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 22:48:24 crc kubenswrapper[4865]: E0216 22:48:24.180499 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 22:48:24.680479315 +0000 UTC m=+145.004186276 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:48:24 crc kubenswrapper[4865]: I0216 22:48:24.282139 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d87nj\" (UID: \"0d54cdef-872b-4b15-ad66-92a5aa695143\") " pod="openshift-image-registry/image-registry-697d97f7c8-d87nj" Feb 16 22:48:24 crc kubenswrapper[4865]: E0216 22:48:24.282482 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 22:48:24.782469191 +0000 UTC m=+145.106176152 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d87nj" (UID: "0d54cdef-872b-4b15-ad66-92a5aa695143") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:48:24 crc kubenswrapper[4865]: I0216 22:48:24.383987 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 22:48:24 crc kubenswrapper[4865]: E0216 22:48:24.384313 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 22:48:24.884297812 +0000 UTC m=+145.208004773 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:48:24 crc kubenswrapper[4865]: I0216 22:48:24.436249 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-16 22:43:23 +0000 UTC, rotation deadline is 2026-12-12 20:45:28.325158164 +0000 UTC Feb 16 22:48:24 crc kubenswrapper[4865]: I0216 22:48:24.436516 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7173h57m3.888646269s for next certificate rotation Feb 16 22:48:24 crc kubenswrapper[4865]: I0216 22:48:24.481143 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-b5kw7" podStartSLOduration=124.481115113 podStartE2EDuration="2m4.481115113s" podCreationTimestamp="2026-02-16 22:46:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:48:24.480954508 +0000 UTC m=+144.804661469" watchObservedRunningTime="2026-02-16 22:48:24.481115113 +0000 UTC m=+144.804822074" Feb 16 22:48:24 crc kubenswrapper[4865]: I0216 22:48:24.486205 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d87nj\" (UID: \"0d54cdef-872b-4b15-ad66-92a5aa695143\") " pod="openshift-image-registry/image-registry-697d97f7c8-d87nj" Feb 16 22:48:24 crc kubenswrapper[4865]: E0216 22:48:24.486722 4865 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 22:48:24.98670232 +0000 UTC m=+145.310409271 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d87nj" (UID: "0d54cdef-872b-4b15-ad66-92a5aa695143") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:48:24 crc kubenswrapper[4865]: I0216 22:48:24.529990 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-669dd" podStartSLOduration=124.52996655 podStartE2EDuration="2m4.52996655s" podCreationTimestamp="2026-02-16 22:46:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:48:24.528684154 +0000 UTC m=+144.852391115" watchObservedRunningTime="2026-02-16 22:48:24.52996655 +0000 UTC m=+144.853673501" Feb 16 22:48:24 crc kubenswrapper[4865]: I0216 22:48:24.567399 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-676jr" podStartSLOduration=124.567377255 podStartE2EDuration="2m4.567377255s" podCreationTimestamp="2026-02-16 22:46:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:48:24.565716998 +0000 UTC m=+144.889423959" watchObservedRunningTime="2026-02-16 22:48:24.567377255 +0000 UTC m=+144.891084216" Feb 16 22:48:24 crc kubenswrapper[4865]: I0216 
22:48:24.587932 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 22:48:24 crc kubenswrapper[4865]: E0216 22:48:24.588540 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 22:48:25.088516751 +0000 UTC m=+145.412223712 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:48:24 crc kubenswrapper[4865]: I0216 22:48:24.646432 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bhtg8" podStartSLOduration=124.646408674 podStartE2EDuration="2m4.646408674s" podCreationTimestamp="2026-02-16 22:46:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:48:24.643561014 +0000 UTC m=+144.967267975" watchObservedRunningTime="2026-02-16 22:48:24.646408674 +0000 UTC m=+144.970115635" Feb 16 22:48:24 crc kubenswrapper[4865]: I0216 22:48:24.677942 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-675st" 
event={"ID":"25c0299c-b4a2-4c82-881f-808b610fb325","Type":"ContainerStarted","Data":"9436d27baf44397fd917070ac16ce796acfa207c595aa33d2a0e37ccf8edbe36"} Feb 16 22:48:24 crc kubenswrapper[4865]: I0216 22:48:24.681015 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dqspk" event={"ID":"046ad19a-b9d5-4b51-ab15-dd974861f490","Type":"ContainerStarted","Data":"41544bbbeaaba9d2230a8b57e999bac21511101c31ea0034179ab1d9d59f229a"} Feb 16 22:48:24 crc kubenswrapper[4865]: I0216 22:48:24.681057 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dqspk" event={"ID":"046ad19a-b9d5-4b51-ab15-dd974861f490","Type":"ContainerStarted","Data":"bfd9a768ee1f15bd10df433f74131533d7ca9638c6daa6512a354a0c82b82973"} Feb 16 22:48:24 crc kubenswrapper[4865]: I0216 22:48:24.693514 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d87nj\" (UID: \"0d54cdef-872b-4b15-ad66-92a5aa695143\") " pod="openshift-image-registry/image-registry-697d97f7c8-d87nj" Feb 16 22:48:24 crc kubenswrapper[4865]: E0216 22:48:24.693900 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 22:48:25.193886283 +0000 UTC m=+145.517593254 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d87nj" (UID: "0d54cdef-872b-4b15-ad66-92a5aa695143") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:48:24 crc kubenswrapper[4865]: I0216 22:48:24.702523 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tztwm" event={"ID":"679f6150-3ecb-437d-81bc-9877ad5c3cc4","Type":"ContainerStarted","Data":"20d921add1e6a703f02906c61d9d58448163541e3f1feefa9f4613e6a9d1bc69"} Feb 16 22:48:24 crc kubenswrapper[4865]: I0216 22:48:24.703218 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tztwm" Feb 16 22:48:24 crc kubenswrapper[4865]: I0216 22:48:24.713912 4865 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-tztwm container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Feb 16 22:48:24 crc kubenswrapper[4865]: I0216 22:48:24.713982 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tztwm" podUID="679f6150-3ecb-437d-81bc-9877ad5c3cc4" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Feb 16 22:48:24 crc kubenswrapper[4865]: I0216 22:48:24.732908 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-8sd85"] Feb 16 22:48:24 crc 
kubenswrapper[4865]: I0216 22:48:24.735235 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8dbg" event={"ID":"b00a412f-f2d5-4ad5-a9e5-67efc9e40682","Type":"ContainerStarted","Data":"5bf35089d17784d06069468e9a5a719bd6431acfe343f2d614ce3330abbcfdb6"} Feb 16 22:48:24 crc kubenswrapper[4865]: I0216 22:48:24.760067 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-fz5qr" event={"ID":"572227eb-5d34-4fef-a0ea-f01abc14b398","Type":"ContainerStarted","Data":"29a103405ecd1f282f3a38e8b68cec2d27918a49cf1c4bd0b1f479e28cb08072"} Feb 16 22:48:24 crc kubenswrapper[4865]: I0216 22:48:24.770970 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-fz5qr" event={"ID":"572227eb-5d34-4fef-a0ea-f01abc14b398","Type":"ContainerStarted","Data":"9177dbf3a6d9ee45a06b966c7f0fbe0c76c64de5ef14549001143530175ea54b"} Feb 16 22:48:24 crc kubenswrapper[4865]: I0216 22:48:24.778624 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-xcchk" podStartSLOduration=124.778597322 podStartE2EDuration="2m4.778597322s" podCreationTimestamp="2026-02-16 22:46:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:48:24.777616834 +0000 UTC m=+145.101323795" watchObservedRunningTime="2026-02-16 22:48:24.778597322 +0000 UTC m=+145.102304283" Feb 16 22:48:24 crc kubenswrapper[4865]: I0216 22:48:24.801037 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 22:48:24 crc kubenswrapper[4865]: E0216 
22:48:24.803003 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 22:48:25.302954999 +0000 UTC m=+145.626661960 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:48:24 crc kubenswrapper[4865]: I0216 22:48:24.807236 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qcl4t" event={"ID":"5cfba6b6-3d1e-49d9-902e-b3493e1ffc97","Type":"ContainerStarted","Data":"6aa27301e28b44d35eed9f80d09452060903441d9fcd9967564d91cff67c3f59"} Feb 16 22:48:24 crc kubenswrapper[4865]: I0216 22:48:24.812601 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d87nj\" (UID: \"0d54cdef-872b-4b15-ad66-92a5aa695143\") " pod="openshift-image-registry/image-registry-697d97f7c8-d87nj" Feb 16 22:48:24 crc kubenswrapper[4865]: E0216 22:48:24.814080 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 22:48:25.314062062 +0000 UTC m=+145.637769013 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d87nj" (UID: "0d54cdef-872b-4b15-ad66-92a5aa695143") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:48:24 crc kubenswrapper[4865]: I0216 22:48:24.834592 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-5m56v" podStartSLOduration=124.83455381 podStartE2EDuration="2m4.83455381s" podCreationTimestamp="2026-02-16 22:46:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:48:24.813800404 +0000 UTC m=+145.137507355" watchObservedRunningTime="2026-02-16 22:48:24.83455381 +0000 UTC m=+145.158260771" Feb 16 22:48:24 crc kubenswrapper[4865]: I0216 22:48:24.837041 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8tpfm" event={"ID":"2e7341df-56ba-4c50-8ddf-68a54b11340b","Type":"ContainerStarted","Data":"c6f3bd4f9368dd595658b6825555339516d7781c5bc224e7c39b95bfae08d2b3"} Feb 16 22:48:24 crc kubenswrapper[4865]: I0216 22:48:24.857309 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-pzmrk" podStartSLOduration=124.85726481 podStartE2EDuration="2m4.85726481s" podCreationTimestamp="2026-02-16 22:46:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:48:24.856703294 +0000 UTC m=+145.180410265" watchObservedRunningTime="2026-02-16 22:48:24.85726481 +0000 UTC m=+145.180971781" Feb 16 22:48:24 
crc kubenswrapper[4865]: I0216 22:48:24.860542 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-89cf9" event={"ID":"a0e99571-69de-43a8-9136-94d455e348c7","Type":"ContainerStarted","Data":"f49af166720c7b48b1e4ecb8b51cb838e7d3389dc30289a45ce22a74be6cead8"} Feb 16 22:48:24 crc kubenswrapper[4865]: I0216 22:48:24.871768 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-z5w9k" event={"ID":"526bdcb9-a06a-4d6a-9793-078e20245455","Type":"ContainerStarted","Data":"4a2b8d70fbbc2c353add48b90929900225211488ec81538402d48b94d1d65565"} Feb 16 22:48:24 crc kubenswrapper[4865]: I0216 22:48:24.871817 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-z5w9k" event={"ID":"526bdcb9-a06a-4d6a-9793-078e20245455","Type":"ContainerStarted","Data":"25c5dcf506096f30573e76336511209eb72f0550e1eea897e93e42e8cce9494c"} Feb 16 22:48:24 crc kubenswrapper[4865]: I0216 22:48:24.875192 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-z5w9k" Feb 16 22:48:24 crc kubenswrapper[4865]: I0216 22:48:24.885164 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-bvddw" event={"ID":"ba42cc5b-5809-49fb-a5e9-d9aaa39fe272","Type":"ContainerStarted","Data":"b9eeb22d0266ad6ea8df88ee81e295b79d74b0d7a1acaedbabcb11146808a865"} Feb 16 22:48:24 crc kubenswrapper[4865]: I0216 22:48:24.885214 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-bvddw" event={"ID":"ba42cc5b-5809-49fb-a5e9-d9aaa39fe272","Type":"ContainerStarted","Data":"adff70b4b9558b3f0ab75c75edf3398a6fbcfdaccc8a5a8fed6243ad5cfa8480"} Feb 16 22:48:24 crc kubenswrapper[4865]: I0216 22:48:24.890792 4865 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-ingress/router-default-5444994796-d2vf7" event={"ID":"ae87db3d-c73d-4e96-8cda-ca3ac846f9b5","Type":"ContainerStarted","Data":"0706b2a0b9c0692b113a2acd3aad5c49ebf775673e65e2426608f84e7befda34"} Feb 16 22:48:24 crc kubenswrapper[4865]: I0216 22:48:24.914990 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vgbjc" podStartSLOduration=124.914962097 podStartE2EDuration="2m4.914962097s" podCreationTimestamp="2026-02-16 22:46:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:48:24.912248651 +0000 UTC m=+145.235955612" watchObservedRunningTime="2026-02-16 22:48:24.914962097 +0000 UTC m=+145.238669058" Feb 16 22:48:24 crc kubenswrapper[4865]: I0216 22:48:24.919531 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kqnz8" podStartSLOduration=124.919508995 podStartE2EDuration="2m4.919508995s" podCreationTimestamp="2026-02-16 22:46:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:48:24.883816589 +0000 UTC m=+145.207523550" watchObservedRunningTime="2026-02-16 22:48:24.919508995 +0000 UTC m=+145.243215956" Feb 16 22:48:24 crc kubenswrapper[4865]: I0216 22:48:24.921707 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 22:48:24 crc kubenswrapper[4865]: E0216 22:48:24.923703 4865 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 22:48:25.423659002 +0000 UTC m=+145.747365963 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:48:24 crc kubenswrapper[4865]: I0216 22:48:24.947695 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-f6stv" podStartSLOduration=124.94767229 podStartE2EDuration="2m4.94767229s" podCreationTimestamp="2026-02-16 22:46:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:48:24.946769634 +0000 UTC m=+145.270476595" watchObservedRunningTime="2026-02-16 22:48:24.94767229 +0000 UTC m=+145.271379251" Feb 16 22:48:24 crc kubenswrapper[4865]: I0216 22:48:24.992565 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t7gvl" podStartSLOduration=124.992548095 podStartE2EDuration="2m4.992548095s" podCreationTimestamp="2026-02-16 22:46:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:48:24.988776179 +0000 UTC m=+145.312483130" watchObservedRunningTime="2026-02-16 22:48:24.992548095 +0000 UTC m=+145.316255056" Feb 16 22:48:25 crc 
kubenswrapper[4865]: I0216 22:48:25.028328 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d87nj\" (UID: \"0d54cdef-872b-4b15-ad66-92a5aa695143\") " pod="openshift-image-registry/image-registry-697d97f7c8-d87nj" Feb 16 22:48:25 crc kubenswrapper[4865]: I0216 22:48:25.028748 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qcl4t" podStartSLOduration=125.028726125 podStartE2EDuration="2m5.028726125s" podCreationTimestamp="2026-02-16 22:46:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:48:25.025364851 +0000 UTC m=+145.349071812" watchObservedRunningTime="2026-02-16 22:48:25.028726125 +0000 UTC m=+145.352433086" Feb 16 22:48:25 crc kubenswrapper[4865]: E0216 22:48:25.033857 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 22:48:25.5338409 +0000 UTC m=+145.857547861 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d87nj" (UID: "0d54cdef-872b-4b15-ad66-92a5aa695143") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:48:25 crc kubenswrapper[4865]: I0216 22:48:25.067102 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dqspk" podStartSLOduration=125.067073667 podStartE2EDuration="2m5.067073667s" podCreationTimestamp="2026-02-16 22:46:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:48:25.065990526 +0000 UTC m=+145.389697497" watchObservedRunningTime="2026-02-16 22:48:25.067073667 +0000 UTC m=+145.390780628" Feb 16 22:48:25 crc kubenswrapper[4865]: I0216 22:48:25.123476 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-89cf9" podStartSLOduration=125.123453137 podStartE2EDuration="2m5.123453137s" podCreationTimestamp="2026-02-16 22:46:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:48:25.120604536 +0000 UTC m=+145.444311507" watchObservedRunningTime="2026-02-16 22:48:25.123453137 +0000 UTC m=+145.447160098" Feb 16 22:48:25 crc kubenswrapper[4865]: I0216 22:48:25.130100 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 22:48:25 crc kubenswrapper[4865]: E0216 22:48:25.130535 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 22:48:25.630510786 +0000 UTC m=+145.954217747 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:48:25 crc kubenswrapper[4865]: I0216 22:48:25.130583 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d87nj\" (UID: \"0d54cdef-872b-4b15-ad66-92a5aa695143\") " pod="openshift-image-registry/image-registry-697d97f7c8-d87nj" Feb 16 22:48:25 crc kubenswrapper[4865]: E0216 22:48:25.131082 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 22:48:25.631064331 +0000 UTC m=+145.954771292 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d87nj" (UID: "0d54cdef-872b-4b15-ad66-92a5aa695143") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:48:25 crc kubenswrapper[4865]: I0216 22:48:25.153367 4865 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-z5w9k container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" start-of-body= Feb 16 22:48:25 crc kubenswrapper[4865]: I0216 22:48:25.153734 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-z5w9k" podUID="526bdcb9-a06a-4d6a-9793-078e20245455" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" Feb 16 22:48:25 crc kubenswrapper[4865]: I0216 22:48:25.154331 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-d2vf7" podStartSLOduration=125.154309607 podStartE2EDuration="2m5.154309607s" podCreationTimestamp="2026-02-16 22:46:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:48:25.145142898 +0000 UTC m=+145.468849859" watchObservedRunningTime="2026-02-16 22:48:25.154309607 +0000 UTC m=+145.478016568" Feb 16 22:48:25 crc kubenswrapper[4865]: I0216 22:48:25.223614 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8dbg" podStartSLOduration=125.22358927 podStartE2EDuration="2m5.22358927s" podCreationTimestamp="2026-02-16 22:46:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:48:25.222819938 +0000 UTC m=+145.546526899" watchObservedRunningTime="2026-02-16 22:48:25.22358927 +0000 UTC m=+145.547296231" Feb 16 22:48:25 crc kubenswrapper[4865]: I0216 22:48:25.234710 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 22:48:25 crc kubenswrapper[4865]: E0216 22:48:25.237110 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 22:48:25.737082 +0000 UTC m=+146.060788951 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:48:25 crc kubenswrapper[4865]: I0216 22:48:25.237523 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d87nj\" (UID: \"0d54cdef-872b-4b15-ad66-92a5aa695143\") " pod="openshift-image-registry/image-registry-697d97f7c8-d87nj" Feb 16 22:48:25 crc kubenswrapper[4865]: E0216 22:48:25.241611 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 22:48:25.741565216 +0000 UTC m=+146.065272187 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d87nj" (UID: "0d54cdef-872b-4b15-ad66-92a5aa695143") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:48:25 crc kubenswrapper[4865]: I0216 22:48:25.341504 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8tpfm" podStartSLOduration=125.341485164 podStartE2EDuration="2m5.341485164s" podCreationTimestamp="2026-02-16 22:46:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:48:25.271809489 +0000 UTC m=+145.595516440" watchObservedRunningTime="2026-02-16 22:48:25.341485164 +0000 UTC m=+145.665192115" Feb 16 22:48:25 crc kubenswrapper[4865]: I0216 22:48:25.352778 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 22:48:25 crc kubenswrapper[4865]: E0216 22:48:25.353178 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 22:48:25.853163084 +0000 UTC m=+146.176870045 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:48:25 crc kubenswrapper[4865]: I0216 22:48:25.397938 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-z5w9k" podStartSLOduration=125.397910255 podStartE2EDuration="2m5.397910255s" podCreationTimestamp="2026-02-16 22:46:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:48:25.395851407 +0000 UTC m=+145.719558368" watchObservedRunningTime="2026-02-16 22:48:25.397910255 +0000 UTC m=+145.721617216" Feb 16 22:48:25 crc kubenswrapper[4865]: I0216 22:48:25.398763 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tztwm" podStartSLOduration=125.398755279 podStartE2EDuration="2m5.398755279s" podCreationTimestamp="2026-02-16 22:46:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:48:25.341822184 +0000 UTC m=+145.665529145" watchObservedRunningTime="2026-02-16 22:48:25.398755279 +0000 UTC m=+145.722462240" Feb 16 22:48:25 crc kubenswrapper[4865]: I0216 22:48:25.465394 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-d87nj\" (UID: \"0d54cdef-872b-4b15-ad66-92a5aa695143\") " pod="openshift-image-registry/image-registry-697d97f7c8-d87nj" Feb 16 22:48:25 crc kubenswrapper[4865]: E0216 22:48:25.465897 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 22:48:25.965881132 +0000 UTC m=+146.289588093 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d87nj" (UID: "0d54cdef-872b-4b15-ad66-92a5aa695143") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:48:25 crc kubenswrapper[4865]: I0216 22:48:25.480389 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-fz5qr" podStartSLOduration=6.478250331 podStartE2EDuration="6.478250331s" podCreationTimestamp="2026-02-16 22:48:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:48:25.463957128 +0000 UTC m=+145.787664089" watchObservedRunningTime="2026-02-16 22:48:25.478250331 +0000 UTC m=+145.801957302" Feb 16 22:48:25 crc kubenswrapper[4865]: I0216 22:48:25.481808 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xrmqm"] Feb 16 22:48:25 crc kubenswrapper[4865]: I0216 22:48:25.551713 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-bvddw" podStartSLOduration=5.551696332 podStartE2EDuration="5.551696332s" 
podCreationTimestamp="2026-02-16 22:48:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:48:25.54560174 +0000 UTC m=+145.869308701" watchObservedRunningTime="2026-02-16 22:48:25.551696332 +0000 UTC m=+145.875403283" Feb 16 22:48:25 crc kubenswrapper[4865]: I0216 22:48:25.566363 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 22:48:25 crc kubenswrapper[4865]: E0216 22:48:25.568743 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 22:48:26.068689961 +0000 UTC m=+146.392396922 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:48:25 crc kubenswrapper[4865]: I0216 22:48:25.572875 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d87nj\" (UID: \"0d54cdef-872b-4b15-ad66-92a5aa695143\") " pod="openshift-image-registry/image-registry-697d97f7c8-d87nj" Feb 16 22:48:25 crc kubenswrapper[4865]: E0216 22:48:25.580424 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 22:48:26.080390931 +0000 UTC m=+146.404097892 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d87nj" (UID: "0d54cdef-872b-4b15-ad66-92a5aa695143") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:48:25 crc kubenswrapper[4865]: I0216 22:48:25.593742 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-d2vf7" Feb 16 22:48:25 crc kubenswrapper[4865]: I0216 22:48:25.606392 4865 patch_prober.go:28] interesting pod/router-default-5444994796-d2vf7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 22:48:25 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Feb 16 22:48:25 crc kubenswrapper[4865]: [+]process-running ok Feb 16 22:48:25 crc kubenswrapper[4865]: healthz check failed Feb 16 22:48:25 crc kubenswrapper[4865]: I0216 22:48:25.606472 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-d2vf7" podUID="ae87db3d-c73d-4e96-8cda-ca3ac846f9b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 22:48:25 crc kubenswrapper[4865]: I0216 22:48:25.656641 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-8l2mn"] Feb 16 22:48:25 crc kubenswrapper[4865]: I0216 22:48:25.675538 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 22:48:25 crc kubenswrapper[4865]: E0216 22:48:25.676014 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 22:48:26.175992817 +0000 UTC m=+146.499699778 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:48:25 crc kubenswrapper[4865]: I0216 22:48:25.704633 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-t624g"] Feb 16 22:48:25 crc kubenswrapper[4865]: I0216 22:48:25.734332 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-b76ph"] Feb 16 22:48:25 crc kubenswrapper[4865]: I0216 22:48:25.778353 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d87nj\" (UID: \"0d54cdef-872b-4b15-ad66-92a5aa695143\") " pod="openshift-image-registry/image-registry-697d97f7c8-d87nj" Feb 16 22:48:25 crc kubenswrapper[4865]: E0216 22:48:25.778974 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-16 22:48:26.278956501 +0000 UTC m=+146.602663462 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d87nj" (UID: "0d54cdef-872b-4b15-ad66-92a5aa695143") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:48:25 crc kubenswrapper[4865]: I0216 22:48:25.784424 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-k58w4"] Feb 16 22:48:25 crc kubenswrapper[4865]: W0216 22:48:25.834876 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod715cd16c_2512_4885_ae7b_437fd61fcea2.slice/crio-4cc3830dab853e6057d20694bd9967f6ea0edb1baf8971fb7124e4ecf1f3d34e WatchSource:0}: Error finding container 4cc3830dab853e6057d20694bd9967f6ea0edb1baf8971fb7124e4ecf1f3d34e: Status 404 returned error can't find the container with id 4cc3830dab853e6057d20694bd9967f6ea0edb1baf8971fb7124e4ecf1f3d34e Feb 16 22:48:25 crc kubenswrapper[4865]: I0216 22:48:25.856790 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zfxvb"] Feb 16 22:48:25 crc kubenswrapper[4865]: I0216 22:48:25.883567 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 22:48:25 crc kubenswrapper[4865]: E0216 22:48:25.884109 4865 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 22:48:26.384071275 +0000 UTC m=+146.707778236 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:48:25 crc kubenswrapper[4865]: I0216 22:48:25.884309 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d87nj\" (UID: \"0d54cdef-872b-4b15-ad66-92a5aa695143\") " pod="openshift-image-registry/image-registry-697d97f7c8-d87nj" Feb 16 22:48:25 crc kubenswrapper[4865]: E0216 22:48:25.884674 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 22:48:26.384667122 +0000 UTC m=+146.708374083 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d87nj" (UID: "0d54cdef-872b-4b15-ad66-92a5aa695143") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:48:25 crc kubenswrapper[4865]: I0216 22:48:25.966813 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ls5tt" event={"ID":"47dbdc59-3676-456b-b077-8943ea216379","Type":"ContainerStarted","Data":"376cf67fbda9defae797bf67ae1cc1692ed67bc81f1145f790e9a32386b71974"} Feb 16 22:48:25 crc kubenswrapper[4865]: I0216 22:48:25.990888 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-k58w4" event={"ID":"29de9344-e059-4080-8c9f-9d07027204f7","Type":"ContainerStarted","Data":"b9852483a9bfc9bed32a979ac76528785da25f0d545c51140fb842c0e9e92f23"} Feb 16 22:48:25 crc kubenswrapper[4865]: I0216 22:48:25.996025 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 22:48:25 crc kubenswrapper[4865]: E0216 22:48:25.996419 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 22:48:26.496397223 +0000 UTC m=+146.820104184 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:48:26 crc kubenswrapper[4865]: I0216 22:48:26.002676 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-89cf9" Feb 16 22:48:26 crc kubenswrapper[4865]: I0216 22:48:26.003134 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-89cf9" Feb 16 22:48:26 crc kubenswrapper[4865]: I0216 22:48:26.026589 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-675st" event={"ID":"25c0299c-b4a2-4c82-881f-808b610fb325","Type":"ContainerStarted","Data":"6923bced2635912f8ffa4f0386b434c6be6629937d146cda4c448c6f4806aca0"} Feb 16 22:48:26 crc kubenswrapper[4865]: I0216 22:48:26.027642 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-675st" Feb 16 22:48:26 crc kubenswrapper[4865]: I0216 22:48:26.033007 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-f7dqg"] Feb 16 22:48:26 crc kubenswrapper[4865]: I0216 22:48:26.037232 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ls5tt" podStartSLOduration=126.037200373 podStartE2EDuration="2m6.037200373s" podCreationTimestamp="2026-02-16 22:46:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2026-02-16 22:48:26.017645482 +0000 UTC m=+146.341352443" watchObservedRunningTime="2026-02-16 22:48:26.037200373 +0000 UTC m=+146.360907354" Feb 16 22:48:26 crc kubenswrapper[4865]: I0216 22:48:26.042164 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8l2mn" event={"ID":"48530ee9-1daf-46ac-96d9-3439330122c7","Type":"ContainerStarted","Data":"605564caf529f52b8e36ee186d5b8fcb59f1c82a72c14e7f96c3897d73483449"} Feb 16 22:48:26 crc kubenswrapper[4865]: I0216 22:48:26.056442 4865 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-675st container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.18:6443/healthz\": dial tcp 10.217.0.18:6443: connect: connection refused" start-of-body= Feb 16 22:48:26 crc kubenswrapper[4865]: I0216 22:48:26.056510 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-675st" podUID="25c0299c-b4a2-4c82-881f-808b610fb325" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.18:6443/healthz\": dial tcp 10.217.0.18:6443: connect: connection refused" Feb 16 22:48:26 crc kubenswrapper[4865]: I0216 22:48:26.062738 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-8mkpj"] Feb 16 22:48:26 crc kubenswrapper[4865]: I0216 22:48:26.066417 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-t624g" event={"ID":"852722bc-cd18-4344-b8d1-a01be5c6ea33","Type":"ContainerStarted","Data":"ea02dc0f2612bd0142d2329a20dd78f108d76bdc409b9e2e475c044c68505151"} Feb 16 22:48:26 crc kubenswrapper[4865]: I0216 22:48:26.072511 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-hmgzd"] Feb 16 22:48:26 crc kubenswrapper[4865]: I0216 
22:48:26.074450 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-b76ph" event={"ID":"715cd16c-2512-4885-ae7b-437fd61fcea2","Type":"ContainerStarted","Data":"4cc3830dab853e6057d20694bd9967f6ea0edb1baf8971fb7124e4ecf1f3d34e"} Feb 16 22:48:26 crc kubenswrapper[4865]: I0216 22:48:26.098798 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d87nj\" (UID: \"0d54cdef-872b-4b15-ad66-92a5aa695143\") " pod="openshift-image-registry/image-registry-697d97f7c8-d87nj" Feb 16 22:48:26 crc kubenswrapper[4865]: I0216 22:48:26.101927 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xrmqm" event={"ID":"99f280f3-e7be-4a87-b8a9-b097ab14d671","Type":"ContainerStarted","Data":"0cc89cad6edde4863d59459d260f48530f4f7fcccecfdb1c340b4cc1b0215fb9"} Feb 16 22:48:26 crc kubenswrapper[4865]: I0216 22:48:26.103047 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-xrmqm" Feb 16 22:48:26 crc kubenswrapper[4865]: I0216 22:48:26.109396 4865 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-xrmqm container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Feb 16 22:48:26 crc kubenswrapper[4865]: I0216 22:48:26.109468 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-xrmqm" podUID="99f280f3-e7be-4a87-b8a9-b097ab14d671" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: 
connection refused" Feb 16 22:48:26 crc kubenswrapper[4865]: I0216 22:48:26.112494 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8dbg" Feb 16 22:48:26 crc kubenswrapper[4865]: I0216 22:48:26.113145 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8dbg" Feb 16 22:48:26 crc kubenswrapper[4865]: E0216 22:48:26.113238 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 22:48:26.613202787 +0000 UTC m=+146.936909748 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d87nj" (UID: "0d54cdef-872b-4b15-ad66-92a5aa695143") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:48:26 crc kubenswrapper[4865]: I0216 22:48:26.116783 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-675st" podStartSLOduration=126.10941101 podStartE2EDuration="2m6.10941101s" podCreationTimestamp="2026-02-16 22:46:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:48:26.097199635 +0000 UTC m=+146.420906616" watchObservedRunningTime="2026-02-16 22:48:26.10941101 +0000 UTC m=+146.433117971" Feb 16 22:48:26 crc kubenswrapper[4865]: I0216 22:48:26.151618 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-8sd85" 
event={"ID":"ca74042e-8e33-4ee4-aaa9-57fe06d4c710","Type":"ContainerStarted","Data":"023b54a9304f05d843a775b053cd8f7a860665c9f346e4cda3805237d074d377"} Feb 16 22:48:26 crc kubenswrapper[4865]: I0216 22:48:26.187074 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-z5w9k" Feb 16 22:48:26 crc kubenswrapper[4865]: I0216 22:48:26.189749 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8dbg" Feb 16 22:48:26 crc kubenswrapper[4865]: I0216 22:48:26.190825 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-xrmqm" podStartSLOduration=126.190810775 podStartE2EDuration="2m6.190810775s" podCreationTimestamp="2026-02-16 22:46:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:48:26.188329005 +0000 UTC m=+146.512035966" watchObservedRunningTime="2026-02-16 22:48:26.190810775 +0000 UTC m=+146.514517736" Feb 16 22:48:26 crc kubenswrapper[4865]: I0216 22:48:26.194299 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tztwm" Feb 16 22:48:26 crc kubenswrapper[4865]: I0216 22:48:26.218080 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 22:48:26 crc kubenswrapper[4865]: E0216 22:48:26.218220 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 22:48:26.718189097 +0000 UTC m=+147.041896058 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:48:26 crc kubenswrapper[4865]: I0216 22:48:26.218381 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d87nj\" (UID: \"0d54cdef-872b-4b15-ad66-92a5aa695143\") " pod="openshift-image-registry/image-registry-697d97f7c8-d87nj" Feb 16 22:48:26 crc kubenswrapper[4865]: E0216 22:48:26.220965 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 22:48:26.720943445 +0000 UTC m=+147.044650406 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d87nj" (UID: "0d54cdef-872b-4b15-ad66-92a5aa695143") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:48:26 crc kubenswrapper[4865]: I0216 22:48:26.339974 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 22:48:26 crc kubenswrapper[4865]: E0216 22:48:26.340344 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 22:48:26.840331552 +0000 UTC m=+147.164038513 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:48:26 crc kubenswrapper[4865]: I0216 22:48:26.349704 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lwxck"] Feb 16 22:48:26 crc kubenswrapper[4865]: I0216 22:48:26.399132 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-m696v"] Feb 16 22:48:26 crc kubenswrapper[4865]: I0216 22:48:26.400873 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6zkrs"] Feb 16 22:48:26 crc kubenswrapper[4865]: I0216 22:48:26.447091 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d87nj\" (UID: \"0d54cdef-872b-4b15-ad66-92a5aa695143\") " pod="openshift-image-registry/image-registry-697d97f7c8-d87nj" Feb 16 22:48:26 crc kubenswrapper[4865]: E0216 22:48:26.447616 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 22:48:26.947599717 +0000 UTC m=+147.271306678 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d87nj" (UID: "0d54cdef-872b-4b15-ad66-92a5aa695143") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:48:26 crc kubenswrapper[4865]: I0216 22:48:26.494347 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-2h8wl"] Feb 16 22:48:26 crc kubenswrapper[4865]: I0216 22:48:26.494397 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qq9vk"] Feb 16 22:48:26 crc kubenswrapper[4865]: I0216 22:48:26.494413 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-v8bqq"] Feb 16 22:48:26 crc kubenswrapper[4865]: I0216 22:48:26.501579 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-8lph5"] Feb 16 22:48:26 crc kubenswrapper[4865]: I0216 22:48:26.510655 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521365-bs68x"] Feb 16 22:48:26 crc kubenswrapper[4865]: W0216 22:48:26.518367 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7305176e_a416_41e7_8a1c_a92169a8a882.slice/crio-4e926608d46155551cf73b3f8335003556fcebb5aba5ea939cadbba82917f04f WatchSource:0}: Error finding container 4e926608d46155551cf73b3f8335003556fcebb5aba5ea939cadbba82917f04f: Status 404 returned error can't find the container with id 4e926608d46155551cf73b3f8335003556fcebb5aba5ea939cadbba82917f04f Feb 16 22:48:26 crc kubenswrapper[4865]: W0216 
22:48:26.551082 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe88c107_f404_4d93_b59e_471220c045ec.slice/crio-887a38efbb9a472470386eccf9baaf883eaf70205edaa760368e8b40f9f773c7 WatchSource:0}: Error finding container 887a38efbb9a472470386eccf9baaf883eaf70205edaa760368e8b40f9f773c7: Status 404 returned error can't find the container with id 887a38efbb9a472470386eccf9baaf883eaf70205edaa760368e8b40f9f773c7 Feb 16 22:48:26 crc kubenswrapper[4865]: I0216 22:48:26.552849 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 22:48:26 crc kubenswrapper[4865]: E0216 22:48:26.553351 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 22:48:27.053331798 +0000 UTC m=+147.377038759 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:48:26 crc kubenswrapper[4865]: W0216 22:48:26.560517 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46a7e06e_2a57_4c98_9cf9_36c417623bb7.slice/crio-6fe099e400d7ff5d25ce5f866fb4d738dce5209f9ab040e29cfa69c68c200852 WatchSource:0}: Error finding container 6fe099e400d7ff5d25ce5f866fb4d738dce5209f9ab040e29cfa69c68c200852: Status 404 returned error can't find the container with id 6fe099e400d7ff5d25ce5f866fb4d738dce5209f9ab040e29cfa69c68c200852 Feb 16 22:48:26 crc kubenswrapper[4865]: I0216 22:48:26.613354 4865 patch_prober.go:28] interesting pod/router-default-5444994796-d2vf7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 22:48:26 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Feb 16 22:48:26 crc kubenswrapper[4865]: [+]process-running ok Feb 16 22:48:26 crc kubenswrapper[4865]: healthz check failed Feb 16 22:48:26 crc kubenswrapper[4865]: I0216 22:48:26.613413 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-d2vf7" podUID="ae87db3d-c73d-4e96-8cda-ca3ac846f9b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 22:48:26 crc kubenswrapper[4865]: I0216 22:48:26.656541 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d87nj\" (UID: \"0d54cdef-872b-4b15-ad66-92a5aa695143\") " pod="openshift-image-registry/image-registry-697d97f7c8-d87nj" Feb 16 22:48:26 crc kubenswrapper[4865]: E0216 22:48:26.657099 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 22:48:27.157060474 +0000 UTC m=+147.480767435 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d87nj" (UID: "0d54cdef-872b-4b15-ad66-92a5aa695143") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:48:26 crc kubenswrapper[4865]: I0216 22:48:26.771932 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 22:48:26 crc kubenswrapper[4865]: E0216 22:48:26.772271 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 22:48:27.272249012 +0000 UTC m=+147.595955973 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:48:26 crc kubenswrapper[4865]: I0216 22:48:26.873833 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-xcchk" Feb 16 22:48:26 crc kubenswrapper[4865]: I0216 22:48:26.875589 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d87nj\" (UID: \"0d54cdef-872b-4b15-ad66-92a5aa695143\") " pod="openshift-image-registry/image-registry-697d97f7c8-d87nj" Feb 16 22:48:26 crc kubenswrapper[4865]: E0216 22:48:26.875934 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 22:48:27.375921515 +0000 UTC m=+147.699628476 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d87nj" (UID: "0d54cdef-872b-4b15-ad66-92a5aa695143") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:48:26 crc kubenswrapper[4865]: I0216 22:48:26.976497 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 22:48:26 crc kubenswrapper[4865]: E0216 22:48:26.978220 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 22:48:27.478184879 +0000 UTC m=+147.801891830 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.079011 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d87nj\" (UID: \"0d54cdef-872b-4b15-ad66-92a5aa695143\") " pod="openshift-image-registry/image-registry-697d97f7c8-d87nj" Feb 16 22:48:27 crc kubenswrapper[4865]: E0216 22:48:27.079463 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 22:48:27.579448265 +0000 UTC m=+147.903155226 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d87nj" (UID: "0d54cdef-872b-4b15-ad66-92a5aa695143") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.165134 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-t624g" event={"ID":"852722bc-cd18-4344-b8d1-a01be5c6ea33","Type":"ContainerStarted","Data":"ebcd0f3007ebac28f4b12632f546ded3f74a31ba63bd4c1fd92b9d7912fd9424"} Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.168391 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-t624g" Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.171614 4865 patch_prober.go:28] interesting pod/downloads-7954f5f757-t624g container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.171661 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t624g" podUID="852722bc-cd18-4344-b8d1-a01be5c6ea33" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.172895 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-b76ph" event={"ID":"715cd16c-2512-4885-ae7b-437fd61fcea2","Type":"ContainerStarted","Data":"433369b013c068169e81a7ceb43c71a3d6232df414d21fe0382dd49eb546f690"} Feb 16 
22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.180672 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 22:48:27 crc kubenswrapper[4865]: E0216 22:48:27.184521 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 22:48:27.684496257 +0000 UTC m=+148.008203218 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.185073 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d87nj\" (UID: \"0d54cdef-872b-4b15-ad66-92a5aa695143\") " pod="openshift-image-registry/image-registry-697d97f7c8-d87nj" Feb 16 22:48:27 crc kubenswrapper[4865]: E0216 22:48:27.186736 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-16 22:48:27.68672087 +0000 UTC m=+148.010427831 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d87nj" (UID: "0d54cdef-872b-4b15-ad66-92a5aa695143") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.193188 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-t624g" podStartSLOduration=127.193163362 podStartE2EDuration="2m7.193163362s" podCreationTimestamp="2026-02-16 22:46:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:48:27.190791135 +0000 UTC m=+147.514498096" watchObservedRunningTime="2026-02-16 22:48:27.193163362 +0000 UTC m=+147.516870323" Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.200487 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8mkpj" event={"ID":"f9e1b6e4-4f56-4dc1-a1fb-6040e6e34b12","Type":"ContainerStarted","Data":"8f41b8c48cb1b69814b926bd077a817b2708cafe8d8f6c2049ad87d12a7dff67"} Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.200557 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8mkpj" event={"ID":"f9e1b6e4-4f56-4dc1-a1fb-6040e6e34b12","Type":"ContainerStarted","Data":"dc014dfd124ac712f9d551efcd636254185d6aa69c1d635fa5f721eada9e4b96"} Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.213452 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qq9vk" event={"ID":"09255158-b7b9-4a34-8fe3-0b7864ec5f11","Type":"ContainerStarted","Data":"56eeac3e8ea6b52274d237f2ac9f026a407f1350b15de0dd0bbd4f5806cb6264"} Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.236211 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-hmgzd" event={"ID":"596fa99e-76fb-4442-ae6d-7dc7e632f377","Type":"ContainerStarted","Data":"008ce5905cdfdf3b1e1bdea5608391fe3ea7560222077ef3c658093c24c9dd49"} Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.236289 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-hmgzd" event={"ID":"596fa99e-76fb-4442-ae6d-7dc7e632f377","Type":"ContainerStarted","Data":"71027235023b306a07f37d655e758969f504a5fba6b5c2f952032bc0bfb670b1"} Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.244636 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f7dqg" event={"ID":"7d625868-f11c-4bcd-b1f0-dbe1d50d24ba","Type":"ContainerStarted","Data":"4de77536a9f94c3b63c4427e946c8f50cbd5d8a93b559eb277b18d54d2efea8c"} Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.244681 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f7dqg" event={"ID":"7d625868-f11c-4bcd-b1f0-dbe1d50d24ba","Type":"ContainerStarted","Data":"ee74a251174e017ca2b216deae3321ea1ba0aa0baf1af3ba1d9c2f4d773ddae7"} Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.263171 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xrmqm" event={"ID":"99f280f3-e7be-4a87-b8a9-b097ab14d671","Type":"ContainerStarted","Data":"f8d9cdc07d925d2d67107ff20b1c5026902e3bb3ee16dc9f98d8aa97473829cc"} Feb 16 22:48:27 crc 
kubenswrapper[4865]: I0216 22:48:27.263719 4865 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-xrmqm container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.263769 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-xrmqm" podUID="99f280f3-e7be-4a87-b8a9-b097ab14d671" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: connection refused" Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.270222 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-8lph5" event={"ID":"46a7e06e-2a57-4c98-9cf9-36c417623bb7","Type":"ContainerStarted","Data":"6fe099e400d7ff5d25ce5f866fb4d738dce5209f9ab040e29cfa69c68c200852"} Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.280869 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521365-bs68x" event={"ID":"67ee4300-52d2-45bc-8420-9045db672f41","Type":"ContainerStarted","Data":"1d166a59f515c54cb80964a8dfc6220c8751ffa63113b174b9e98e46bbf996e4"} Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.293224 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 22:48:27 crc kubenswrapper[4865]: E0216 22:48:27.293392 4865 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 22:48:27.793357347 +0000 UTC m=+148.117064298 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.293696 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d87nj\" (UID: \"0d54cdef-872b-4b15-ad66-92a5aa695143\") " pod="openshift-image-registry/image-registry-697d97f7c8-d87nj" Feb 16 22:48:27 crc kubenswrapper[4865]: E0216 22:48:27.294313 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 22:48:27.794259943 +0000 UTC m=+148.117967084 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d87nj" (UID: "0d54cdef-872b-4b15-ad66-92a5aa695143") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.326927 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8l2mn" event={"ID":"48530ee9-1daf-46ac-96d9-3439330122c7","Type":"ContainerStarted","Data":"31abff1badbbc66bb6e937e93ccf827b45d72e414ef29310113c17dca95063fe"} Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.326985 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8l2mn" event={"ID":"48530ee9-1daf-46ac-96d9-3439330122c7","Type":"ContainerStarted","Data":"3fee73dde7d58672ae52a0d42164ff9261e8cc416a8112814360d70ca4a408dc"} Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.347660 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8l2mn" podStartSLOduration=127.347635788 podStartE2EDuration="2m7.347635788s" podCreationTimestamp="2026-02-16 22:46:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:48:27.346171347 +0000 UTC m=+147.669878308" watchObservedRunningTime="2026-02-16 22:48:27.347635788 +0000 UTC m=+147.671342749" Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.348487 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-m696v" 
event={"ID":"568e629f-a067-47f4-a6e9-ca18ca03e582","Type":"ContainerStarted","Data":"0e4d4859717e9f2e262ba94f69c1d3f4f31b707dc15d973ce22d52e41bfc6f83"} Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.353682 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-hmgzd" podStartSLOduration=127.353654748 podStartE2EDuration="2m7.353654748s" podCreationTimestamp="2026-02-16 22:46:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:48:27.260617004 +0000 UTC m=+147.584324165" watchObservedRunningTime="2026-02-16 22:48:27.353654748 +0000 UTC m=+147.677361709" Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.367588 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-8sd85" event={"ID":"ca74042e-8e33-4ee4-aaa9-57fe06d4c710","Type":"ContainerStarted","Data":"b80c984ebb3a27b5e17a7ffcb8a12f8f9510e4e2e3a86ae27d1604444ded0152"} Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.367637 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-8sd85" event={"ID":"ca74042e-8e33-4ee4-aaa9-57fe06d4c710","Type":"ContainerStarted","Data":"ba24d3b1cbec89c4d23eb1082e4d4f18fef4e4737415a1829284df7e1b3b9972"} Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.378859 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-8sd85" Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.381045 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zfxvb" event={"ID":"f6b57ccb-043a-40ad-b510-2ed6b3683c97","Type":"ContainerStarted","Data":"2726f3ad5b158860420be6e66cd1416040e46ddb9c6fde90ce5aa14920376116"} Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.381108 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zfxvb" event={"ID":"f6b57ccb-043a-40ad-b510-2ed6b3683c97","Type":"ContainerStarted","Data":"340cfb0452c3a99812a68e6e47a8c755c95d42f9793185e309552cc4b0987997"} Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.381518 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zfxvb" Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.384060 4865 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-zfxvb container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body= Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.384113 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zfxvb" podUID="f6b57ccb-043a-40ad-b510-2ed6b3683c97" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.398312 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 22:48:27 crc kubenswrapper[4865]: E0216 22:48:27.399756 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 22:48:27.899734457 +0000 UTC m=+148.223441418 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.401535 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-v8bqq" event={"ID":"be88c107-f404-4d93-b59e-471220c045ec","Type":"ContainerStarted","Data":"887a38efbb9a472470386eccf9baaf883eaf70205edaa760368e8b40f9f773c7"} Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.407038 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lwxck" event={"ID":"ea816f0c-3205-4862-ace4-8caf8bc8d7a9","Type":"ContainerStarted","Data":"1f6d15f49c81803e739897ddf06864138b908acfd472f589b8b29f747d4b8ad7"} Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.407098 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lwxck" event={"ID":"ea816f0c-3205-4862-ace4-8caf8bc8d7a9","Type":"ContainerStarted","Data":"942b4d80c2ec77b002c709186195e305b4b1b7a1d5ad1c395caf4c189e4ef6e5"} Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.422800 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-2h8wl" event={"ID":"7305176e-a416-41e7-8a1c-a92169a8a882","Type":"ContainerStarted","Data":"b41861b254843eb2221418bc846c3e9a5e9c6f1743455763e26d41f82f364943"} Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.422850 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-2h8wl" 
event={"ID":"7305176e-a416-41e7-8a1c-a92169a8a882","Type":"ContainerStarted","Data":"4e926608d46155551cf73b3f8335003556fcebb5aba5ea939cadbba82917f04f"} Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.425684 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6zkrs" event={"ID":"fcbe8037-e83d-4581-9759-a7accf45937a","Type":"ContainerStarted","Data":"cfe1044b001c49d2d5aba53b28719301530ff2834745d231c450f1274fbfcc43"} Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.425754 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6zkrs" event={"ID":"fcbe8037-e83d-4581-9759-a7accf45937a","Type":"ContainerStarted","Data":"47d5d0943cc5ffc3fc7c638a5b1a0580add5c2d52d45d2cd385322ef41681799"} Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.425864 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6zkrs" Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.432778 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8dbg" Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.441398 4865 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-6zkrs container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:5443/healthz\": dial tcp 10.217.0.32:5443: connect: connection refused" start-of-body= Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.441464 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6zkrs" podUID="fcbe8037-e83d-4581-9759-a7accf45937a" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.32:5443/healthz\": dial tcp 10.217.0.32:5443: connect: connection refused" Feb 
16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.445001 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-675st" Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.484790 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-8sd85" podStartSLOduration=8.484767175 podStartE2EDuration="8.484767175s" podCreationTimestamp="2026-02-16 22:48:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:48:27.419528705 +0000 UTC m=+147.743235656" watchObservedRunningTime="2026-02-16 22:48:27.484767175 +0000 UTC m=+147.808474136" Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.484983 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zfxvb" podStartSLOduration=127.484979701 podStartE2EDuration="2m7.484979701s" podCreationTimestamp="2026-02-16 22:46:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:48:27.480642629 +0000 UTC m=+147.804349590" watchObservedRunningTime="2026-02-16 22:48:27.484979701 +0000 UTC m=+147.808686662" Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.495079 4865 patch_prober.go:28] interesting pod/apiserver-76f77b778f-89cf9 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 16 22:48:27 crc kubenswrapper[4865]: [+]log ok Feb 16 22:48:27 crc kubenswrapper[4865]: [+]etcd ok Feb 16 22:48:27 crc kubenswrapper[4865]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 16 22:48:27 crc kubenswrapper[4865]: [+]poststarthook/generic-apiserver-start-informers ok Feb 16 22:48:27 crc kubenswrapper[4865]: 
[+]poststarthook/max-in-flight-filter ok Feb 16 22:48:27 crc kubenswrapper[4865]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 16 22:48:27 crc kubenswrapper[4865]: [+]poststarthook/image.openshift.io-apiserver-caches ok Feb 16 22:48:27 crc kubenswrapper[4865]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Feb 16 22:48:27 crc kubenswrapper[4865]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Feb 16 22:48:27 crc kubenswrapper[4865]: [+]poststarthook/project.openshift.io-projectcache ok Feb 16 22:48:27 crc kubenswrapper[4865]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Feb 16 22:48:27 crc kubenswrapper[4865]: [+]poststarthook/openshift.io-startinformers ok Feb 16 22:48:27 crc kubenswrapper[4865]: [+]poststarthook/openshift.io-restmapperupdater ok Feb 16 22:48:27 crc kubenswrapper[4865]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 16 22:48:27 crc kubenswrapper[4865]: livez check failed Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.495203 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-89cf9" podUID="a0e99571-69de-43a8-9136-94d455e348c7" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.500493 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d87nj\" (UID: \"0d54cdef-872b-4b15-ad66-92a5aa695143\") " pod="openshift-image-registry/image-registry-697d97f7c8-d87nj" Feb 16 22:48:27 crc kubenswrapper[4865]: E0216 22:48:27.502878 4865 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 22:48:28.002862655 +0000 UTC m=+148.326569616 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d87nj" (UID: "0d54cdef-872b-4b15-ad66-92a5aa695143") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.532882 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6zkrs" podStartSLOduration=127.532856981 podStartE2EDuration="2m7.532856981s" podCreationTimestamp="2026-02-16 22:46:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:48:27.529685782 +0000 UTC m=+147.853392743" watchObservedRunningTime="2026-02-16 22:48:27.532856981 +0000 UTC m=+147.856563942" Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.546546 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-d2l24"] Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.548468 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d2l24" Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.553372 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.565115 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d2l24"] Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.604470 4865 patch_prober.go:28] interesting pod/router-default-5444994796-d2vf7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 22:48:27 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Feb 16 22:48:27 crc kubenswrapper[4865]: [+]process-running ok Feb 16 22:48:27 crc kubenswrapper[4865]: healthz check failed Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.604540 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-d2vf7" podUID="ae87db3d-c73d-4e96-8cda-ca3ac846f9b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.604768 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.605918 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5196bfb6-4d27-4d41-8310-8efb2b8997bd-catalog-content\") pod \"certified-operators-d2l24\" (UID: 
\"5196bfb6-4d27-4d41-8310-8efb2b8997bd\") " pod="openshift-marketplace/certified-operators-d2l24" Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.605955 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5196bfb6-4d27-4d41-8310-8efb2b8997bd-utilities\") pod \"certified-operators-d2l24\" (UID: \"5196bfb6-4d27-4d41-8310-8efb2b8997bd\") " pod="openshift-marketplace/certified-operators-d2l24" Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.606009 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgmst\" (UniqueName: \"kubernetes.io/projected/5196bfb6-4d27-4d41-8310-8efb2b8997bd-kube-api-access-hgmst\") pod \"certified-operators-d2l24\" (UID: \"5196bfb6-4d27-4d41-8310-8efb2b8997bd\") " pod="openshift-marketplace/certified-operators-d2l24" Feb 16 22:48:27 crc kubenswrapper[4865]: E0216 22:48:27.606145 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 22:48:28.106108217 +0000 UTC m=+148.429815178 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.659690 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-2h8wl" podStartSLOduration=127.659664417 podStartE2EDuration="2m7.659664417s" podCreationTimestamp="2026-02-16 22:46:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:48:27.626858832 +0000 UTC m=+147.950565783" watchObservedRunningTime="2026-02-16 22:48:27.659664417 +0000 UTC m=+147.983371378" Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.713499 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5196bfb6-4d27-4d41-8310-8efb2b8997bd-catalog-content\") pod \"certified-operators-d2l24\" (UID: \"5196bfb6-4d27-4d41-8310-8efb2b8997bd\") " pod="openshift-marketplace/certified-operators-d2l24" Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.713560 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5196bfb6-4d27-4d41-8310-8efb2b8997bd-utilities\") pod \"certified-operators-d2l24\" (UID: \"5196bfb6-4d27-4d41-8310-8efb2b8997bd\") " pod="openshift-marketplace/certified-operators-d2l24" Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.713604 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d87nj\" (UID: \"0d54cdef-872b-4b15-ad66-92a5aa695143\") " pod="openshift-image-registry/image-registry-697d97f7c8-d87nj" Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.713631 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgmst\" (UniqueName: \"kubernetes.io/projected/5196bfb6-4d27-4d41-8310-8efb2b8997bd-kube-api-access-hgmst\") pod \"certified-operators-d2l24\" (UID: \"5196bfb6-4d27-4d41-8310-8efb2b8997bd\") " pod="openshift-marketplace/certified-operators-d2l24" Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.714581 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5196bfb6-4d27-4d41-8310-8efb2b8997bd-catalog-content\") pod \"certified-operators-d2l24\" (UID: \"5196bfb6-4d27-4d41-8310-8efb2b8997bd\") " pod="openshift-marketplace/certified-operators-d2l24" Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.714852 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5196bfb6-4d27-4d41-8310-8efb2b8997bd-utilities\") pod \"certified-operators-d2l24\" (UID: \"5196bfb6-4d27-4d41-8310-8efb2b8997bd\") " pod="openshift-marketplace/certified-operators-d2l24" Feb 16 22:48:27 crc kubenswrapper[4865]: E0216 22:48:27.715163 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 22:48:28.215151292 +0000 UTC m=+148.538858253 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d87nj" (UID: "0d54cdef-872b-4b15-ad66-92a5aa695143") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.724458 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-s2z5v"] Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.725655 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s2z5v" Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.733849 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.743474 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s2z5v"] Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.754298 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgmst\" (UniqueName: \"kubernetes.io/projected/5196bfb6-4d27-4d41-8310-8efb2b8997bd-kube-api-access-hgmst\") pod \"certified-operators-d2l24\" (UID: \"5196bfb6-4d27-4d41-8310-8efb2b8997bd\") " pod="openshift-marketplace/certified-operators-d2l24" Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.816190 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 22:48:27 
crc kubenswrapper[4865]: I0216 22:48:27.816623 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6cfa25f-5974-4b2e-9df0-b0e98112b561-catalog-content\") pod \"community-operators-s2z5v\" (UID: \"f6cfa25f-5974-4b2e-9df0-b0e98112b561\") " pod="openshift-marketplace/community-operators-s2z5v" Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.816673 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6cfa25f-5974-4b2e-9df0-b0e98112b561-utilities\") pod \"community-operators-s2z5v\" (UID: \"f6cfa25f-5974-4b2e-9df0-b0e98112b561\") " pod="openshift-marketplace/community-operators-s2z5v" Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.816740 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7kbr\" (UniqueName: \"kubernetes.io/projected/f6cfa25f-5974-4b2e-9df0-b0e98112b561-kube-api-access-n7kbr\") pod \"community-operators-s2z5v\" (UID: \"f6cfa25f-5974-4b2e-9df0-b0e98112b561\") " pod="openshift-marketplace/community-operators-s2z5v" Feb 16 22:48:27 crc kubenswrapper[4865]: E0216 22:48:27.816878 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 22:48:28.31685328 +0000 UTC m=+148.640560241 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.892027 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d2l24" Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.920005 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6cfa25f-5974-4b2e-9df0-b0e98112b561-utilities\") pod \"community-operators-s2z5v\" (UID: \"f6cfa25f-5974-4b2e-9df0-b0e98112b561\") " pod="openshift-marketplace/community-operators-s2z5v" Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.920133 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d87nj\" (UID: \"0d54cdef-872b-4b15-ad66-92a5aa695143\") " pod="openshift-image-registry/image-registry-697d97f7c8-d87nj" Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.920195 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7kbr\" (UniqueName: \"kubernetes.io/projected/f6cfa25f-5974-4b2e-9df0-b0e98112b561-kube-api-access-n7kbr\") pod \"community-operators-s2z5v\" (UID: \"f6cfa25f-5974-4b2e-9df0-b0e98112b561\") " pod="openshift-marketplace/community-operators-s2z5v" Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.920318 4865 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6cfa25f-5974-4b2e-9df0-b0e98112b561-catalog-content\") pod \"community-operators-s2z5v\" (UID: \"f6cfa25f-5974-4b2e-9df0-b0e98112b561\") " pod="openshift-marketplace/community-operators-s2z5v" Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.921337 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6cfa25f-5974-4b2e-9df0-b0e98112b561-catalog-content\") pod \"community-operators-s2z5v\" (UID: \"f6cfa25f-5974-4b2e-9df0-b0e98112b561\") " pod="openshift-marketplace/community-operators-s2z5v" Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.921590 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6cfa25f-5974-4b2e-9df0-b0e98112b561-utilities\") pod \"community-operators-s2z5v\" (UID: \"f6cfa25f-5974-4b2e-9df0-b0e98112b561\") " pod="openshift-marketplace/community-operators-s2z5v" Feb 16 22:48:27 crc kubenswrapper[4865]: E0216 22:48:27.921950 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 22:48:28.421922243 +0000 UTC m=+148.745629204 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d87nj" (UID: "0d54cdef-872b-4b15-ad66-92a5aa695143") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.923339 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-r57lx"] Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.924828 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r57lx" Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.946245 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r57lx"] Feb 16 22:48:27 crc kubenswrapper[4865]: I0216 22:48:27.964453 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7kbr\" (UniqueName: \"kubernetes.io/projected/f6cfa25f-5974-4b2e-9df0-b0e98112b561-kube-api-access-n7kbr\") pod \"community-operators-s2z5v\" (UID: \"f6cfa25f-5974-4b2e-9df0-b0e98112b561\") " pod="openshift-marketplace/community-operators-s2z5v" Feb 16 22:48:28 crc kubenswrapper[4865]: I0216 22:48:28.021439 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 22:48:28 crc kubenswrapper[4865]: E0216 22:48:28.021626 4865 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 22:48:28.521581033 +0000 UTC m=+148.845287994 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:48:28 crc kubenswrapper[4865]: I0216 22:48:28.021879 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63128d09-9bb2-40e2-8c5e-d16c8f54d2c2-utilities\") pod \"certified-operators-r57lx\" (UID: \"63128d09-9bb2-40e2-8c5e-d16c8f54d2c2\") " pod="openshift-marketplace/certified-operators-r57lx" Feb 16 22:48:28 crc kubenswrapper[4865]: I0216 22:48:28.021916 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkf99\" (UniqueName: \"kubernetes.io/projected/63128d09-9bb2-40e2-8c5e-d16c8f54d2c2-kube-api-access-xkf99\") pod \"certified-operators-r57lx\" (UID: \"63128d09-9bb2-40e2-8c5e-d16c8f54d2c2\") " pod="openshift-marketplace/certified-operators-r57lx" Feb 16 22:48:28 crc kubenswrapper[4865]: I0216 22:48:28.021948 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d87nj\" (UID: \"0d54cdef-872b-4b15-ad66-92a5aa695143\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-d87nj" Feb 16 22:48:28 crc kubenswrapper[4865]: I0216 22:48:28.021981 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63128d09-9bb2-40e2-8c5e-d16c8f54d2c2-catalog-content\") pod \"certified-operators-r57lx\" (UID: \"63128d09-9bb2-40e2-8c5e-d16c8f54d2c2\") " pod="openshift-marketplace/certified-operators-r57lx" Feb 16 22:48:28 crc kubenswrapper[4865]: E0216 22:48:28.022502 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 22:48:28.522485218 +0000 UTC m=+148.846192179 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d87nj" (UID: "0d54cdef-872b-4b15-ad66-92a5aa695143") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:48:28 crc kubenswrapper[4865]: I0216 22:48:28.051452 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-s2z5v" Feb 16 22:48:28 crc kubenswrapper[4865]: I0216 22:48:28.126217 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 22:48:28 crc kubenswrapper[4865]: I0216 22:48:28.127150 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63128d09-9bb2-40e2-8c5e-d16c8f54d2c2-catalog-content\") pod \"certified-operators-r57lx\" (UID: \"63128d09-9bb2-40e2-8c5e-d16c8f54d2c2\") " pod="openshift-marketplace/certified-operators-r57lx" Feb 16 22:48:28 crc kubenswrapper[4865]: I0216 22:48:28.127325 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63128d09-9bb2-40e2-8c5e-d16c8f54d2c2-utilities\") pod \"certified-operators-r57lx\" (UID: \"63128d09-9bb2-40e2-8c5e-d16c8f54d2c2\") " pod="openshift-marketplace/certified-operators-r57lx" Feb 16 22:48:28 crc kubenswrapper[4865]: I0216 22:48:28.127355 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkf99\" (UniqueName: \"kubernetes.io/projected/63128d09-9bb2-40e2-8c5e-d16c8f54d2c2-kube-api-access-xkf99\") pod \"certified-operators-r57lx\" (UID: \"63128d09-9bb2-40e2-8c5e-d16c8f54d2c2\") " pod="openshift-marketplace/certified-operators-r57lx" Feb 16 22:48:28 crc kubenswrapper[4865]: E0216 22:48:28.127936 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-16 22:48:28.627911652 +0000 UTC m=+148.951618613 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:48:28 crc kubenswrapper[4865]: I0216 22:48:28.128487 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63128d09-9bb2-40e2-8c5e-d16c8f54d2c2-catalog-content\") pod \"certified-operators-r57lx\" (UID: \"63128d09-9bb2-40e2-8c5e-d16c8f54d2c2\") " pod="openshift-marketplace/certified-operators-r57lx" Feb 16 22:48:28 crc kubenswrapper[4865]: I0216 22:48:28.129065 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63128d09-9bb2-40e2-8c5e-d16c8f54d2c2-utilities\") pod \"certified-operators-r57lx\" (UID: \"63128d09-9bb2-40e2-8c5e-d16c8f54d2c2\") " pod="openshift-marketplace/certified-operators-r57lx" Feb 16 22:48:28 crc kubenswrapper[4865]: I0216 22:48:28.129248 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kk986"] Feb 16 22:48:28 crc kubenswrapper[4865]: I0216 22:48:28.130392 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kk986" Feb 16 22:48:28 crc kubenswrapper[4865]: I0216 22:48:28.150066 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kk986"] Feb 16 22:48:28 crc kubenswrapper[4865]: I0216 22:48:28.173423 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkf99\" (UniqueName: \"kubernetes.io/projected/63128d09-9bb2-40e2-8c5e-d16c8f54d2c2-kube-api-access-xkf99\") pod \"certified-operators-r57lx\" (UID: \"63128d09-9bb2-40e2-8c5e-d16c8f54d2c2\") " pod="openshift-marketplace/certified-operators-r57lx" Feb 16 22:48:28 crc kubenswrapper[4865]: I0216 22:48:28.229569 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b55c2ce-ae41-4b11-925e-b6085f288345-utilities\") pod \"community-operators-kk986\" (UID: \"8b55c2ce-ae41-4b11-925e-b6085f288345\") " pod="openshift-marketplace/community-operators-kk986" Feb 16 22:48:28 crc kubenswrapper[4865]: I0216 22:48:28.229628 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d87nj\" (UID: \"0d54cdef-872b-4b15-ad66-92a5aa695143\") " pod="openshift-image-registry/image-registry-697d97f7c8-d87nj" Feb 16 22:48:28 crc kubenswrapper[4865]: I0216 22:48:28.229662 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b55c2ce-ae41-4b11-925e-b6085f288345-catalog-content\") pod \"community-operators-kk986\" (UID: \"8b55c2ce-ae41-4b11-925e-b6085f288345\") " pod="openshift-marketplace/community-operators-kk986" Feb 16 22:48:28 crc kubenswrapper[4865]: I0216 22:48:28.229828 4865 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lv6p4\" (UniqueName: \"kubernetes.io/projected/8b55c2ce-ae41-4b11-925e-b6085f288345-kube-api-access-lv6p4\") pod \"community-operators-kk986\" (UID: \"8b55c2ce-ae41-4b11-925e-b6085f288345\") " pod="openshift-marketplace/community-operators-kk986" Feb 16 22:48:28 crc kubenswrapper[4865]: E0216 22:48:28.230113 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 22:48:28.730096603 +0000 UTC m=+149.053803564 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d87nj" (UID: "0d54cdef-872b-4b15-ad66-92a5aa695143") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:48:28 crc kubenswrapper[4865]: I0216 22:48:28.243939 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-r57lx" Feb 16 22:48:28 crc kubenswrapper[4865]: I0216 22:48:28.297239 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d2l24"] Feb 16 22:48:28 crc kubenswrapper[4865]: I0216 22:48:28.338411 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 22:48:28 crc kubenswrapper[4865]: E0216 22:48:28.338810 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 22:48:28.838757858 +0000 UTC m=+149.162464819 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:48:28 crc kubenswrapper[4865]: I0216 22:48:28.339122 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b55c2ce-ae41-4b11-925e-b6085f288345-utilities\") pod \"community-operators-kk986\" (UID: \"8b55c2ce-ae41-4b11-925e-b6085f288345\") " pod="openshift-marketplace/community-operators-kk986" Feb 16 22:48:28 crc kubenswrapper[4865]: I0216 22:48:28.339165 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d87nj\" (UID: \"0d54cdef-872b-4b15-ad66-92a5aa695143\") " pod="openshift-image-registry/image-registry-697d97f7c8-d87nj" Feb 16 22:48:28 crc kubenswrapper[4865]: I0216 22:48:28.339195 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b55c2ce-ae41-4b11-925e-b6085f288345-catalog-content\") pod \"community-operators-kk986\" (UID: \"8b55c2ce-ae41-4b11-925e-b6085f288345\") " pod="openshift-marketplace/community-operators-kk986" Feb 16 22:48:28 crc kubenswrapper[4865]: I0216 22:48:28.339226 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lv6p4\" (UniqueName: \"kubernetes.io/projected/8b55c2ce-ae41-4b11-925e-b6085f288345-kube-api-access-lv6p4\") pod \"community-operators-kk986\" (UID: 
\"8b55c2ce-ae41-4b11-925e-b6085f288345\") " pod="openshift-marketplace/community-operators-kk986" Feb 16 22:48:28 crc kubenswrapper[4865]: I0216 22:48:28.339288 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 22:48:28 crc kubenswrapper[4865]: I0216 22:48:28.339419 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 22:48:28 crc kubenswrapper[4865]: E0216 22:48:28.339861 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 22:48:28.839836898 +0000 UTC m=+149.163543859 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d87nj" (UID: "0d54cdef-872b-4b15-ad66-92a5aa695143") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:48:28 crc kubenswrapper[4865]: I0216 22:48:28.339871 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b55c2ce-ae41-4b11-925e-b6085f288345-utilities\") pod \"community-operators-kk986\" (UID: \"8b55c2ce-ae41-4b11-925e-b6085f288345\") " pod="openshift-marketplace/community-operators-kk986" Feb 16 22:48:28 crc kubenswrapper[4865]: I0216 22:48:28.340024 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b55c2ce-ae41-4b11-925e-b6085f288345-catalog-content\") pod \"community-operators-kk986\" (UID: \"8b55c2ce-ae41-4b11-925e-b6085f288345\") " pod="openshift-marketplace/community-operators-kk986" Feb 16 22:48:28 crc kubenswrapper[4865]: I0216 22:48:28.340873 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 22:48:28 crc kubenswrapper[4865]: I0216 22:48:28.346530 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: 
\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 22:48:28 crc kubenswrapper[4865]: I0216 22:48:28.370445 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lv6p4\" (UniqueName: \"kubernetes.io/projected/8b55c2ce-ae41-4b11-925e-b6085f288345-kube-api-access-lv6p4\") pod \"community-operators-kk986\" (UID: \"8b55c2ce-ae41-4b11-925e-b6085f288345\") " pod="openshift-marketplace/community-operators-kk986" Feb 16 22:48:28 crc kubenswrapper[4865]: I0216 22:48:28.436073 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 22:48:28 crc kubenswrapper[4865]: I0216 22:48:28.441007 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 22:48:28 crc kubenswrapper[4865]: I0216 22:48:28.441443 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 22:48:28 crc kubenswrapper[4865]: I0216 22:48:28.441499 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 
22:48:28 crc kubenswrapper[4865]: E0216 22:48:28.442345 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 22:48:28.942318348 +0000 UTC m=+149.266025319 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:48:28 crc kubenswrapper[4865]: I0216 22:48:28.444797 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 22:48:28 crc kubenswrapper[4865]: I0216 22:48:28.445004 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 22:48:28 crc kubenswrapper[4865]: I0216 22:48:28.446821 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 22:48:28 crc kubenswrapper[4865]: I0216 22:48:28.451526 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 22:48:28 crc kubenswrapper[4865]: I0216 22:48:28.467717 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d2l24" event={"ID":"5196bfb6-4d27-4d41-8310-8efb2b8997bd","Type":"ContainerStarted","Data":"62c84098b2180eeec3445df4208d78bb46844717a91aef4add99b0a58960783f"} Feb 16 22:48:28 crc kubenswrapper[4865]: I0216 22:48:28.485531 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-v8bqq" event={"ID":"be88c107-f404-4d93-b59e-471220c045ec","Type":"ContainerStarted","Data":"1a2a8f04e7efee7287ed9517a60391a4a5a8512b25cf496baca863a012501720"} Feb 16 22:48:28 crc kubenswrapper[4865]: I0216 22:48:28.486857 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-v8bqq" Feb 16 22:48:28 crc kubenswrapper[4865]: I0216 22:48:28.493704 4865 patch_prober.go:28] interesting pod/console-operator-58897d9998-v8bqq container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/readyz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Feb 16 22:48:28 crc kubenswrapper[4865]: I0216 22:48:28.493766 4865 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-v8bqq" podUID="be88c107-f404-4d93-b59e-471220c045ec" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/readyz\": dial tcp 10.217.0.15:8443: connect: connection refused" Feb 16 22:48:28 crc kubenswrapper[4865]: I0216 22:48:28.497880 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-k58w4" event={"ID":"29de9344-e059-4080-8c9f-9d07027204f7","Type":"ContainerStarted","Data":"4071c8aec333aa15344d6bba1b43562119d669c6af4d16a78c7f974b9f9c7065"} Feb 16 22:48:28 crc kubenswrapper[4865]: I0216 22:48:28.514381 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lwxck" event={"ID":"ea816f0c-3205-4862-ace4-8caf8bc8d7a9","Type":"ContainerStarted","Data":"1fdfd5b2b04f94f9e3057e9841d1c8768823e0312d95a839cd5ee978daff1d87"} Feb 16 22:48:28 crc kubenswrapper[4865]: I0216 22:48:28.515209 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lwxck" Feb 16 22:48:28 crc kubenswrapper[4865]: I0216 22:48:28.524669 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kk986" Feb 16 22:48:28 crc kubenswrapper[4865]: I0216 22:48:28.540561 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s2z5v"] Feb 16 22:48:28 crc kubenswrapper[4865]: I0216 22:48:28.540804 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-8lph5" event={"ID":"46a7e06e-2a57-4c98-9cf9-36c417623bb7","Type":"ContainerStarted","Data":"cf892cbc89cdffbccf872aa7b4ee38f7be5a84efbacb328a2841fb8a8434c9cc"} Feb 16 22:48:28 crc kubenswrapper[4865]: I0216 22:48:28.540862 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-8lph5" event={"ID":"46a7e06e-2a57-4c98-9cf9-36c417623bb7","Type":"ContainerStarted","Data":"8b86cf1f095aea245d91bf0b0877b8fa36c5f64976136f6e0e33168474951157"} Feb 16 22:48:28 crc kubenswrapper[4865]: I0216 22:48:28.541011 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-v8bqq" podStartSLOduration=128.540988751 podStartE2EDuration="2m8.540988751s" podCreationTimestamp="2026-02-16 22:46:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:48:28.527986774 +0000 UTC m=+148.851693735" watchObservedRunningTime="2026-02-16 22:48:28.540988751 +0000 UTC m=+148.864695712" Feb 16 22:48:28 crc kubenswrapper[4865]: I0216 22:48:28.545621 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d87nj\" (UID: \"0d54cdef-872b-4b15-ad66-92a5aa695143\") " pod="openshift-image-registry/image-registry-697d97f7c8-d87nj" Feb 16 22:48:28 crc 
kubenswrapper[4865]: E0216 22:48:28.546677 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 22:48:29.04665994 +0000 UTC m=+149.370366901 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d87nj" (UID: "0d54cdef-872b-4b15-ad66-92a5aa695143") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:48:28 crc kubenswrapper[4865]: I0216 22:48:28.569637 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521365-bs68x" event={"ID":"67ee4300-52d2-45bc-8420-9045db672f41","Type":"ContainerStarted","Data":"2f5c2b0d5e1f838e460763ffdc452e422d33756be3b13ab53d3c90a10140441f"} Feb 16 22:48:28 crc kubenswrapper[4865]: I0216 22:48:28.586487 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-b76ph" event={"ID":"715cd16c-2512-4885-ae7b-437fd61fcea2","Type":"ContainerStarted","Data":"ccf51db6e02db2ff5937f60c5e8c53c922d253e40c6e057968c7b9fff54218e8"} Feb 16 22:48:28 crc kubenswrapper[4865]: I0216 22:48:28.600627 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8mkpj" event={"ID":"f9e1b6e4-4f56-4dc1-a1fb-6040e6e34b12","Type":"ContainerStarted","Data":"36c5f63e447ca42033ff017aeb249e1553cbc09f691e10ed008e20f26bd55159"} Feb 16 22:48:28 crc kubenswrapper[4865]: I0216 22:48:28.601934 4865 patch_prober.go:28] interesting pod/router-default-5444994796-d2vf7 container/router namespace/openshift-ingress: Startup 
probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 22:48:28 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Feb 16 22:48:28 crc kubenswrapper[4865]: [+]process-running ok Feb 16 22:48:28 crc kubenswrapper[4865]: healthz check failed Feb 16 22:48:28 crc kubenswrapper[4865]: I0216 22:48:28.602068 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-d2vf7" podUID="ae87db3d-c73d-4e96-8cda-ca3ac846f9b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 22:48:28 crc kubenswrapper[4865]: I0216 22:48:28.602691 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-8lph5" podStartSLOduration=128.60266981 podStartE2EDuration="2m8.60266981s" podCreationTimestamp="2026-02-16 22:46:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:48:28.601669052 +0000 UTC m=+148.925376013" watchObservedRunningTime="2026-02-16 22:48:28.60266981 +0000 UTC m=+148.926376771" Feb 16 22:48:28 crc kubenswrapper[4865]: I0216 22:48:28.603514 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lwxck" podStartSLOduration=128.603508664 podStartE2EDuration="2m8.603508664s" podCreationTimestamp="2026-02-16 22:46:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:48:28.558234027 +0000 UTC m=+148.881940988" watchObservedRunningTime="2026-02-16 22:48:28.603508664 +0000 UTC m=+148.927215625" Feb 16 22:48:28 crc kubenswrapper[4865]: I0216 22:48:28.647116 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 22:48:28 crc kubenswrapper[4865]: E0216 22:48:28.648695 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 22:48:29.148672237 +0000 UTC m=+149.472379198 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:48:28 crc kubenswrapper[4865]: I0216 22:48:28.650941 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qq9vk" event={"ID":"09255158-b7b9-4a34-8fe3-0b7864ec5f11","Type":"ContainerStarted","Data":"a839fcf63e1517d07b79c3fe167dae499504c8bc2bf7a9ffb2a875d7591048e7"} Feb 16 22:48:28 crc kubenswrapper[4865]: I0216 22:48:28.660250 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8mkpj" podStartSLOduration=128.660211053 podStartE2EDuration="2m8.660211053s" podCreationTimestamp="2026-02-16 22:46:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:48:28.652865855 +0000 UTC m=+148.976572826" 
watchObservedRunningTime="2026-02-16 22:48:28.660211053 +0000 UTC m=+148.983918014" Feb 16 22:48:28 crc kubenswrapper[4865]: I0216 22:48:28.745189 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-m696v" event={"ID":"568e629f-a067-47f4-a6e9-ca18ca03e582","Type":"ContainerStarted","Data":"6aaca8525a351a52e361b49ef11d2b2d0828da2bfc53bd54d48f1deb106c32d2"} Feb 16 22:48:28 crc kubenswrapper[4865]: I0216 22:48:28.752693 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d87nj\" (UID: \"0d54cdef-872b-4b15-ad66-92a5aa695143\") " pod="openshift-image-registry/image-registry-697d97f7c8-d87nj" Feb 16 22:48:28 crc kubenswrapper[4865]: E0216 22:48:28.757045 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 22:48:29.257022332 +0000 UTC m=+149.580729293 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d87nj" (UID: "0d54cdef-872b-4b15-ad66-92a5aa695143") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:48:28 crc kubenswrapper[4865]: I0216 22:48:28.757741 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-b76ph" podStartSLOduration=128.757711061 podStartE2EDuration="2m8.757711061s" podCreationTimestamp="2026-02-16 22:46:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:48:28.746417174 +0000 UTC m=+149.070124135" watchObservedRunningTime="2026-02-16 22:48:28.757711061 +0000 UTC m=+149.081418022" Feb 16 22:48:28 crc kubenswrapper[4865]: I0216 22:48:28.787573 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29521365-bs68x" podStartSLOduration=128.787551153 podStartE2EDuration="2m8.787551153s" podCreationTimestamp="2026-02-16 22:46:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:48:28.783324013 +0000 UTC m=+149.107030974" watchObservedRunningTime="2026-02-16 22:48:28.787551153 +0000 UTC m=+149.111258114" Feb 16 22:48:28 crc kubenswrapper[4865]: I0216 22:48:28.802272 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f7dqg" event={"ID":"7d625868-f11c-4bcd-b1f0-dbe1d50d24ba","Type":"ContainerStarted","Data":"6e89d5300e1591b281093a639d6c5ee25458c14ac7548de8b4ec78fb68c95ee5"} 
Feb 16 22:48:28 crc kubenswrapper[4865]: I0216 22:48:28.805204 4865 patch_prober.go:28] interesting pod/downloads-7954f5f757-t624g container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Feb 16 22:48:28 crc kubenswrapper[4865]: I0216 22:48:28.805295 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t624g" podUID="852722bc-cd18-4344-b8d1-a01be5c6ea33" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Feb 16 22:48:28 crc kubenswrapper[4865]: I0216 22:48:28.835271 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zfxvb" Feb 16 22:48:28 crc kubenswrapper[4865]: I0216 22:48:28.846937 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-xrmqm" Feb 16 22:48:28 crc kubenswrapper[4865]: I0216 22:48:28.857567 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 22:48:28 crc kubenswrapper[4865]: E0216 22:48:28.858978 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 22:48:29.358945256 +0000 UTC m=+149.682652217 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 22:48:28 crc kubenswrapper[4865]: I0216 22:48:28.873259 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qq9vk" podStartSLOduration=128.873235329 podStartE2EDuration="2m8.873235329s" podCreationTimestamp="2026-02-16 22:46:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:48:28.834747613 +0000 UTC m=+149.158454574" watchObservedRunningTime="2026-02-16 22:48:28.873235329 +0000 UTC m=+149.196942290"
Feb 16 22:48:28 crc kubenswrapper[4865]: I0216 22:48:28.886072 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-m696v" podStartSLOduration=128.88602929 podStartE2EDuration="2m8.88602929s" podCreationTimestamp="2026-02-16 22:46:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:48:28.872240411 +0000 UTC m=+149.195947372" watchObservedRunningTime="2026-02-16 22:48:28.88602929 +0000 UTC m=+149.209736241"
Feb 16 22:48:28 crc kubenswrapper[4865]: I0216 22:48:28.965440 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d87nj\" (UID: \"0d54cdef-872b-4b15-ad66-92a5aa695143\") " pod="openshift-image-registry/image-registry-697d97f7c8-d87nj"
Feb 16 22:48:28 crc kubenswrapper[4865]: E0216 22:48:28.969004 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 22:48:29.468982169 +0000 UTC m=+149.792689130 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d87nj" (UID: "0d54cdef-872b-4b15-ad66-92a5aa695143") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 22:48:28 crc kubenswrapper[4865]: I0216 22:48:28.982222 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f7dqg" podStartSLOduration=128.982198392 podStartE2EDuration="2m8.982198392s" podCreationTimestamp="2026-02-16 22:46:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:48:28.90908074 +0000 UTC m=+149.232787691" watchObservedRunningTime="2026-02-16 22:48:28.982198392 +0000 UTC m=+149.305905353"
Feb 16 22:48:29 crc kubenswrapper[4865]: I0216 22:48:29.037146 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r57lx"]
Feb 16 22:48:29 crc kubenswrapper[4865]: I0216 22:48:29.069884 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 22:48:29 crc kubenswrapper[4865]: E0216 22:48:29.070210 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 22:48:29.570193553 +0000 UTC m=+149.893900514 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 22:48:29 crc kubenswrapper[4865]: I0216 22:48:29.176062 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d87nj\" (UID: \"0d54cdef-872b-4b15-ad66-92a5aa695143\") " pod="openshift-image-registry/image-registry-697d97f7c8-d87nj"
Feb 16 22:48:29 crc kubenswrapper[4865]: E0216 22:48:29.176583 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 22:48:29.676564503 +0000 UTC m=+150.000271464 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d87nj" (UID: "0d54cdef-872b-4b15-ad66-92a5aa695143") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 22:48:29 crc kubenswrapper[4865]: I0216 22:48:29.276925 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 22:48:29 crc kubenswrapper[4865]: E0216 22:48:29.277420 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 22:48:29.777402796 +0000 UTC m=+150.101109757 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 22:48:29 crc kubenswrapper[4865]: I0216 22:48:29.379013 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d87nj\" (UID: \"0d54cdef-872b-4b15-ad66-92a5aa695143\") " pod="openshift-image-registry/image-registry-697d97f7c8-d87nj"
Feb 16 22:48:29 crc kubenswrapper[4865]: E0216 22:48:29.379721 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 22:48:29.879707401 +0000 UTC m=+150.203414362 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d87nj" (UID: "0d54cdef-872b-4b15-ad66-92a5aa695143") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 22:48:29 crc kubenswrapper[4865]: I0216 22:48:29.480970 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 22:48:29 crc kubenswrapper[4865]: E0216 22:48:29.481212 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 22:48:29.981181173 +0000 UTC m=+150.304888124 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 22:48:29 crc kubenswrapper[4865]: I0216 22:48:29.481385 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d87nj\" (UID: \"0d54cdef-872b-4b15-ad66-92a5aa695143\") " pod="openshift-image-registry/image-registry-697d97f7c8-d87nj"
Feb 16 22:48:29 crc kubenswrapper[4865]: E0216 22:48:29.482688 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 22:48:29.982671235 +0000 UTC m=+150.306378196 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d87nj" (UID: "0d54cdef-872b-4b15-ad66-92a5aa695143") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 22:48:29 crc kubenswrapper[4865]: I0216 22:48:29.537087 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pfrgn"]
Feb 16 22:48:29 crc kubenswrapper[4865]: I0216 22:48:29.543133 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pfrgn"
Feb 16 22:48:29 crc kubenswrapper[4865]: I0216 22:48:29.548150 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pfrgn"]
Feb 16 22:48:29 crc kubenswrapper[4865]: I0216 22:48:29.548657 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Feb 16 22:48:29 crc kubenswrapper[4865]: I0216 22:48:29.583948 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 22:48:29 crc kubenswrapper[4865]: I0216 22:48:29.584340 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jm6f8\" (UniqueName: \"kubernetes.io/projected/b98a89b4-7f44-411e-a2e3-b260ad781e89-kube-api-access-jm6f8\") pod \"redhat-marketplace-pfrgn\" (UID: \"b98a89b4-7f44-411e-a2e3-b260ad781e89\") " pod="openshift-marketplace/redhat-marketplace-pfrgn"
Feb 16 22:48:29 crc kubenswrapper[4865]: I0216 22:48:29.584399 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b98a89b4-7f44-411e-a2e3-b260ad781e89-catalog-content\") pod \"redhat-marketplace-pfrgn\" (UID: \"b98a89b4-7f44-411e-a2e3-b260ad781e89\") " pod="openshift-marketplace/redhat-marketplace-pfrgn"
Feb 16 22:48:29 crc kubenswrapper[4865]: I0216 22:48:29.584484 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b98a89b4-7f44-411e-a2e3-b260ad781e89-utilities\") pod \"redhat-marketplace-pfrgn\" (UID: \"b98a89b4-7f44-411e-a2e3-b260ad781e89\") " pod="openshift-marketplace/redhat-marketplace-pfrgn"
Feb 16 22:48:29 crc kubenswrapper[4865]: E0216 22:48:29.584643 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 22:48:30.084612559 +0000 UTC m=+150.408319520 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 22:48:29 crc kubenswrapper[4865]: I0216 22:48:29.602619 4865 patch_prober.go:28] interesting pod/router-default-5444994796-d2vf7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 22:48:29 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld
Feb 16 22:48:29 crc kubenswrapper[4865]: [+]process-running ok
Feb 16 22:48:29 crc kubenswrapper[4865]: healthz check failed
Feb 16 22:48:29 crc kubenswrapper[4865]: I0216 22:48:29.602687 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-d2vf7" podUID="ae87db3d-c73d-4e96-8cda-ca3ac846f9b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 22:48:29 crc kubenswrapper[4865]: I0216 22:48:29.686815 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jm6f8\" (UniqueName: \"kubernetes.io/projected/b98a89b4-7f44-411e-a2e3-b260ad781e89-kube-api-access-jm6f8\") pod \"redhat-marketplace-pfrgn\" (UID: \"b98a89b4-7f44-411e-a2e3-b260ad781e89\") " pod="openshift-marketplace/redhat-marketplace-pfrgn"
Feb 16 22:48:29 crc kubenswrapper[4865]: I0216 22:48:29.686891 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b98a89b4-7f44-411e-a2e3-b260ad781e89-catalog-content\") pod \"redhat-marketplace-pfrgn\" (UID: \"b98a89b4-7f44-411e-a2e3-b260ad781e89\") " pod="openshift-marketplace/redhat-marketplace-pfrgn"
Feb 16 22:48:29 crc kubenswrapper[4865]: I0216 22:48:29.686934 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d87nj\" (UID: \"0d54cdef-872b-4b15-ad66-92a5aa695143\") " pod="openshift-image-registry/image-registry-697d97f7c8-d87nj"
Feb 16 22:48:29 crc kubenswrapper[4865]: I0216 22:48:29.686965 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b98a89b4-7f44-411e-a2e3-b260ad781e89-utilities\") pod \"redhat-marketplace-pfrgn\" (UID: \"b98a89b4-7f44-411e-a2e3-b260ad781e89\") " pod="openshift-marketplace/redhat-marketplace-pfrgn"
Feb 16 22:48:29 crc kubenswrapper[4865]: I0216 22:48:29.687752 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b98a89b4-7f44-411e-a2e3-b260ad781e89-utilities\") pod \"redhat-marketplace-pfrgn\" (UID: \"b98a89b4-7f44-411e-a2e3-b260ad781e89\") " pod="openshift-marketplace/redhat-marketplace-pfrgn"
Feb 16 22:48:29 crc kubenswrapper[4865]: I0216 22:48:29.688536 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b98a89b4-7f44-411e-a2e3-b260ad781e89-catalog-content\") pod \"redhat-marketplace-pfrgn\" (UID: \"b98a89b4-7f44-411e-a2e3-b260ad781e89\") " pod="openshift-marketplace/redhat-marketplace-pfrgn"
Feb 16 22:48:29 crc kubenswrapper[4865]: E0216 22:48:29.688556 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 22:48:30.18853822 +0000 UTC m=+150.512245181 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d87nj" (UID: "0d54cdef-872b-4b15-ad66-92a5aa695143") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 22:48:29 crc kubenswrapper[4865]: I0216 22:48:29.693895 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kk986"]
Feb 16 22:48:29 crc kubenswrapper[4865]: I0216 22:48:29.720613 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jm6f8\" (UniqueName: \"kubernetes.io/projected/b98a89b4-7f44-411e-a2e3-b260ad781e89-kube-api-access-jm6f8\") pod \"redhat-marketplace-pfrgn\" (UID: \"b98a89b4-7f44-411e-a2e3-b260ad781e89\") " pod="openshift-marketplace/redhat-marketplace-pfrgn"
Feb 16 22:48:29 crc kubenswrapper[4865]: W0216 22:48:29.779147 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-70377104fc4c46d155e2d0514476724bfd3ee62617cb3617d27a1de3d6b771f5 WatchSource:0}: Error finding container 70377104fc4c46d155e2d0514476724bfd3ee62617cb3617d27a1de3d6b771f5: Status 404 returned error can't find the container with id 70377104fc4c46d155e2d0514476724bfd3ee62617cb3617d27a1de3d6b771f5
Feb 16 22:48:29 crc kubenswrapper[4865]: I0216 22:48:29.787910 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 22:48:29 crc kubenswrapper[4865]: E0216 22:48:29.788420 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 22:48:30.288399616 +0000 UTC m=+150.612106577 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 22:48:29 crc kubenswrapper[4865]: I0216 22:48:29.794410 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6zkrs"
Feb 16 22:48:29 crc kubenswrapper[4865]: I0216 22:48:29.810683 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"70377104fc4c46d155e2d0514476724bfd3ee62617cb3617d27a1de3d6b771f5"}
Feb 16 22:48:29 crc kubenswrapper[4865]: I0216 22:48:29.836602 4865 generic.go:334] "Generic (PLEG): container finished" podID="63128d09-9bb2-40e2-8c5e-d16c8f54d2c2" containerID="09ea651676573cba4d06506ef667c068fc0efca0a94abca3587377995422fd41" exitCode=0
Feb 16 22:48:29 crc kubenswrapper[4865]: I0216 22:48:29.836778 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r57lx" event={"ID":"63128d09-9bb2-40e2-8c5e-d16c8f54d2c2","Type":"ContainerDied","Data":"09ea651676573cba4d06506ef667c068fc0efca0a94abca3587377995422fd41"}
Feb 16 22:48:29 crc kubenswrapper[4865]: I0216 22:48:29.836816 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r57lx" event={"ID":"63128d09-9bb2-40e2-8c5e-d16c8f54d2c2","Type":"ContainerStarted","Data":"433142a75966b8240cad702dbd6eee510178c2c9a6676280df26ec50367cfc37"}
Feb 16 22:48:29 crc kubenswrapper[4865]: I0216 22:48:29.862980 4865 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 16 22:48:29 crc kubenswrapper[4865]: I0216 22:48:29.889089 4865 generic.go:334] "Generic (PLEG): container finished" podID="f6cfa25f-5974-4b2e-9df0-b0e98112b561" containerID="51748eacd830388a0a57d7be4bf8a8ee5524705d11a62a25178bfcd7ae9f7d60" exitCode=0
Feb 16 22:48:29 crc kubenswrapper[4865]: I0216 22:48:29.889245 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s2z5v" event={"ID":"f6cfa25f-5974-4b2e-9df0-b0e98112b561","Type":"ContainerDied","Data":"51748eacd830388a0a57d7be4bf8a8ee5524705d11a62a25178bfcd7ae9f7d60"}
Feb 16 22:48:29 crc kubenswrapper[4865]: I0216 22:48:29.889476 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s2z5v" event={"ID":"f6cfa25f-5974-4b2e-9df0-b0e98112b561","Type":"ContainerStarted","Data":"11cbe6accc37a7e82a7effa8817560222487bfec3e6354cdafa897929c771b36"}
Feb 16 22:48:29 crc kubenswrapper[4865]: I0216 22:48:29.895961 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d87nj\" (UID: \"0d54cdef-872b-4b15-ad66-92a5aa695143\") " pod="openshift-image-registry/image-registry-697d97f7c8-d87nj"
Feb 16 22:48:29 crc kubenswrapper[4865]: E0216 22:48:29.896715 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 22:48:30.39669654 +0000 UTC m=+150.720403501 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d87nj" (UID: "0d54cdef-872b-4b15-ad66-92a5aa695143") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 22:48:29 crc kubenswrapper[4865]: I0216 22:48:29.898422 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pfrgn"
Feb 16 22:48:29 crc kubenswrapper[4865]: I0216 22:48:29.903943 4865 generic.go:334] "Generic (PLEG): container finished" podID="67ee4300-52d2-45bc-8420-9045db672f41" containerID="2f5c2b0d5e1f838e460763ffdc452e422d33756be3b13ab53d3c90a10140441f" exitCode=0
Feb 16 22:48:29 crc kubenswrapper[4865]: I0216 22:48:29.904230 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521365-bs68x" event={"ID":"67ee4300-52d2-45bc-8420-9045db672f41","Type":"ContainerDied","Data":"2f5c2b0d5e1f838e460763ffdc452e422d33756be3b13ab53d3c90a10140441f"}
Feb 16 22:48:29 crc kubenswrapper[4865]: I0216 22:48:29.943799 4865 generic.go:334] "Generic (PLEG): container finished" podID="5196bfb6-4d27-4d41-8310-8efb2b8997bd" containerID="7d6c51a0389d4713234b067d92de038ac0dbb4029b838d024cc9f0ff0c78f9fe" exitCode=0
Feb 16 22:48:29 crc kubenswrapper[4865]: I0216 22:48:29.943957 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d2l24" event={"ID":"5196bfb6-4d27-4d41-8310-8efb2b8997bd","Type":"ContainerDied","Data":"7d6c51a0389d4713234b067d92de038ac0dbb4029b838d024cc9f0ff0c78f9fe"}
Feb 16 22:48:29 crc kubenswrapper[4865]: I0216 22:48:29.948700 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7kqlb"]
Feb 16 22:48:29 crc kubenswrapper[4865]: I0216 22:48:29.950170 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7kqlb"
Feb 16 22:48:29 crc kubenswrapper[4865]: I0216 22:48:29.966373 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7kqlb"]
Feb 16 22:48:30 crc kubenswrapper[4865]: I0216 22:48:30.004076 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"9d816ba79260efc18a311134b68efb790ecccdfa7312ce278179ed03bb1dbd43"}
Feb 16 22:48:30 crc kubenswrapper[4865]: I0216 22:48:30.004750 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 22:48:30 crc kubenswrapper[4865]: I0216 22:48:30.005070 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vj7fv\" (UniqueName: \"kubernetes.io/projected/d04001fb-b937-477f-b495-17f20e7cf07b-kube-api-access-vj7fv\") pod \"redhat-marketplace-7kqlb\" (UID: \"d04001fb-b937-477f-b495-17f20e7cf07b\") " pod="openshift-marketplace/redhat-marketplace-7kqlb"
Feb 16 22:48:30 crc kubenswrapper[4865]: I0216 22:48:30.005467 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d04001fb-b937-477f-b495-17f20e7cf07b-utilities\") pod \"redhat-marketplace-7kqlb\" (UID: \"d04001fb-b937-477f-b495-17f20e7cf07b\") " pod="openshift-marketplace/redhat-marketplace-7kqlb"
Feb 16 22:48:30 crc kubenswrapper[4865]: I0216 22:48:30.005526 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d04001fb-b937-477f-b495-17f20e7cf07b-catalog-content\") pod \"redhat-marketplace-7kqlb\" (UID: \"d04001fb-b937-477f-b495-17f20e7cf07b\") " pod="openshift-marketplace/redhat-marketplace-7kqlb"
Feb 16 22:48:30 crc kubenswrapper[4865]: E0216 22:48:30.005638 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 22:48:30.505619932 +0000 UTC m=+150.829326893 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 22:48:30 crc kubenswrapper[4865]: I0216 22:48:30.045193 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-k58w4" event={"ID":"29de9344-e059-4080-8c9f-9d07027204f7","Type":"ContainerStarted","Data":"1d6dbb29d47c8f274ceeb3c1ecb078099131739835a80e0e1d088c762b703075"}
Feb 16 22:48:30 crc kubenswrapper[4865]: I0216 22:48:30.053911 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kk986" event={"ID":"8b55c2ce-ae41-4b11-925e-b6085f288345","Type":"ContainerStarted","Data":"faa37072a814f3de54c35d0088cf655c9b45658a1775adb6fa19c29b852613d8"}
Feb 16 22:48:30 crc kubenswrapper[4865]: I0216 22:48:30.074987 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"b2089310736e4723498b8f0b7b2382cc47a1b11f21157f3535d9b1563dbc79c7"}
Feb 16 22:48:30 crc kubenswrapper[4865]: I0216 22:48:30.077035 4865 patch_prober.go:28] interesting pod/downloads-7954f5f757-t624g container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body=
Feb 16 22:48:30 crc kubenswrapper[4865]: I0216 22:48:30.077077 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t624g" podUID="852722bc-cd18-4344-b8d1-a01be5c6ea33" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused"
Feb 16 22:48:30 crc kubenswrapper[4865]: I0216 22:48:30.077035 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 22:48:30 crc kubenswrapper[4865]: I0216 22:48:30.108199 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vj7fv\" (UniqueName: \"kubernetes.io/projected/d04001fb-b937-477f-b495-17f20e7cf07b-kube-api-access-vj7fv\") pod \"redhat-marketplace-7kqlb\" (UID: \"d04001fb-b937-477f-b495-17f20e7cf07b\") " pod="openshift-marketplace/redhat-marketplace-7kqlb"
Feb 16 22:48:30 crc kubenswrapper[4865]: I0216 22:48:30.108349 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d04001fb-b937-477f-b495-17f20e7cf07b-utilities\") pod \"redhat-marketplace-7kqlb\" (UID: \"d04001fb-b937-477f-b495-17f20e7cf07b\") " pod="openshift-marketplace/redhat-marketplace-7kqlb"
Feb 16 22:48:30 crc kubenswrapper[4865]: I0216 22:48:30.108547 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d87nj\" (UID: \"0d54cdef-872b-4b15-ad66-92a5aa695143\") " pod="openshift-image-registry/image-registry-697d97f7c8-d87nj"
Feb 16 22:48:30 crc kubenswrapper[4865]: I0216 22:48:30.108609 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d04001fb-b937-477f-b495-17f20e7cf07b-catalog-content\") pod \"redhat-marketplace-7kqlb\" (UID: \"d04001fb-b937-477f-b495-17f20e7cf07b\") " pod="openshift-marketplace/redhat-marketplace-7kqlb"
Feb 16 22:48:30 crc kubenswrapper[4865]: I0216 22:48:30.110676 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d04001fb-b937-477f-b495-17f20e7cf07b-utilities\") pod \"redhat-marketplace-7kqlb\" (UID: \"d04001fb-b937-477f-b495-17f20e7cf07b\") " pod="openshift-marketplace/redhat-marketplace-7kqlb"
Feb 16 22:48:30 crc kubenswrapper[4865]: E0216 22:48:30.111130 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 22:48:30.611111887 +0000 UTC m=+150.934818848 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d87nj" (UID: "0d54cdef-872b-4b15-ad66-92a5aa695143") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 22:48:30 crc kubenswrapper[4865]: I0216 22:48:30.114509 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d04001fb-b937-477f-b495-17f20e7cf07b-catalog-content\") pod \"redhat-marketplace-7kqlb\" (UID: \"d04001fb-b937-477f-b495-17f20e7cf07b\") " pod="openshift-marketplace/redhat-marketplace-7kqlb"
Feb 16 22:48:30 crc kubenswrapper[4865]: I0216 22:48:30.146240 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vj7fv\" (UniqueName: \"kubernetes.io/projected/d04001fb-b937-477f-b495-17f20e7cf07b-kube-api-access-vj7fv\") pod \"redhat-marketplace-7kqlb\" (UID: \"d04001fb-b937-477f-b495-17f20e7cf07b\") " pod="openshift-marketplace/redhat-marketplace-7kqlb"
Feb 16 22:48:30 crc kubenswrapper[4865]: I0216 22:48:30.211822 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 22:48:30 crc kubenswrapper[4865]: E0216 22:48:30.212635 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 22:48:30.712619159 +0000 UTC m=+151.036326120 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 22:48:30 crc kubenswrapper[4865]: I0216 22:48:30.231136 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-v8bqq"
Feb 16 22:48:30 crc kubenswrapper[4865]: I0216 22:48:30.281605 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7kqlb"
Feb 16 22:48:30 crc kubenswrapper[4865]: I0216 22:48:30.313737 4865 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Feb 16 22:48:30 crc kubenswrapper[4865]: I0216 22:48:30.322725 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d87nj\" (UID: \"0d54cdef-872b-4b15-ad66-92a5aa695143\") " pod="openshift-image-registry/image-registry-697d97f7c8-d87nj"
Feb 16 22:48:30 crc kubenswrapper[4865]: E0216 22:48:30.323299 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 22:48:30.823260589 +0000 UTC m=+151.146967540 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d87nj" (UID: "0d54cdef-872b-4b15-ad66-92a5aa695143") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 22:48:30 crc kubenswrapper[4865]: I0216 22:48:30.386046 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pfrgn"]
Feb 16 22:48:30 crc kubenswrapper[4865]: I0216 22:48:30.423808 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 22:48:30 crc kubenswrapper[4865]: E0216 22:48:30.424023 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 22:48:30.92398723 +0000 UTC m=+151.247694191 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:48:30 crc kubenswrapper[4865]: I0216 22:48:30.424376 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d87nj\" (UID: \"0d54cdef-872b-4b15-ad66-92a5aa695143\") " pod="openshift-image-registry/image-registry-697d97f7c8-d87nj" Feb 16 22:48:30 crc kubenswrapper[4865]: E0216 22:48:30.424727 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 22:48:30.92471003 +0000 UTC m=+151.248416991 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d87nj" (UID: "0d54cdef-872b-4b15-ad66-92a5aa695143") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:48:30 crc kubenswrapper[4865]: I0216 22:48:30.530726 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 22:48:30 crc kubenswrapper[4865]: E0216 22:48:30.531612 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 22:48:31.031582214 +0000 UTC m=+151.355289175 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:48:30 crc kubenswrapper[4865]: I0216 22:48:30.596559 4865 patch_prober.go:28] interesting pod/router-default-5444994796-d2vf7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 22:48:30 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Feb 16 22:48:30 crc kubenswrapper[4865]: [+]process-running ok Feb 16 22:48:30 crc kubenswrapper[4865]: healthz check failed Feb 16 22:48:30 crc kubenswrapper[4865]: I0216 22:48:30.597565 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-d2vf7" podUID="ae87db3d-c73d-4e96-8cda-ca3ac846f9b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 22:48:30 crc kubenswrapper[4865]: I0216 22:48:30.635244 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d87nj\" (UID: \"0d54cdef-872b-4b15-ad66-92a5aa695143\") " pod="openshift-image-registry/image-registry-697d97f7c8-d87nj" Feb 16 22:48:30 crc kubenswrapper[4865]: E0216 22:48:30.635567 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-16 22:48:31.135554506 +0000 UTC m=+151.459261467 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-d87nj" (UID: "0d54cdef-872b-4b15-ad66-92a5aa695143") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 22:48:30 crc kubenswrapper[4865]: I0216 22:48:30.642593 4865 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-16T22:48:30.313766362Z","Handler":null,"Name":""} Feb 16 22:48:30 crc kubenswrapper[4865]: I0216 22:48:30.646041 4865 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 16 22:48:30 crc kubenswrapper[4865]: I0216 22:48:30.646077 4865 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 16 22:48:30 crc kubenswrapper[4865]: I0216 22:48:30.667974 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7kqlb"] Feb 16 22:48:30 crc kubenswrapper[4865]: I0216 22:48:30.722705 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fsmg7"] Feb 16 22:48:30 crc kubenswrapper[4865]: I0216 22:48:30.724162 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fsmg7" Feb 16 22:48:30 crc kubenswrapper[4865]: I0216 22:48:30.727742 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 16 22:48:30 crc kubenswrapper[4865]: I0216 22:48:30.737706 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 22:48:30 crc kubenswrapper[4865]: I0216 22:48:30.748458 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 22:48:30 crc kubenswrapper[4865]: I0216 22:48:30.749995 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fsmg7"] Feb 16 22:48:30 crc kubenswrapper[4865]: I0216 22:48:30.839299 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69spx\" (UniqueName: \"kubernetes.io/projected/bea6b458-5aaa-4764-9f82-24ceff943498-kube-api-access-69spx\") pod \"redhat-operators-fsmg7\" (UID: \"bea6b458-5aaa-4764-9f82-24ceff943498\") " pod="openshift-marketplace/redhat-operators-fsmg7" Feb 16 22:48:30 crc kubenswrapper[4865]: I0216 22:48:30.839387 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d87nj\" (UID: \"0d54cdef-872b-4b15-ad66-92a5aa695143\") " pod="openshift-image-registry/image-registry-697d97f7c8-d87nj" Feb 16 22:48:30 crc kubenswrapper[4865]: I0216 22:48:30.839429 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bea6b458-5aaa-4764-9f82-24ceff943498-catalog-content\") pod \"redhat-operators-fsmg7\" (UID: \"bea6b458-5aaa-4764-9f82-24ceff943498\") " pod="openshift-marketplace/redhat-operators-fsmg7" Feb 16 22:48:30 crc kubenswrapper[4865]: I0216 22:48:30.839534 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bea6b458-5aaa-4764-9f82-24ceff943498-utilities\") pod \"redhat-operators-fsmg7\" (UID: \"bea6b458-5aaa-4764-9f82-24ceff943498\") " pod="openshift-marketplace/redhat-operators-fsmg7" Feb 16 22:48:30 crc kubenswrapper[4865]: I0216 22:48:30.847259 4865 csi_attacher.go:380] 
kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 22:48:30 crc kubenswrapper[4865]: I0216 22:48:30.847354 4865 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d87nj\" (UID: \"0d54cdef-872b-4b15-ad66-92a5aa695143\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-d87nj" Feb 16 22:48:30 crc kubenswrapper[4865]: I0216 22:48:30.891674 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-d87nj\" (UID: \"0d54cdef-872b-4b15-ad66-92a5aa695143\") " pod="openshift-image-registry/image-registry-697d97f7c8-d87nj" Feb 16 22:48:30 crc kubenswrapper[4865]: I0216 22:48:30.945071 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69spx\" (UniqueName: \"kubernetes.io/projected/bea6b458-5aaa-4764-9f82-24ceff943498-kube-api-access-69spx\") pod \"redhat-operators-fsmg7\" (UID: \"bea6b458-5aaa-4764-9f82-24ceff943498\") " pod="openshift-marketplace/redhat-operators-fsmg7" Feb 16 22:48:30 crc kubenswrapper[4865]: I0216 22:48:30.945153 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bea6b458-5aaa-4764-9f82-24ceff943498-catalog-content\") pod \"redhat-operators-fsmg7\" (UID: \"bea6b458-5aaa-4764-9f82-24ceff943498\") " pod="openshift-marketplace/redhat-operators-fsmg7" Feb 16 22:48:30 crc 
kubenswrapper[4865]: I0216 22:48:30.945244 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bea6b458-5aaa-4764-9f82-24ceff943498-utilities\") pod \"redhat-operators-fsmg7\" (UID: \"bea6b458-5aaa-4764-9f82-24ceff943498\") " pod="openshift-marketplace/redhat-operators-fsmg7" Feb 16 22:48:30 crc kubenswrapper[4865]: I0216 22:48:30.946473 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bea6b458-5aaa-4764-9f82-24ceff943498-utilities\") pod \"redhat-operators-fsmg7\" (UID: \"bea6b458-5aaa-4764-9f82-24ceff943498\") " pod="openshift-marketplace/redhat-operators-fsmg7" Feb 16 22:48:30 crc kubenswrapper[4865]: I0216 22:48:30.946717 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bea6b458-5aaa-4764-9f82-24ceff943498-catalog-content\") pod \"redhat-operators-fsmg7\" (UID: \"bea6b458-5aaa-4764-9f82-24ceff943498\") " pod="openshift-marketplace/redhat-operators-fsmg7" Feb 16 22:48:30 crc kubenswrapper[4865]: I0216 22:48:30.969254 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69spx\" (UniqueName: \"kubernetes.io/projected/bea6b458-5aaa-4764-9f82-24ceff943498-kube-api-access-69spx\") pod \"redhat-operators-fsmg7\" (UID: \"bea6b458-5aaa-4764-9f82-24ceff943498\") " pod="openshift-marketplace/redhat-operators-fsmg7" Feb 16 22:48:31 crc kubenswrapper[4865]: I0216 22:48:31.005040 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-d87nj" Feb 16 22:48:31 crc kubenswrapper[4865]: I0216 22:48:31.008968 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-89cf9" Feb 16 22:48:31 crc kubenswrapper[4865]: I0216 22:48:31.014516 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-89cf9" Feb 16 22:48:31 crc kubenswrapper[4865]: I0216 22:48:31.056738 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fsmg7" Feb 16 22:48:31 crc kubenswrapper[4865]: I0216 22:48:31.142732 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-5m56v" Feb 16 22:48:31 crc kubenswrapper[4865]: I0216 22:48:31.142775 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-5m56v" Feb 16 22:48:31 crc kubenswrapper[4865]: I0216 22:48:31.142790 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xsgzz"] Feb 16 22:48:31 crc kubenswrapper[4865]: I0216 22:48:31.144214 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xsgzz" Feb 16 22:48:31 crc kubenswrapper[4865]: I0216 22:48:31.144341 4865 generic.go:334] "Generic (PLEG): container finished" podID="d04001fb-b937-477f-b495-17f20e7cf07b" containerID="caf8affdf1b3fd29d41875ab20a7dc3aa456f0d8a8fcf8103fd2f85c227afacb" exitCode=0 Feb 16 22:48:31 crc kubenswrapper[4865]: I0216 22:48:31.144814 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7kqlb" event={"ID":"d04001fb-b937-477f-b495-17f20e7cf07b","Type":"ContainerDied","Data":"caf8affdf1b3fd29d41875ab20a7dc3aa456f0d8a8fcf8103fd2f85c227afacb"} Feb 16 22:48:31 crc kubenswrapper[4865]: I0216 22:48:31.144862 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7kqlb" event={"ID":"d04001fb-b937-477f-b495-17f20e7cf07b","Type":"ContainerStarted","Data":"2e64714a5ad44648c9bfc5facc358e38fd8b39879a496cd66c35171ea86b4b0c"} Feb 16 22:48:31 crc kubenswrapper[4865]: I0216 22:48:31.167758 4865 patch_prober.go:28] interesting pod/console-f9d7485db-5m56v container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.8:8443/health\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Feb 16 22:48:31 crc kubenswrapper[4865]: I0216 22:48:31.167820 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-5m56v" podUID="566ba776-350d-4994-948d-bbbf37ae5ddc" containerName="console" probeResult="failure" output="Get \"https://10.217.0.8:8443/health\": dial tcp 10.217.0.8:8443: connect: connection refused" Feb 16 22:48:31 crc kubenswrapper[4865]: I0216 22:48:31.176385 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xsgzz"] Feb 16 22:48:31 crc kubenswrapper[4865]: I0216 22:48:31.197287 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-k58w4" 
event={"ID":"29de9344-e059-4080-8c9f-9d07027204f7","Type":"ContainerStarted","Data":"f8573fb580034d0e6bab8d3c3b652a50161773138de2315e2971bb7883492ec9"} Feb 16 22:48:31 crc kubenswrapper[4865]: I0216 22:48:31.197336 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-k58w4" event={"ID":"29de9344-e059-4080-8c9f-9d07027204f7","Type":"ContainerStarted","Data":"b5d9fd98936afa18d4bd940063b75d492eb6d2fec5407384c46eb74dbef36e9c"} Feb 16 22:48:31 crc kubenswrapper[4865]: I0216 22:48:31.218147 4865 generic.go:334] "Generic (PLEG): container finished" podID="8b55c2ce-ae41-4b11-925e-b6085f288345" containerID="fbcb3ee458261beb8c5e6ab8c23377fcf0c3d52e4bd11bb4b41ab988fa270e1b" exitCode=0 Feb 16 22:48:31 crc kubenswrapper[4865]: I0216 22:48:31.218234 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kk986" event={"ID":"8b55c2ce-ae41-4b11-925e-b6085f288345","Type":"ContainerDied","Data":"fbcb3ee458261beb8c5e6ab8c23377fcf0c3d52e4bd11bb4b41ab988fa270e1b"} Feb 16 22:48:31 crc kubenswrapper[4865]: I0216 22:48:31.236262 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-k58w4" podStartSLOduration=12.236226705 podStartE2EDuration="12.236226705s" podCreationTimestamp="2026-02-16 22:48:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:48:31.231766739 +0000 UTC m=+151.555473700" watchObservedRunningTime="2026-02-16 22:48:31.236226705 +0000 UTC m=+151.559933666" Feb 16 22:48:31 crc kubenswrapper[4865]: I0216 22:48:31.252448 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdnmj\" (UniqueName: \"kubernetes.io/projected/04b1951f-573b-4bf5-808d-9834250021b6-kube-api-access-rdnmj\") pod \"redhat-operators-xsgzz\" (UID: \"04b1951f-573b-4bf5-808d-9834250021b6\") " 
pod="openshift-marketplace/redhat-operators-xsgzz" Feb 16 22:48:31 crc kubenswrapper[4865]: I0216 22:48:31.252496 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04b1951f-573b-4bf5-808d-9834250021b6-utilities\") pod \"redhat-operators-xsgzz\" (UID: \"04b1951f-573b-4bf5-808d-9834250021b6\") " pod="openshift-marketplace/redhat-operators-xsgzz" Feb 16 22:48:31 crc kubenswrapper[4865]: I0216 22:48:31.252531 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04b1951f-573b-4bf5-808d-9834250021b6-catalog-content\") pod \"redhat-operators-xsgzz\" (UID: \"04b1951f-573b-4bf5-808d-9834250021b6\") " pod="openshift-marketplace/redhat-operators-xsgzz" Feb 16 22:48:31 crc kubenswrapper[4865]: I0216 22:48:31.279589 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"ec8dd52d4cd708d91f3f5f545f284baf5964bca14a751ddf79b94540ec383852"} Feb 16 22:48:31 crc kubenswrapper[4865]: I0216 22:48:31.290887 4865 generic.go:334] "Generic (PLEG): container finished" podID="b98a89b4-7f44-411e-a2e3-b260ad781e89" containerID="933920988f26799d4c4ee12578b2d67f36a4d4106d11b12716d858e4adb3251a" exitCode=0 Feb 16 22:48:31 crc kubenswrapper[4865]: I0216 22:48:31.291017 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pfrgn" event={"ID":"b98a89b4-7f44-411e-a2e3-b260ad781e89","Type":"ContainerDied","Data":"933920988f26799d4c4ee12578b2d67f36a4d4106d11b12716d858e4adb3251a"} Feb 16 22:48:31 crc kubenswrapper[4865]: I0216 22:48:31.291053 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pfrgn" 
event={"ID":"b98a89b4-7f44-411e-a2e3-b260ad781e89","Type":"ContainerStarted","Data":"d887f595c067d0c767517e77303624f5a02e70e51ea053acfe6d4cbc0f2c1260"} Feb 16 22:48:31 crc kubenswrapper[4865]: I0216 22:48:31.299264 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"3be68acc2308d9b67712da27aa40328134a2be3d141249b8896dc541ca10da1e"} Feb 16 22:48:31 crc kubenswrapper[4865]: I0216 22:48:31.307683 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"7a960c13d142ddb06b650ff289360d86b9f650a552788e7e250efda5a8ae5f25"} Feb 16 22:48:31 crc kubenswrapper[4865]: I0216 22:48:31.356750 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdnmj\" (UniqueName: \"kubernetes.io/projected/04b1951f-573b-4bf5-808d-9834250021b6-kube-api-access-rdnmj\") pod \"redhat-operators-xsgzz\" (UID: \"04b1951f-573b-4bf5-808d-9834250021b6\") " pod="openshift-marketplace/redhat-operators-xsgzz" Feb 16 22:48:31 crc kubenswrapper[4865]: I0216 22:48:31.356800 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04b1951f-573b-4bf5-808d-9834250021b6-utilities\") pod \"redhat-operators-xsgzz\" (UID: \"04b1951f-573b-4bf5-808d-9834250021b6\") " pod="openshift-marketplace/redhat-operators-xsgzz" Feb 16 22:48:31 crc kubenswrapper[4865]: I0216 22:48:31.356825 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04b1951f-573b-4bf5-808d-9834250021b6-catalog-content\") pod \"redhat-operators-xsgzz\" (UID: \"04b1951f-573b-4bf5-808d-9834250021b6\") " 
pod="openshift-marketplace/redhat-operators-xsgzz" Feb 16 22:48:31 crc kubenswrapper[4865]: I0216 22:48:31.363720 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04b1951f-573b-4bf5-808d-9834250021b6-catalog-content\") pod \"redhat-operators-xsgzz\" (UID: \"04b1951f-573b-4bf5-808d-9834250021b6\") " pod="openshift-marketplace/redhat-operators-xsgzz" Feb 16 22:48:31 crc kubenswrapper[4865]: I0216 22:48:31.364223 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04b1951f-573b-4bf5-808d-9834250021b6-utilities\") pod \"redhat-operators-xsgzz\" (UID: \"04b1951f-573b-4bf5-808d-9834250021b6\") " pod="openshift-marketplace/redhat-operators-xsgzz" Feb 16 22:48:31 crc kubenswrapper[4865]: I0216 22:48:31.423636 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdnmj\" (UniqueName: \"kubernetes.io/projected/04b1951f-573b-4bf5-808d-9834250021b6-kube-api-access-rdnmj\") pod \"redhat-operators-xsgzz\" (UID: \"04b1951f-573b-4bf5-808d-9834250021b6\") " pod="openshift-marketplace/redhat-operators-xsgzz" Feb 16 22:48:31 crc kubenswrapper[4865]: I0216 22:48:31.533934 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xsgzz" Feb 16 22:48:31 crc kubenswrapper[4865]: I0216 22:48:31.612910 4865 patch_prober.go:28] interesting pod/router-default-5444994796-d2vf7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 22:48:31 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Feb 16 22:48:31 crc kubenswrapper[4865]: [+]process-running ok Feb 16 22:48:31 crc kubenswrapper[4865]: healthz check failed Feb 16 22:48:31 crc kubenswrapper[4865]: I0216 22:48:31.612993 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-d2vf7" podUID="ae87db3d-c73d-4e96-8cda-ca3ac846f9b5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 22:48:31 crc kubenswrapper[4865]: I0216 22:48:31.639754 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-d87nj"] Feb 16 22:48:31 crc kubenswrapper[4865]: I0216 22:48:31.984389 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fsmg7"] Feb 16 22:48:32 crc kubenswrapper[4865]: I0216 22:48:32.081103 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521365-bs68x" Feb 16 22:48:32 crc kubenswrapper[4865]: I0216 22:48:32.086551 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xsgzz"] Feb 16 22:48:32 crc kubenswrapper[4865]: I0216 22:48:32.201433 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/67ee4300-52d2-45bc-8420-9045db672f41-secret-volume\") pod \"67ee4300-52d2-45bc-8420-9045db672f41\" (UID: \"67ee4300-52d2-45bc-8420-9045db672f41\") " Feb 16 22:48:32 crc kubenswrapper[4865]: I0216 22:48:32.201735 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/67ee4300-52d2-45bc-8420-9045db672f41-config-volume\") pod \"67ee4300-52d2-45bc-8420-9045db672f41\" (UID: \"67ee4300-52d2-45bc-8420-9045db672f41\") " Feb 16 22:48:32 crc kubenswrapper[4865]: I0216 22:48:32.201807 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2gwt\" (UniqueName: \"kubernetes.io/projected/67ee4300-52d2-45bc-8420-9045db672f41-kube-api-access-s2gwt\") pod \"67ee4300-52d2-45bc-8420-9045db672f41\" (UID: \"67ee4300-52d2-45bc-8420-9045db672f41\") " Feb 16 22:48:32 crc kubenswrapper[4865]: I0216 22:48:32.205996 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67ee4300-52d2-45bc-8420-9045db672f41-config-volume" (OuterVolumeSpecName: "config-volume") pod "67ee4300-52d2-45bc-8420-9045db672f41" (UID: "67ee4300-52d2-45bc-8420-9045db672f41"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:48:32 crc kubenswrapper[4865]: I0216 22:48:32.234708 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67ee4300-52d2-45bc-8420-9045db672f41-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "67ee4300-52d2-45bc-8420-9045db672f41" (UID: "67ee4300-52d2-45bc-8420-9045db672f41"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:48:32 crc kubenswrapper[4865]: I0216 22:48:32.235025 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67ee4300-52d2-45bc-8420-9045db672f41-kube-api-access-s2gwt" (OuterVolumeSpecName: "kube-api-access-s2gwt") pod "67ee4300-52d2-45bc-8420-9045db672f41" (UID: "67ee4300-52d2-45bc-8420-9045db672f41"). InnerVolumeSpecName "kube-api-access-s2gwt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:48:32 crc kubenswrapper[4865]: I0216 22:48:32.316862 4865 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/67ee4300-52d2-45bc-8420-9045db672f41-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 22:48:32 crc kubenswrapper[4865]: I0216 22:48:32.316914 4865 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/67ee4300-52d2-45bc-8420-9045db672f41-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 22:48:32 crc kubenswrapper[4865]: I0216 22:48:32.316931 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2gwt\" (UniqueName: \"kubernetes.io/projected/67ee4300-52d2-45bc-8420-9045db672f41-kube-api-access-s2gwt\") on node \"crc\" DevicePath \"\"" Feb 16 22:48:32 crc kubenswrapper[4865]: I0216 22:48:32.323415 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xsgzz" 
event={"ID":"04b1951f-573b-4bf5-808d-9834250021b6","Type":"ContainerStarted","Data":"d45fa6638116e867379a571eac3102873cd3ee9c48af034d0e37470ea4bbeff9"} Feb 16 22:48:32 crc kubenswrapper[4865]: I0216 22:48:32.324926 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fsmg7" event={"ID":"bea6b458-5aaa-4764-9f82-24ceff943498","Type":"ContainerStarted","Data":"8ffa4dd2e0fe7314907ae9f549f56e7fc0d78c2f7cf22b6d3204d3f2a8b31d3c"} Feb 16 22:48:32 crc kubenswrapper[4865]: I0216 22:48:32.328803 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-d87nj" event={"ID":"0d54cdef-872b-4b15-ad66-92a5aa695143","Type":"ContainerStarted","Data":"51792a1cfc49d0c3e081e3c64a3c53e04b225685b7aee2566cb60a2108129332"} Feb 16 22:48:32 crc kubenswrapper[4865]: I0216 22:48:32.328848 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-d87nj" event={"ID":"0d54cdef-872b-4b15-ad66-92a5aa695143","Type":"ContainerStarted","Data":"b4ba6a06d515386b96ec6b7bd421e558797536539a3da759b9c6f8d3dd630b04"} Feb 16 22:48:32 crc kubenswrapper[4865]: I0216 22:48:32.330628 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-d87nj" Feb 16 22:48:32 crc kubenswrapper[4865]: I0216 22:48:32.335087 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521365-bs68x" Feb 16 22:48:32 crc kubenswrapper[4865]: I0216 22:48:32.335064 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521365-bs68x" event={"ID":"67ee4300-52d2-45bc-8420-9045db672f41","Type":"ContainerDied","Data":"1d166a59f515c54cb80964a8dfc6220c8751ffa63113b174b9e98e46bbf996e4"} Feb 16 22:48:32 crc kubenswrapper[4865]: I0216 22:48:32.335317 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d166a59f515c54cb80964a8dfc6220c8751ffa63113b174b9e98e46bbf996e4" Feb 16 22:48:32 crc kubenswrapper[4865]: I0216 22:48:32.352934 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-d87nj" podStartSLOduration=132.352875263 podStartE2EDuration="2m12.352875263s" podCreationTimestamp="2026-02-16 22:46:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:48:32.349872958 +0000 UTC m=+152.673579919" watchObservedRunningTime="2026-02-16 22:48:32.352875263 +0000 UTC m=+152.676582224" Feb 16 22:48:32 crc kubenswrapper[4865]: I0216 22:48:32.479230 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 16 22:48:32 crc kubenswrapper[4865]: I0216 22:48:32.581223 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 16 22:48:32 crc kubenswrapper[4865]: E0216 22:48:32.581595 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67ee4300-52d2-45bc-8420-9045db672f41" containerName="collect-profiles" Feb 16 22:48:32 crc kubenswrapper[4865]: I0216 22:48:32.581612 4865 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="67ee4300-52d2-45bc-8420-9045db672f41" containerName="collect-profiles" Feb 16 22:48:32 crc kubenswrapper[4865]: I0216 22:48:32.581753 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="67ee4300-52d2-45bc-8420-9045db672f41" containerName="collect-profiles" Feb 16 22:48:32 crc kubenswrapper[4865]: I0216 22:48:32.583852 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 22:48:32 crc kubenswrapper[4865]: I0216 22:48:32.586000 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 16 22:48:32 crc kubenswrapper[4865]: I0216 22:48:32.586186 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 16 22:48:32 crc kubenswrapper[4865]: I0216 22:48:32.592421 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 16 22:48:32 crc kubenswrapper[4865]: I0216 22:48:32.593179 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-d2vf7" Feb 16 22:48:32 crc kubenswrapper[4865]: I0216 22:48:32.595833 4865 patch_prober.go:28] interesting pod/router-default-5444994796-d2vf7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 22:48:32 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Feb 16 22:48:32 crc kubenswrapper[4865]: [+]process-running ok Feb 16 22:48:32 crc kubenswrapper[4865]: healthz check failed Feb 16 22:48:32 crc kubenswrapper[4865]: I0216 22:48:32.595889 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-d2vf7" podUID="ae87db3d-c73d-4e96-8cda-ca3ac846f9b5" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 22:48:32 crc kubenswrapper[4865]: I0216 22:48:32.737394 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bbeb7c0a-ca7d-4db1-a544-2fd018cd986a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"bbeb7c0a-ca7d-4db1-a544-2fd018cd986a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 22:48:32 crc kubenswrapper[4865]: I0216 22:48:32.737511 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bbeb7c0a-ca7d-4db1-a544-2fd018cd986a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"bbeb7c0a-ca7d-4db1-a544-2fd018cd986a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 22:48:32 crc kubenswrapper[4865]: I0216 22:48:32.840066 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bbeb7c0a-ca7d-4db1-a544-2fd018cd986a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"bbeb7c0a-ca7d-4db1-a544-2fd018cd986a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 22:48:32 crc kubenswrapper[4865]: I0216 22:48:32.840228 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bbeb7c0a-ca7d-4db1-a544-2fd018cd986a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"bbeb7c0a-ca7d-4db1-a544-2fd018cd986a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 22:48:32 crc kubenswrapper[4865]: I0216 22:48:32.840848 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bbeb7c0a-ca7d-4db1-a544-2fd018cd986a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: 
\"bbeb7c0a-ca7d-4db1-a544-2fd018cd986a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 22:48:32 crc kubenswrapper[4865]: I0216 22:48:32.850198 4865 patch_prober.go:28] interesting pod/downloads-7954f5f757-t624g container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Feb 16 22:48:32 crc kubenswrapper[4865]: I0216 22:48:32.850244 4865 patch_prober.go:28] interesting pod/downloads-7954f5f757-t624g container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Feb 16 22:48:32 crc kubenswrapper[4865]: I0216 22:48:32.850311 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-t624g" podUID="852722bc-cd18-4344-b8d1-a01be5c6ea33" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Feb 16 22:48:32 crc kubenswrapper[4865]: I0216 22:48:32.850333 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t624g" podUID="852722bc-cd18-4344-b8d1-a01be5c6ea33" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Feb 16 22:48:32 crc kubenswrapper[4865]: I0216 22:48:32.890083 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bbeb7c0a-ca7d-4db1-a544-2fd018cd986a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"bbeb7c0a-ca7d-4db1-a544-2fd018cd986a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 22:48:32 crc kubenswrapper[4865]: I0216 22:48:32.918722 4865 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 22:48:33 crc kubenswrapper[4865]: I0216 22:48:33.351892 4865 generic.go:334] "Generic (PLEG): container finished" podID="04b1951f-573b-4bf5-808d-9834250021b6" containerID="0a3c4422ab7dddecd5ac13cfa63f4028a84b4432be6e476fdad03ae6bbf56dfe" exitCode=0 Feb 16 22:48:33 crc kubenswrapper[4865]: I0216 22:48:33.351982 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xsgzz" event={"ID":"04b1951f-573b-4bf5-808d-9834250021b6","Type":"ContainerDied","Data":"0a3c4422ab7dddecd5ac13cfa63f4028a84b4432be6e476fdad03ae6bbf56dfe"} Feb 16 22:48:33 crc kubenswrapper[4865]: I0216 22:48:33.369262 4865 generic.go:334] "Generic (PLEG): container finished" podID="bea6b458-5aaa-4764-9f82-24ceff943498" containerID="90ff4f5fe9f021244df3ac85e4af42aa3cec99b55c22a082cc25c6b34055a4a8" exitCode=0 Feb 16 22:48:33 crc kubenswrapper[4865]: I0216 22:48:33.371550 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fsmg7" event={"ID":"bea6b458-5aaa-4764-9f82-24ceff943498","Type":"ContainerDied","Data":"90ff4f5fe9f021244df3ac85e4af42aa3cec99b55c22a082cc25c6b34055a4a8"} Feb 16 22:48:33 crc kubenswrapper[4865]: I0216 22:48:33.501140 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" Feb 16 22:48:33 crc kubenswrapper[4865]: I0216 22:48:33.553024 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 16 22:48:33 crc kubenswrapper[4865]: I0216 22:48:33.602509 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-d2vf7" Feb 16 22:48:33 crc kubenswrapper[4865]: W0216 22:48:33.618163 4865 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-podbbeb7c0a_ca7d_4db1_a544_2fd018cd986a.slice/crio-78daa7c0f15e24f9244796bac752b79c7c9413cb677b9ca3b637fb1e712a4124 WatchSource:0}: Error finding container 78daa7c0f15e24f9244796bac752b79c7c9413cb677b9ca3b637fb1e712a4124: Status 404 returned error can't find the container with id 78daa7c0f15e24f9244796bac752b79c7c9413cb677b9ca3b637fb1e712a4124 Feb 16 22:48:33 crc kubenswrapper[4865]: I0216 22:48:33.619351 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-d2vf7" Feb 16 22:48:34 crc kubenswrapper[4865]: I0216 22:48:34.182699 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 16 22:48:34 crc kubenswrapper[4865]: I0216 22:48:34.183920 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 22:48:34 crc kubenswrapper[4865]: I0216 22:48:34.186238 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 16 22:48:34 crc kubenswrapper[4865]: I0216 22:48:34.189263 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 16 22:48:34 crc kubenswrapper[4865]: I0216 22:48:34.197664 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 16 22:48:34 crc kubenswrapper[4865]: I0216 22:48:34.272495 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/19d1136f-41ad-4bb4-aeac-9154a32ad84d-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"19d1136f-41ad-4bb4-aeac-9154a32ad84d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 22:48:34 crc kubenswrapper[4865]: I0216 22:48:34.272559 4865 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/19d1136f-41ad-4bb4-aeac-9154a32ad84d-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"19d1136f-41ad-4bb4-aeac-9154a32ad84d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 22:48:34 crc kubenswrapper[4865]: I0216 22:48:34.374244 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/19d1136f-41ad-4bb4-aeac-9154a32ad84d-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"19d1136f-41ad-4bb4-aeac-9154a32ad84d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 22:48:34 crc kubenswrapper[4865]: I0216 22:48:34.374321 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/19d1136f-41ad-4bb4-aeac-9154a32ad84d-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"19d1136f-41ad-4bb4-aeac-9154a32ad84d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 22:48:34 crc kubenswrapper[4865]: I0216 22:48:34.374466 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/19d1136f-41ad-4bb4-aeac-9154a32ad84d-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"19d1136f-41ad-4bb4-aeac-9154a32ad84d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 22:48:34 crc kubenswrapper[4865]: I0216 22:48:34.395646 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/19d1136f-41ad-4bb4-aeac-9154a32ad84d-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"19d1136f-41ad-4bb4-aeac-9154a32ad84d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 22:48:34 crc kubenswrapper[4865]: I0216 22:48:34.409811 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"bbeb7c0a-ca7d-4db1-a544-2fd018cd986a","Type":"ContainerStarted","Data":"78daa7c0f15e24f9244796bac752b79c7c9413cb677b9ca3b637fb1e712a4124"} Feb 16 22:48:34 crc kubenswrapper[4865]: I0216 22:48:34.522778 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 22:48:34 crc kubenswrapper[4865]: I0216 22:48:34.988760 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 16 22:48:35 crc kubenswrapper[4865]: I0216 22:48:35.448599 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"19d1136f-41ad-4bb4-aeac-9154a32ad84d","Type":"ContainerStarted","Data":"45424bce6ea89ce28dfdcbd89253ddce19e4bf3cad9b208ef5ef165f5350d115"} Feb 16 22:48:35 crc kubenswrapper[4865]: I0216 22:48:35.460833 4865 generic.go:334] "Generic (PLEG): container finished" podID="bbeb7c0a-ca7d-4db1-a544-2fd018cd986a" containerID="b743aef04a9dc7a4360fdb4a01587bd7127159aaf4dd697bc829a76501709ef0" exitCode=0 Feb 16 22:48:35 crc kubenswrapper[4865]: I0216 22:48:35.460919 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"bbeb7c0a-ca7d-4db1-a544-2fd018cd986a","Type":"ContainerDied","Data":"b743aef04a9dc7a4360fdb4a01587bd7127159aaf4dd697bc829a76501709ef0"} Feb 16 22:48:36 crc kubenswrapper[4865]: I0216 22:48:36.488608 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"19d1136f-41ad-4bb4-aeac-9154a32ad84d","Type":"ContainerStarted","Data":"94a3e7dafa29ab3c71afec552482f66988a122062096498df74cfb5e2d66dd4d"} Feb 16 22:48:36 crc kubenswrapper[4865]: I0216 22:48:36.512098 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" 
podStartSLOduration=2.512073542 podStartE2EDuration="2.512073542s" podCreationTimestamp="2026-02-16 22:48:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:48:36.504259041 +0000 UTC m=+156.827966002" watchObservedRunningTime="2026-02-16 22:48:36.512073542 +0000 UTC m=+156.835780503" Feb 16 22:48:37 crc kubenswrapper[4865]: I0216 22:48:37.015501 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 22:48:37 crc kubenswrapper[4865]: I0216 22:48:37.180744 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bbeb7c0a-ca7d-4db1-a544-2fd018cd986a-kube-api-access\") pod \"bbeb7c0a-ca7d-4db1-a544-2fd018cd986a\" (UID: \"bbeb7c0a-ca7d-4db1-a544-2fd018cd986a\") " Feb 16 22:48:37 crc kubenswrapper[4865]: I0216 22:48:37.180827 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bbeb7c0a-ca7d-4db1-a544-2fd018cd986a-kubelet-dir\") pod \"bbeb7c0a-ca7d-4db1-a544-2fd018cd986a\" (UID: \"bbeb7c0a-ca7d-4db1-a544-2fd018cd986a\") " Feb 16 22:48:37 crc kubenswrapper[4865]: I0216 22:48:37.181683 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bbeb7c0a-ca7d-4db1-a544-2fd018cd986a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "bbeb7c0a-ca7d-4db1-a544-2fd018cd986a" (UID: "bbeb7c0a-ca7d-4db1-a544-2fd018cd986a"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 22:48:37 crc kubenswrapper[4865]: I0216 22:48:37.220856 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbeb7c0a-ca7d-4db1-a544-2fd018cd986a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "bbeb7c0a-ca7d-4db1-a544-2fd018cd986a" (UID: "bbeb7c0a-ca7d-4db1-a544-2fd018cd986a"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:48:37 crc kubenswrapper[4865]: I0216 22:48:37.286368 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bbeb7c0a-ca7d-4db1-a544-2fd018cd986a-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 22:48:37 crc kubenswrapper[4865]: I0216 22:48:37.286416 4865 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bbeb7c0a-ca7d-4db1-a544-2fd018cd986a-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 16 22:48:37 crc kubenswrapper[4865]: I0216 22:48:37.517950 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"bbeb7c0a-ca7d-4db1-a544-2fd018cd986a","Type":"ContainerDied","Data":"78daa7c0f15e24f9244796bac752b79c7c9413cb677b9ca3b637fb1e712a4124"} Feb 16 22:48:37 crc kubenswrapper[4865]: I0216 22:48:37.518006 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78daa7c0f15e24f9244796bac752b79c7c9413cb677b9ca3b637fb1e712a4124" Feb 16 22:48:37 crc kubenswrapper[4865]: I0216 22:48:37.518110 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 22:48:37 crc kubenswrapper[4865]: I0216 22:48:37.535557 4865 generic.go:334] "Generic (PLEG): container finished" podID="19d1136f-41ad-4bb4-aeac-9154a32ad84d" containerID="94a3e7dafa29ab3c71afec552482f66988a122062096498df74cfb5e2d66dd4d" exitCode=0 Feb 16 22:48:37 crc kubenswrapper[4865]: I0216 22:48:37.535635 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"19d1136f-41ad-4bb4-aeac-9154a32ad84d","Type":"ContainerDied","Data":"94a3e7dafa29ab3c71afec552482f66988a122062096498df74cfb5e2d66dd4d"} Feb 16 22:48:37 crc kubenswrapper[4865]: I0216 22:48:37.668269 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-8sd85" Feb 16 22:48:41 crc kubenswrapper[4865]: I0216 22:48:41.197869 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-5m56v" Feb 16 22:48:41 crc kubenswrapper[4865]: I0216 22:48:41.204603 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-5m56v" Feb 16 22:48:42 crc kubenswrapper[4865]: I0216 22:48:42.815414 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0e0ca52e-7cb6-4d90-8d0b-4124cce13447-metrics-certs\") pod \"network-metrics-daemon-ggbcr\" (UID: \"0e0ca52e-7cb6-4d90-8d0b-4124cce13447\") " pod="openshift-multus/network-metrics-daemon-ggbcr" Feb 16 22:48:42 crc kubenswrapper[4865]: I0216 22:48:42.822828 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0e0ca52e-7cb6-4d90-8d0b-4124cce13447-metrics-certs\") pod \"network-metrics-daemon-ggbcr\" (UID: \"0e0ca52e-7cb6-4d90-8d0b-4124cce13447\") " pod="openshift-multus/network-metrics-daemon-ggbcr" Feb 16 22:48:42 
crc kubenswrapper[4865]: I0216 22:48:42.854083 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-t624g" Feb 16 22:48:42 crc kubenswrapper[4865]: I0216 22:48:42.936323 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ggbcr" Feb 16 22:48:44 crc kubenswrapper[4865]: I0216 22:48:44.157577 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-pzmrk"] Feb 16 22:48:44 crc kubenswrapper[4865]: I0216 22:48:44.158232 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-pzmrk" podUID="4e04fc9e-f926-473e-bdf6-f59166ab52f0" containerName="controller-manager" containerID="cri-o://d4da7e037d0f8edbc19f1d658169fa06e4e45fc0e7c6e3882cae996325b6a4f8" gracePeriod=30 Feb 16 22:48:44 crc kubenswrapper[4865]: I0216 22:48:44.168157 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-tztwm"] Feb 16 22:48:44 crc kubenswrapper[4865]: I0216 22:48:44.168504 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tztwm" podUID="679f6150-3ecb-437d-81bc-9877ad5c3cc4" containerName="route-controller-manager" containerID="cri-o://20d921add1e6a703f02906c61d9d58448163541e3f1feefa9f4613e6a9d1bc69" gracePeriod=30 Feb 16 22:48:45 crc kubenswrapper[4865]: I0216 22:48:45.664991 4865 patch_prober.go:28] interesting pod/machine-config-daemon-7sl6f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 22:48:45 crc kubenswrapper[4865]: I0216 22:48:45.665100 4865 prober.go:107] "Probe 
failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 22:48:46 crc kubenswrapper[4865]: I0216 22:48:46.716121 4865 generic.go:334] "Generic (PLEG): container finished" podID="4e04fc9e-f926-473e-bdf6-f59166ab52f0" containerID="d4da7e037d0f8edbc19f1d658169fa06e4e45fc0e7c6e3882cae996325b6a4f8" exitCode=0 Feb 16 22:48:46 crc kubenswrapper[4865]: I0216 22:48:46.716179 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-pzmrk" event={"ID":"4e04fc9e-f926-473e-bdf6-f59166ab52f0","Type":"ContainerDied","Data":"d4da7e037d0f8edbc19f1d658169fa06e4e45fc0e7c6e3882cae996325b6a4f8"} Feb 16 22:48:51 crc kubenswrapper[4865]: I0216 22:48:51.012101 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-d87nj" Feb 16 22:48:51 crc kubenswrapper[4865]: I0216 22:48:51.036266 4865 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-pzmrk container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Feb 16 22:48:51 crc kubenswrapper[4865]: I0216 22:48:51.036380 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-pzmrk" podUID="4e04fc9e-f926-473e-bdf6-f59166ab52f0" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" Feb 16 22:48:52 crc kubenswrapper[4865]: I0216 22:48:52.184231 4865 patch_prober.go:28] interesting 
pod/route-controller-manager-6576b87f9c-tztwm container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Feb 16 22:48:52 crc kubenswrapper[4865]: I0216 22:48:52.184399 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tztwm" podUID="679f6150-3ecb-437d-81bc-9877ad5c3cc4" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Feb 16 22:48:53 crc kubenswrapper[4865]: I0216 22:48:53.760633 4865 generic.go:334] "Generic (PLEG): container finished" podID="679f6150-3ecb-437d-81bc-9877ad5c3cc4" containerID="20d921add1e6a703f02906c61d9d58448163541e3f1feefa9f4613e6a9d1bc69" exitCode=0 Feb 16 22:48:53 crc kubenswrapper[4865]: I0216 22:48:53.760691 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tztwm" event={"ID":"679f6150-3ecb-437d-81bc-9877ad5c3cc4","Type":"ContainerDied","Data":"20d921add1e6a703f02906c61d9d58448163541e3f1feefa9f4613e6a9d1bc69"} Feb 16 22:48:56 crc kubenswrapper[4865]: I0216 22:48:56.586051 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tztwm" Feb 16 22:48:56 crc kubenswrapper[4865]: I0216 22:48:56.591539 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 22:48:56 crc kubenswrapper[4865]: I0216 22:48:56.657608 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bb46c8d9c-w2czz"] Feb 16 22:48:56 crc kubenswrapper[4865]: E0216 22:48:56.657993 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19d1136f-41ad-4bb4-aeac-9154a32ad84d" containerName="pruner" Feb 16 22:48:56 crc kubenswrapper[4865]: I0216 22:48:56.658008 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="19d1136f-41ad-4bb4-aeac-9154a32ad84d" containerName="pruner" Feb 16 22:48:56 crc kubenswrapper[4865]: E0216 22:48:56.658029 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbeb7c0a-ca7d-4db1-a544-2fd018cd986a" containerName="pruner" Feb 16 22:48:56 crc kubenswrapper[4865]: I0216 22:48:56.658037 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbeb7c0a-ca7d-4db1-a544-2fd018cd986a" containerName="pruner" Feb 16 22:48:56 crc kubenswrapper[4865]: E0216 22:48:56.658052 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="679f6150-3ecb-437d-81bc-9877ad5c3cc4" containerName="route-controller-manager" Feb 16 22:48:56 crc kubenswrapper[4865]: I0216 22:48:56.658060 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="679f6150-3ecb-437d-81bc-9877ad5c3cc4" containerName="route-controller-manager" Feb 16 22:48:56 crc kubenswrapper[4865]: I0216 22:48:56.658208 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="19d1136f-41ad-4bb4-aeac-9154a32ad84d" containerName="pruner" Feb 16 22:48:56 crc kubenswrapper[4865]: I0216 22:48:56.658225 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="679f6150-3ecb-437d-81bc-9877ad5c3cc4" containerName="route-controller-manager" Feb 16 22:48:56 crc kubenswrapper[4865]: I0216 22:48:56.658236 4865 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="bbeb7c0a-ca7d-4db1-a544-2fd018cd986a" containerName="pruner" Feb 16 22:48:56 crc kubenswrapper[4865]: I0216 22:48:56.658894 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bb46c8d9c-w2czz" Feb 16 22:48:56 crc kubenswrapper[4865]: I0216 22:48:56.661732 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bb46c8d9c-w2czz"] Feb 16 22:48:56 crc kubenswrapper[4865]: I0216 22:48:56.667121 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7xsf\" (UniqueName: \"kubernetes.io/projected/679f6150-3ecb-437d-81bc-9877ad5c3cc4-kube-api-access-n7xsf\") pod \"679f6150-3ecb-437d-81bc-9877ad5c3cc4\" (UID: \"679f6150-3ecb-437d-81bc-9877ad5c3cc4\") " Feb 16 22:48:56 crc kubenswrapper[4865]: I0216 22:48:56.667233 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/679f6150-3ecb-437d-81bc-9877ad5c3cc4-serving-cert\") pod \"679f6150-3ecb-437d-81bc-9877ad5c3cc4\" (UID: \"679f6150-3ecb-437d-81bc-9877ad5c3cc4\") " Feb 16 22:48:56 crc kubenswrapper[4865]: I0216 22:48:56.667298 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/19d1136f-41ad-4bb4-aeac-9154a32ad84d-kubelet-dir\") pod \"19d1136f-41ad-4bb4-aeac-9154a32ad84d\" (UID: \"19d1136f-41ad-4bb4-aeac-9154a32ad84d\") " Feb 16 22:48:56 crc kubenswrapper[4865]: I0216 22:48:56.667390 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/679f6150-3ecb-437d-81bc-9877ad5c3cc4-config\") pod \"679f6150-3ecb-437d-81bc-9877ad5c3cc4\" (UID: \"679f6150-3ecb-437d-81bc-9877ad5c3cc4\") " Feb 16 22:48:56 crc kubenswrapper[4865]: I0216 22:48:56.667422 4865 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/679f6150-3ecb-437d-81bc-9877ad5c3cc4-client-ca\") pod \"679f6150-3ecb-437d-81bc-9877ad5c3cc4\" (UID: \"679f6150-3ecb-437d-81bc-9877ad5c3cc4\") " Feb 16 22:48:56 crc kubenswrapper[4865]: I0216 22:48:56.667459 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/19d1136f-41ad-4bb4-aeac-9154a32ad84d-kube-api-access\") pod \"19d1136f-41ad-4bb4-aeac-9154a32ad84d\" (UID: \"19d1136f-41ad-4bb4-aeac-9154a32ad84d\") " Feb 16 22:48:56 crc kubenswrapper[4865]: I0216 22:48:56.669437 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19d1136f-41ad-4bb4-aeac-9154a32ad84d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "19d1136f-41ad-4bb4-aeac-9154a32ad84d" (UID: "19d1136f-41ad-4bb4-aeac-9154a32ad84d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 22:48:56 crc kubenswrapper[4865]: I0216 22:48:56.670144 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/679f6150-3ecb-437d-81bc-9877ad5c3cc4-client-ca" (OuterVolumeSpecName: "client-ca") pod "679f6150-3ecb-437d-81bc-9877ad5c3cc4" (UID: "679f6150-3ecb-437d-81bc-9877ad5c3cc4"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:48:56 crc kubenswrapper[4865]: I0216 22:48:56.670198 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/679f6150-3ecb-437d-81bc-9877ad5c3cc4-config" (OuterVolumeSpecName: "config") pod "679f6150-3ecb-437d-81bc-9877ad5c3cc4" (UID: "679f6150-3ecb-437d-81bc-9877ad5c3cc4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:48:56 crc kubenswrapper[4865]: I0216 22:48:56.676649 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/679f6150-3ecb-437d-81bc-9877ad5c3cc4-kube-api-access-n7xsf" (OuterVolumeSpecName: "kube-api-access-n7xsf") pod "679f6150-3ecb-437d-81bc-9877ad5c3cc4" (UID: "679f6150-3ecb-437d-81bc-9877ad5c3cc4"). InnerVolumeSpecName "kube-api-access-n7xsf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:48:56 crc kubenswrapper[4865]: I0216 22:48:56.679333 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19d1136f-41ad-4bb4-aeac-9154a32ad84d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "19d1136f-41ad-4bb4-aeac-9154a32ad84d" (UID: "19d1136f-41ad-4bb4-aeac-9154a32ad84d"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:48:56 crc kubenswrapper[4865]: I0216 22:48:56.684440 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/679f6150-3ecb-437d-81bc-9877ad5c3cc4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "679f6150-3ecb-437d-81bc-9877ad5c3cc4" (UID: "679f6150-3ecb-437d-81bc-9877ad5c3cc4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:48:56 crc kubenswrapper[4865]: I0216 22:48:56.769240 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/231c959b-7621-4e3e-9adb-404324ee1591-serving-cert\") pod \"route-controller-manager-6bb46c8d9c-w2czz\" (UID: \"231c959b-7621-4e3e-9adb-404324ee1591\") " pod="openshift-route-controller-manager/route-controller-manager-6bb46c8d9c-w2czz" Feb 16 22:48:56 crc kubenswrapper[4865]: I0216 22:48:56.769823 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/231c959b-7621-4e3e-9adb-404324ee1591-client-ca\") pod \"route-controller-manager-6bb46c8d9c-w2czz\" (UID: \"231c959b-7621-4e3e-9adb-404324ee1591\") " pod="openshift-route-controller-manager/route-controller-manager-6bb46c8d9c-w2czz" Feb 16 22:48:56 crc kubenswrapper[4865]: I0216 22:48:56.769859 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sx4j\" (UniqueName: \"kubernetes.io/projected/231c959b-7621-4e3e-9adb-404324ee1591-kube-api-access-9sx4j\") pod \"route-controller-manager-6bb46c8d9c-w2czz\" (UID: \"231c959b-7621-4e3e-9adb-404324ee1591\") " pod="openshift-route-controller-manager/route-controller-manager-6bb46c8d9c-w2czz" Feb 16 22:48:56 crc kubenswrapper[4865]: I0216 22:48:56.769902 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/231c959b-7621-4e3e-9adb-404324ee1591-config\") pod \"route-controller-manager-6bb46c8d9c-w2czz\" (UID: \"231c959b-7621-4e3e-9adb-404324ee1591\") " pod="openshift-route-controller-manager/route-controller-manager-6bb46c8d9c-w2czz" Feb 16 22:48:56 crc kubenswrapper[4865]: I0216 22:48:56.769950 4865 reconciler_common.go:293] "Volume detached for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/679f6150-3ecb-437d-81bc-9877ad5c3cc4-config\") on node \"crc\" DevicePath \"\"" Feb 16 22:48:56 crc kubenswrapper[4865]: I0216 22:48:56.769962 4865 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/679f6150-3ecb-437d-81bc-9877ad5c3cc4-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 22:48:56 crc kubenswrapper[4865]: I0216 22:48:56.769975 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/19d1136f-41ad-4bb4-aeac-9154a32ad84d-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 22:48:56 crc kubenswrapper[4865]: I0216 22:48:56.769986 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7xsf\" (UniqueName: \"kubernetes.io/projected/679f6150-3ecb-437d-81bc-9877ad5c3cc4-kube-api-access-n7xsf\") on node \"crc\" DevicePath \"\"" Feb 16 22:48:56 crc kubenswrapper[4865]: I0216 22:48:56.769995 4865 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/679f6150-3ecb-437d-81bc-9877ad5c3cc4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 22:48:56 crc kubenswrapper[4865]: I0216 22:48:56.770004 4865 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/19d1136f-41ad-4bb4-aeac-9154a32ad84d-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 16 22:48:56 crc kubenswrapper[4865]: I0216 22:48:56.793354 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tztwm" event={"ID":"679f6150-3ecb-437d-81bc-9877ad5c3cc4","Type":"ContainerDied","Data":"79362843e111cb41fa3710e0f9a933b24630d83360b4477377be3d7f8516177e"} Feb 16 22:48:56 crc kubenswrapper[4865]: I0216 22:48:56.793425 4865 scope.go:117] "RemoveContainer" containerID="20d921add1e6a703f02906c61d9d58448163541e3f1feefa9f4613e6a9d1bc69" Feb 16 
22:48:56 crc kubenswrapper[4865]: I0216 22:48:56.793648 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tztwm" Feb 16 22:48:56 crc kubenswrapper[4865]: I0216 22:48:56.796818 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"19d1136f-41ad-4bb4-aeac-9154a32ad84d","Type":"ContainerDied","Data":"45424bce6ea89ce28dfdcbd89253ddce19e4bf3cad9b208ef5ef165f5350d115"} Feb 16 22:48:56 crc kubenswrapper[4865]: I0216 22:48:56.796851 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45424bce6ea89ce28dfdcbd89253ddce19e4bf3cad9b208ef5ef165f5350d115" Feb 16 22:48:56 crc kubenswrapper[4865]: I0216 22:48:56.796923 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 22:48:56 crc kubenswrapper[4865]: I0216 22:48:56.828250 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-tztwm"] Feb 16 22:48:56 crc kubenswrapper[4865]: I0216 22:48:56.833986 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-tztwm"] Feb 16 22:48:56 crc kubenswrapper[4865]: I0216 22:48:56.871571 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/231c959b-7621-4e3e-9adb-404324ee1591-client-ca\") pod \"route-controller-manager-6bb46c8d9c-w2czz\" (UID: \"231c959b-7621-4e3e-9adb-404324ee1591\") " pod="openshift-route-controller-manager/route-controller-manager-6bb46c8d9c-w2czz" Feb 16 22:48:56 crc kubenswrapper[4865]: I0216 22:48:56.871635 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9sx4j\" (UniqueName: 
\"kubernetes.io/projected/231c959b-7621-4e3e-9adb-404324ee1591-kube-api-access-9sx4j\") pod \"route-controller-manager-6bb46c8d9c-w2czz\" (UID: \"231c959b-7621-4e3e-9adb-404324ee1591\") " pod="openshift-route-controller-manager/route-controller-manager-6bb46c8d9c-w2czz" Feb 16 22:48:56 crc kubenswrapper[4865]: I0216 22:48:56.871688 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/231c959b-7621-4e3e-9adb-404324ee1591-config\") pod \"route-controller-manager-6bb46c8d9c-w2czz\" (UID: \"231c959b-7621-4e3e-9adb-404324ee1591\") " pod="openshift-route-controller-manager/route-controller-manager-6bb46c8d9c-w2czz" Feb 16 22:48:56 crc kubenswrapper[4865]: I0216 22:48:56.871738 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/231c959b-7621-4e3e-9adb-404324ee1591-serving-cert\") pod \"route-controller-manager-6bb46c8d9c-w2czz\" (UID: \"231c959b-7621-4e3e-9adb-404324ee1591\") " pod="openshift-route-controller-manager/route-controller-manager-6bb46c8d9c-w2czz" Feb 16 22:48:56 crc kubenswrapper[4865]: I0216 22:48:56.872590 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/231c959b-7621-4e3e-9adb-404324ee1591-client-ca\") pod \"route-controller-manager-6bb46c8d9c-w2czz\" (UID: \"231c959b-7621-4e3e-9adb-404324ee1591\") " pod="openshift-route-controller-manager/route-controller-manager-6bb46c8d9c-w2czz" Feb 16 22:48:56 crc kubenswrapper[4865]: I0216 22:48:56.873704 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/231c959b-7621-4e3e-9adb-404324ee1591-config\") pod \"route-controller-manager-6bb46c8d9c-w2czz\" (UID: \"231c959b-7621-4e3e-9adb-404324ee1591\") " pod="openshift-route-controller-manager/route-controller-manager-6bb46c8d9c-w2czz" Feb 16 22:48:56 crc 
kubenswrapper[4865]: I0216 22:48:56.878379 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/231c959b-7621-4e3e-9adb-404324ee1591-serving-cert\") pod \"route-controller-manager-6bb46c8d9c-w2czz\" (UID: \"231c959b-7621-4e3e-9adb-404324ee1591\") " pod="openshift-route-controller-manager/route-controller-manager-6bb46c8d9c-w2czz" Feb 16 22:48:56 crc kubenswrapper[4865]: I0216 22:48:56.898879 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9sx4j\" (UniqueName: \"kubernetes.io/projected/231c959b-7621-4e3e-9adb-404324ee1591-kube-api-access-9sx4j\") pod \"route-controller-manager-6bb46c8d9c-w2czz\" (UID: \"231c959b-7621-4e3e-9adb-404324ee1591\") " pod="openshift-route-controller-manager/route-controller-manager-6bb46c8d9c-w2czz" Feb 16 22:48:57 crc kubenswrapper[4865]: I0216 22:48:57.022563 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bb46c8d9c-w2czz" Feb 16 22:48:58 crc kubenswrapper[4865]: I0216 22:48:58.427215 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="679f6150-3ecb-437d-81bc-9877ad5c3cc4" path="/var/lib/kubelet/pods/679f6150-3ecb-437d-81bc-9877ad5c3cc4/volumes" Feb 16 22:49:01 crc kubenswrapper[4865]: E0216 22:49:01.424179 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 16 22:49:01 crc kubenswrapper[4865]: E0216 22:49:01.425030 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lv6p4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-kk986_openshift-marketplace(8b55c2ce-ae41-4b11-925e-b6085f288345): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 16 22:49:01 crc kubenswrapper[4865]: E0216 22:49:01.426268 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-kk986" podUID="8b55c2ce-ae41-4b11-925e-b6085f288345" Feb 16 22:49:02 crc 
kubenswrapper[4865]: I0216 22:49:02.032954 4865 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-pzmrk container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 22:49:02 crc kubenswrapper[4865]: I0216 22:49:02.033343 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-pzmrk" podUID="4e04fc9e-f926-473e-bdf6-f59166ab52f0" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 22:49:03 crc kubenswrapper[4865]: I0216 22:49:03.017376 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lwxck" Feb 16 22:49:03 crc kubenswrapper[4865]: E0216 22:49:03.126050 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-kk986" podUID="8b55c2ce-ae41-4b11-925e-b6085f288345" Feb 16 22:49:04 crc kubenswrapper[4865]: I0216 22:49:04.230593 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bb46c8d9c-w2czz"] Feb 16 22:49:08 crc kubenswrapper[4865]: E0216 22:49:08.426185 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 16 22:49:08 crc kubenswrapper[4865]: E0216 
22:49:08.426781 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-69spx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-fsmg7_openshift-marketplace(bea6b458-5aaa-4764-9f82-24ceff943498): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 16 22:49:08 crc kubenswrapper[4865]: E0216 22:49:08.427897 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-fsmg7" podUID="bea6b458-5aaa-4764-9f82-24ceff943498" Feb 16 22:49:08 crc kubenswrapper[4865]: E0216 22:49:08.445676 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 16 22:49:08 crc kubenswrapper[4865]: E0216 22:49:08.445840 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n7kbr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,A
ppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-s2z5v_openshift-marketplace(f6cfa25f-5974-4b2e-9df0-b0e98112b561): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 16 22:49:08 crc kubenswrapper[4865]: E0216 22:49:08.447012 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-s2z5v" podUID="f6cfa25f-5974-4b2e-9df0-b0e98112b561" Feb 16 22:49:08 crc kubenswrapper[4865]: I0216 22:49:08.462001 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 22:49:08 crc kubenswrapper[4865]: I0216 22:49:08.474858 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-pzmrk" Feb 16 22:49:08 crc kubenswrapper[4865]: I0216 22:49:08.520750 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-86df44649c-gzlw8"] Feb 16 22:49:08 crc kubenswrapper[4865]: E0216 22:49:08.524379 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e04fc9e-f926-473e-bdf6-f59166ab52f0" containerName="controller-manager" Feb 16 22:49:08 crc kubenswrapper[4865]: I0216 22:49:08.524396 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e04fc9e-f926-473e-bdf6-f59166ab52f0" containerName="controller-manager" Feb 16 22:49:08 crc kubenswrapper[4865]: I0216 22:49:08.525125 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e04fc9e-f926-473e-bdf6-f59166ab52f0" containerName="controller-manager" Feb 16 22:49:08 crc kubenswrapper[4865]: I0216 22:49:08.525694 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-86df44649c-gzlw8" Feb 16 22:49:08 crc kubenswrapper[4865]: I0216 22:49:08.535077 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-86df44649c-gzlw8"] Feb 16 22:49:08 crc kubenswrapper[4865]: E0216 22:49:08.555552 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 16 22:49:08 crc kubenswrapper[4865]: E0216 22:49:08.555881 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdnmj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:
nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-xsgzz_openshift-marketplace(04b1951f-573b-4bf5-808d-9834250021b6): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 16 22:49:08 crc kubenswrapper[4865]: E0216 22:49:08.556961 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-xsgzz" podUID="04b1951f-573b-4bf5-808d-9834250021b6" Feb 16 22:49:08 crc kubenswrapper[4865]: I0216 22:49:08.581312 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e04fc9e-f926-473e-bdf6-f59166ab52f0-config\") pod \"4e04fc9e-f926-473e-bdf6-f59166ab52f0\" (UID: \"4e04fc9e-f926-473e-bdf6-f59166ab52f0\") " Feb 16 22:49:08 crc kubenswrapper[4865]: I0216 22:49:08.581429 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjcpg\" (UniqueName: \"kubernetes.io/projected/4e04fc9e-f926-473e-bdf6-f59166ab52f0-kube-api-access-xjcpg\") pod \"4e04fc9e-f926-473e-bdf6-f59166ab52f0\" (UID: \"4e04fc9e-f926-473e-bdf6-f59166ab52f0\") " Feb 16 22:49:08 crc kubenswrapper[4865]: I0216 22:49:08.581516 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4e04fc9e-f926-473e-bdf6-f59166ab52f0-client-ca\") pod \"4e04fc9e-f926-473e-bdf6-f59166ab52f0\" (UID: \"4e04fc9e-f926-473e-bdf6-f59166ab52f0\") " Feb 16 22:49:08 crc kubenswrapper[4865]: I0216 
22:49:08.581544 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e04fc9e-f926-473e-bdf6-f59166ab52f0-serving-cert\") pod \"4e04fc9e-f926-473e-bdf6-f59166ab52f0\" (UID: \"4e04fc9e-f926-473e-bdf6-f59166ab52f0\") " Feb 16 22:49:08 crc kubenswrapper[4865]: I0216 22:49:08.581608 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4e04fc9e-f926-473e-bdf6-f59166ab52f0-proxy-ca-bundles\") pod \"4e04fc9e-f926-473e-bdf6-f59166ab52f0\" (UID: \"4e04fc9e-f926-473e-bdf6-f59166ab52f0\") " Feb 16 22:49:08 crc kubenswrapper[4865]: I0216 22:49:08.581863 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d3f282f9-4063-41c9-b1c9-21fd5c1b365b-proxy-ca-bundles\") pod \"controller-manager-86df44649c-gzlw8\" (UID: \"d3f282f9-4063-41c9-b1c9-21fd5c1b365b\") " pod="openshift-controller-manager/controller-manager-86df44649c-gzlw8" Feb 16 22:49:08 crc kubenswrapper[4865]: I0216 22:49:08.581921 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d3f282f9-4063-41c9-b1c9-21fd5c1b365b-client-ca\") pod \"controller-manager-86df44649c-gzlw8\" (UID: \"d3f282f9-4063-41c9-b1c9-21fd5c1b365b\") " pod="openshift-controller-manager/controller-manager-86df44649c-gzlw8" Feb 16 22:49:08 crc kubenswrapper[4865]: I0216 22:49:08.582499 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3f282f9-4063-41c9-b1c9-21fd5c1b365b-serving-cert\") pod \"controller-manager-86df44649c-gzlw8\" (UID: \"d3f282f9-4063-41c9-b1c9-21fd5c1b365b\") " pod="openshift-controller-manager/controller-manager-86df44649c-gzlw8" Feb 16 22:49:08 crc 
kubenswrapper[4865]: I0216 22:49:08.583109 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfzkq\" (UniqueName: \"kubernetes.io/projected/d3f282f9-4063-41c9-b1c9-21fd5c1b365b-kube-api-access-tfzkq\") pod \"controller-manager-86df44649c-gzlw8\" (UID: \"d3f282f9-4063-41c9-b1c9-21fd5c1b365b\") " pod="openshift-controller-manager/controller-manager-86df44649c-gzlw8" Feb 16 22:49:08 crc kubenswrapper[4865]: I0216 22:49:08.583222 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3f282f9-4063-41c9-b1c9-21fd5c1b365b-config\") pod \"controller-manager-86df44649c-gzlw8\" (UID: \"d3f282f9-4063-41c9-b1c9-21fd5c1b365b\") " pod="openshift-controller-manager/controller-manager-86df44649c-gzlw8" Feb 16 22:49:08 crc kubenswrapper[4865]: I0216 22:49:08.588536 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e04fc9e-f926-473e-bdf6-f59166ab52f0-config" (OuterVolumeSpecName: "config") pod "4e04fc9e-f926-473e-bdf6-f59166ab52f0" (UID: "4e04fc9e-f926-473e-bdf6-f59166ab52f0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:49:08 crc kubenswrapper[4865]: I0216 22:49:08.590822 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e04fc9e-f926-473e-bdf6-f59166ab52f0-client-ca" (OuterVolumeSpecName: "client-ca") pod "4e04fc9e-f926-473e-bdf6-f59166ab52f0" (UID: "4e04fc9e-f926-473e-bdf6-f59166ab52f0"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:49:08 crc kubenswrapper[4865]: I0216 22:49:08.592390 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e04fc9e-f926-473e-bdf6-f59166ab52f0-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "4e04fc9e-f926-473e-bdf6-f59166ab52f0" (UID: "4e04fc9e-f926-473e-bdf6-f59166ab52f0"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:49:08 crc kubenswrapper[4865]: I0216 22:49:08.602014 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e04fc9e-f926-473e-bdf6-f59166ab52f0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4e04fc9e-f926-473e-bdf6-f59166ab52f0" (UID: "4e04fc9e-f926-473e-bdf6-f59166ab52f0"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:49:08 crc kubenswrapper[4865]: I0216 22:49:08.604313 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e04fc9e-f926-473e-bdf6-f59166ab52f0-kube-api-access-xjcpg" (OuterVolumeSpecName: "kube-api-access-xjcpg") pod "4e04fc9e-f926-473e-bdf6-f59166ab52f0" (UID: "4e04fc9e-f926-473e-bdf6-f59166ab52f0"). InnerVolumeSpecName "kube-api-access-xjcpg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:49:08 crc kubenswrapper[4865]: E0216 22:49:08.617342 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 16 22:49:08 crc kubenswrapper[4865]: E0216 22:49:08.617547 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hgmst,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},
RestartPolicy:nil,} start failed in pod certified-operators-d2l24_openshift-marketplace(5196bfb6-4d27-4d41-8310-8efb2b8997bd): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 16 22:49:08 crc kubenswrapper[4865]: E0216 22:49:08.618733 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-d2l24" podUID="5196bfb6-4d27-4d41-8310-8efb2b8997bd" Feb 16 22:49:08 crc kubenswrapper[4865]: I0216 22:49:08.684725 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3f282f9-4063-41c9-b1c9-21fd5c1b365b-config\") pod \"controller-manager-86df44649c-gzlw8\" (UID: \"d3f282f9-4063-41c9-b1c9-21fd5c1b365b\") " pod="openshift-controller-manager/controller-manager-86df44649c-gzlw8" Feb 16 22:49:08 crc kubenswrapper[4865]: I0216 22:49:08.684789 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d3f282f9-4063-41c9-b1c9-21fd5c1b365b-proxy-ca-bundles\") pod \"controller-manager-86df44649c-gzlw8\" (UID: \"d3f282f9-4063-41c9-b1c9-21fd5c1b365b\") " pod="openshift-controller-manager/controller-manager-86df44649c-gzlw8" Feb 16 22:49:08 crc kubenswrapper[4865]: I0216 22:49:08.684828 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d3f282f9-4063-41c9-b1c9-21fd5c1b365b-client-ca\") pod \"controller-manager-86df44649c-gzlw8\" (UID: \"d3f282f9-4063-41c9-b1c9-21fd5c1b365b\") " pod="openshift-controller-manager/controller-manager-86df44649c-gzlw8" Feb 16 22:49:08 crc kubenswrapper[4865]: I0216 22:49:08.684863 4865 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3f282f9-4063-41c9-b1c9-21fd5c1b365b-serving-cert\") pod \"controller-manager-86df44649c-gzlw8\" (UID: \"d3f282f9-4063-41c9-b1c9-21fd5c1b365b\") " pod="openshift-controller-manager/controller-manager-86df44649c-gzlw8" Feb 16 22:49:08 crc kubenswrapper[4865]: I0216 22:49:08.684895 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfzkq\" (UniqueName: \"kubernetes.io/projected/d3f282f9-4063-41c9-b1c9-21fd5c1b365b-kube-api-access-tfzkq\") pod \"controller-manager-86df44649c-gzlw8\" (UID: \"d3f282f9-4063-41c9-b1c9-21fd5c1b365b\") " pod="openshift-controller-manager/controller-manager-86df44649c-gzlw8" Feb 16 22:49:08 crc kubenswrapper[4865]: I0216 22:49:08.684939 4865 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4e04fc9e-f926-473e-bdf6-f59166ab52f0-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 16 22:49:08 crc kubenswrapper[4865]: I0216 22:49:08.684951 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e04fc9e-f926-473e-bdf6-f59166ab52f0-config\") on node \"crc\" DevicePath \"\"" Feb 16 22:49:08 crc kubenswrapper[4865]: I0216 22:49:08.684962 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xjcpg\" (UniqueName: \"kubernetes.io/projected/4e04fc9e-f926-473e-bdf6-f59166ab52f0-kube-api-access-xjcpg\") on node \"crc\" DevicePath \"\"" Feb 16 22:49:08 crc kubenswrapper[4865]: I0216 22:49:08.684974 4865 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4e04fc9e-f926-473e-bdf6-f59166ab52f0-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 22:49:08 crc kubenswrapper[4865]: I0216 22:49:08.684983 4865 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/4e04fc9e-f926-473e-bdf6-f59166ab52f0-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 22:49:08 crc kubenswrapper[4865]: I0216 22:49:08.688556 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d3f282f9-4063-41c9-b1c9-21fd5c1b365b-client-ca\") pod \"controller-manager-86df44649c-gzlw8\" (UID: \"d3f282f9-4063-41c9-b1c9-21fd5c1b365b\") " pod="openshift-controller-manager/controller-manager-86df44649c-gzlw8" Feb 16 22:49:08 crc kubenswrapper[4865]: I0216 22:49:08.688665 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d3f282f9-4063-41c9-b1c9-21fd5c1b365b-proxy-ca-bundles\") pod \"controller-manager-86df44649c-gzlw8\" (UID: \"d3f282f9-4063-41c9-b1c9-21fd5c1b365b\") " pod="openshift-controller-manager/controller-manager-86df44649c-gzlw8" Feb 16 22:49:08 crc kubenswrapper[4865]: I0216 22:49:08.690733 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3f282f9-4063-41c9-b1c9-21fd5c1b365b-config\") pod \"controller-manager-86df44649c-gzlw8\" (UID: \"d3f282f9-4063-41c9-b1c9-21fd5c1b365b\") " pod="openshift-controller-manager/controller-manager-86df44649c-gzlw8" Feb 16 22:49:08 crc kubenswrapper[4865]: I0216 22:49:08.694054 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3f282f9-4063-41c9-b1c9-21fd5c1b365b-serving-cert\") pod \"controller-manager-86df44649c-gzlw8\" (UID: \"d3f282f9-4063-41c9-b1c9-21fd5c1b365b\") " pod="openshift-controller-manager/controller-manager-86df44649c-gzlw8" Feb 16 22:49:08 crc kubenswrapper[4865]: I0216 22:49:08.706332 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfzkq\" (UniqueName: \"kubernetes.io/projected/d3f282f9-4063-41c9-b1c9-21fd5c1b365b-kube-api-access-tfzkq\") 
pod \"controller-manager-86df44649c-gzlw8\" (UID: \"d3f282f9-4063-41c9-b1c9-21fd5c1b365b\") " pod="openshift-controller-manager/controller-manager-86df44649c-gzlw8" Feb 16 22:49:08 crc kubenswrapper[4865]: I0216 22:49:08.714969 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bb46c8d9c-w2czz"] Feb 16 22:49:08 crc kubenswrapper[4865]: I0216 22:49:08.852944 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-ggbcr"] Feb 16 22:49:08 crc kubenswrapper[4865]: W0216 22:49:08.860684 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e0ca52e_7cb6_4d90_8d0b_4124cce13447.slice/crio-3800d50e8442e70e9f76f9ae0bc57aeceab1df6149f855f0e15019c91aecdd2d WatchSource:0}: Error finding container 3800d50e8442e70e9f76f9ae0bc57aeceab1df6149f855f0e15019c91aecdd2d: Status 404 returned error can't find the container with id 3800d50e8442e70e9f76f9ae0bc57aeceab1df6149f855f0e15019c91aecdd2d Feb 16 22:49:08 crc kubenswrapper[4865]: I0216 22:49:08.860773 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-86df44649c-gzlw8" Feb 16 22:49:08 crc kubenswrapper[4865]: I0216 22:49:08.877916 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pfrgn" event={"ID":"b98a89b4-7f44-411e-a2e3-b260ad781e89","Type":"ContainerStarted","Data":"050fb68d1614155083d42913c36cd8f7fc78bc22db9fb3165a8a801083fef7f9"} Feb 16 22:49:08 crc kubenswrapper[4865]: I0216 22:49:08.907652 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r57lx" event={"ID":"63128d09-9bb2-40e2-8c5e-d16c8f54d2c2","Type":"ContainerStarted","Data":"0fdea5c9a90df7c9107e6b3341fa9a3b6b9cf8da46612f4baeee11544c8c4996"} Feb 16 22:49:08 crc kubenswrapper[4865]: I0216 22:49:08.910346 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-pzmrk" Feb 16 22:49:08 crc kubenswrapper[4865]: I0216 22:49:08.910342 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-pzmrk" event={"ID":"4e04fc9e-f926-473e-bdf6-f59166ab52f0","Type":"ContainerDied","Data":"abd5767beec1b839d22b2e4166262ac6416520be79a4317285a9f3f5d2187417"} Feb 16 22:49:08 crc kubenswrapper[4865]: I0216 22:49:08.910680 4865 scope.go:117] "RemoveContainer" containerID="d4da7e037d0f8edbc19f1d658169fa06e4e45fc0e7c6e3882cae996325b6a4f8" Feb 16 22:49:08 crc kubenswrapper[4865]: I0216 22:49:08.925670 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7kqlb" event={"ID":"d04001fb-b937-477f-b495-17f20e7cf07b","Type":"ContainerStarted","Data":"2b670130106dcdcde6f546c9715dc12bdba4989df613811d4c697edce67bf7d4"} Feb 16 22:49:08 crc kubenswrapper[4865]: I0216 22:49:08.930426 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bb46c8d9c-w2czz" 
event={"ID":"231c959b-7621-4e3e-9adb-404324ee1591","Type":"ContainerStarted","Data":"3e88da1777e7e14e2b63ddba60c8b1d599df0db1e9e9196da6f26bdd25b7e122"} Feb 16 22:49:08 crc kubenswrapper[4865]: I0216 22:49:08.930485 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bb46c8d9c-w2czz" event={"ID":"231c959b-7621-4e3e-9adb-404324ee1591","Type":"ContainerStarted","Data":"943a6cfcae5423d28297cfdc5dd1ffff8fdef712e053dc1f67f7d91599fd8cfb"} Feb 16 22:49:08 crc kubenswrapper[4865]: I0216 22:49:08.931101 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6bb46c8d9c-w2czz" podUID="231c959b-7621-4e3e-9adb-404324ee1591" containerName="route-controller-manager" containerID="cri-o://3e88da1777e7e14e2b63ddba60c8b1d599df0db1e9e9196da6f26bdd25b7e122" gracePeriod=30 Feb 16 22:49:08 crc kubenswrapper[4865]: E0216 22:49:08.947716 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-xsgzz" podUID="04b1951f-573b-4bf5-808d-9834250021b6" Feb 16 22:49:08 crc kubenswrapper[4865]: E0216 22:49:08.948056 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-s2z5v" podUID="f6cfa25f-5974-4b2e-9df0-b0e98112b561" Feb 16 22:49:08 crc kubenswrapper[4865]: E0216 22:49:08.948115 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/redhat-operators-fsmg7" podUID="bea6b458-5aaa-4764-9f82-24ceff943498" Feb 16 22:49:08 crc kubenswrapper[4865]: E0216 22:49:08.955351 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-d2l24" podUID="5196bfb6-4d27-4d41-8310-8efb2b8997bd" Feb 16 22:49:08 crc kubenswrapper[4865]: I0216 22:49:08.985651 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6bb46c8d9c-w2czz" podStartSLOduration=24.985257898 podStartE2EDuration="24.985257898s" podCreationTimestamp="2026-02-16 22:48:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:49:08.982471159 +0000 UTC m=+189.306178120" watchObservedRunningTime="2026-02-16 22:49:08.985257898 +0000 UTC m=+189.308964859" Feb 16 22:49:09 crc kubenswrapper[4865]: I0216 22:49:09.126913 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-86df44649c-gzlw8"] Feb 16 22:49:09 crc kubenswrapper[4865]: I0216 22:49:09.143712 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-pzmrk"] Feb 16 22:49:09 crc kubenswrapper[4865]: W0216 22:49:09.144325 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3f282f9_4063_41c9_b1c9_21fd5c1b365b.slice/crio-05db3750ef3cb49c803c67ac57b11455d3511e70db16f8786cafdd089dc0582f WatchSource:0}: Error finding container 05db3750ef3cb49c803c67ac57b11455d3511e70db16f8786cafdd089dc0582f: Status 404 returned error can't find the container with id 
05db3750ef3cb49c803c67ac57b11455d3511e70db16f8786cafdd089dc0582f Feb 16 22:49:09 crc kubenswrapper[4865]: I0216 22:49:09.146146 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-pzmrk"] Feb 16 22:49:09 crc kubenswrapper[4865]: I0216 22:49:09.335921 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-6bb46c8d9c-w2czz_231c959b-7621-4e3e-9adb-404324ee1591/route-controller-manager/0.log" Feb 16 22:49:09 crc kubenswrapper[4865]: I0216 22:49:09.336017 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bb46c8d9c-w2czz" Feb 16 22:49:09 crc kubenswrapper[4865]: I0216 22:49:09.408237 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/231c959b-7621-4e3e-9adb-404324ee1591-config\") pod \"231c959b-7621-4e3e-9adb-404324ee1591\" (UID: \"231c959b-7621-4e3e-9adb-404324ee1591\") " Feb 16 22:49:09 crc kubenswrapper[4865]: I0216 22:49:09.408414 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/231c959b-7621-4e3e-9adb-404324ee1591-client-ca\") pod \"231c959b-7621-4e3e-9adb-404324ee1591\" (UID: \"231c959b-7621-4e3e-9adb-404324ee1591\") " Feb 16 22:49:09 crc kubenswrapper[4865]: I0216 22:49:09.408580 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/231c959b-7621-4e3e-9adb-404324ee1591-serving-cert\") pod \"231c959b-7621-4e3e-9adb-404324ee1591\" (UID: \"231c959b-7621-4e3e-9adb-404324ee1591\") " Feb 16 22:49:09 crc kubenswrapper[4865]: I0216 22:49:09.408623 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9sx4j\" (UniqueName: 
\"kubernetes.io/projected/231c959b-7621-4e3e-9adb-404324ee1591-kube-api-access-9sx4j\") pod \"231c959b-7621-4e3e-9adb-404324ee1591\" (UID: \"231c959b-7621-4e3e-9adb-404324ee1591\") " Feb 16 22:49:09 crc kubenswrapper[4865]: I0216 22:49:09.409105 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/231c959b-7621-4e3e-9adb-404324ee1591-client-ca" (OuterVolumeSpecName: "client-ca") pod "231c959b-7621-4e3e-9adb-404324ee1591" (UID: "231c959b-7621-4e3e-9adb-404324ee1591"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:49:09 crc kubenswrapper[4865]: I0216 22:49:09.409898 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/231c959b-7621-4e3e-9adb-404324ee1591-config" (OuterVolumeSpecName: "config") pod "231c959b-7621-4e3e-9adb-404324ee1591" (UID: "231c959b-7621-4e3e-9adb-404324ee1591"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:49:09 crc kubenswrapper[4865]: I0216 22:49:09.415826 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/231c959b-7621-4e3e-9adb-404324ee1591-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "231c959b-7621-4e3e-9adb-404324ee1591" (UID: "231c959b-7621-4e3e-9adb-404324ee1591"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:49:09 crc kubenswrapper[4865]: I0216 22:49:09.416200 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/231c959b-7621-4e3e-9adb-404324ee1591-kube-api-access-9sx4j" (OuterVolumeSpecName: "kube-api-access-9sx4j") pod "231c959b-7621-4e3e-9adb-404324ee1591" (UID: "231c959b-7621-4e3e-9adb-404324ee1591"). InnerVolumeSpecName "kube-api-access-9sx4j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:49:09 crc kubenswrapper[4865]: I0216 22:49:09.510613 4865 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/231c959b-7621-4e3e-9adb-404324ee1591-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 22:49:09 crc kubenswrapper[4865]: I0216 22:49:09.510666 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9sx4j\" (UniqueName: \"kubernetes.io/projected/231c959b-7621-4e3e-9adb-404324ee1591-kube-api-access-9sx4j\") on node \"crc\" DevicePath \"\"" Feb 16 22:49:09 crc kubenswrapper[4865]: I0216 22:49:09.510682 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/231c959b-7621-4e3e-9adb-404324ee1591-config\") on node \"crc\" DevicePath \"\"" Feb 16 22:49:09 crc kubenswrapper[4865]: I0216 22:49:09.510694 4865 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/231c959b-7621-4e3e-9adb-404324ee1591-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 22:49:09 crc kubenswrapper[4865]: I0216 22:49:09.937682 4865 generic.go:334] "Generic (PLEG): container finished" podID="63128d09-9bb2-40e2-8c5e-d16c8f54d2c2" containerID="0fdea5c9a90df7c9107e6b3341fa9a3b6b9cf8da46612f4baeee11544c8c4996" exitCode=0 Feb 16 22:49:09 crc kubenswrapper[4865]: I0216 22:49:09.937729 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r57lx" event={"ID":"63128d09-9bb2-40e2-8c5e-d16c8f54d2c2","Type":"ContainerDied","Data":"0fdea5c9a90df7c9107e6b3341fa9a3b6b9cf8da46612f4baeee11544c8c4996"} Feb 16 22:49:09 crc kubenswrapper[4865]: I0216 22:49:09.940592 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-ggbcr" 
event={"ID":"0e0ca52e-7cb6-4d90-8d0b-4124cce13447","Type":"ContainerStarted","Data":"2f59b861580327736978cf6e6ef1dba9250a31aebd1d43692b9dd882b31c869c"} Feb 16 22:49:09 crc kubenswrapper[4865]: I0216 22:49:09.940638 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-ggbcr" event={"ID":"0e0ca52e-7cb6-4d90-8d0b-4124cce13447","Type":"ContainerStarted","Data":"0f39b4d33ca796b24cdaea7137f9bf4c955d80487ae85c4201c4b4990c8dccef"} Feb 16 22:49:09 crc kubenswrapper[4865]: I0216 22:49:09.940651 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-ggbcr" event={"ID":"0e0ca52e-7cb6-4d90-8d0b-4124cce13447","Type":"ContainerStarted","Data":"3800d50e8442e70e9f76f9ae0bc57aeceab1df6149f855f0e15019c91aecdd2d"} Feb 16 22:49:09 crc kubenswrapper[4865]: I0216 22:49:09.948784 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86df44649c-gzlw8" event={"ID":"d3f282f9-4063-41c9-b1c9-21fd5c1b365b","Type":"ContainerStarted","Data":"c20a6f5072dd84e14d16fd29a82c39ac84bcc43963a7a63a521b042172ab29a2"} Feb 16 22:49:09 crc kubenswrapper[4865]: I0216 22:49:09.948843 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86df44649c-gzlw8" event={"ID":"d3f282f9-4063-41c9-b1c9-21fd5c1b365b","Type":"ContainerStarted","Data":"05db3750ef3cb49c803c67ac57b11455d3511e70db16f8786cafdd089dc0582f"} Feb 16 22:49:09 crc kubenswrapper[4865]: I0216 22:49:09.949399 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-86df44649c-gzlw8" Feb 16 22:49:09 crc kubenswrapper[4865]: I0216 22:49:09.954092 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-86df44649c-gzlw8" Feb 16 22:49:09 crc kubenswrapper[4865]: I0216 22:49:09.962145 4865 generic.go:334] "Generic (PLEG): container 
finished" podID="d04001fb-b937-477f-b495-17f20e7cf07b" containerID="2b670130106dcdcde6f546c9715dc12bdba4989df613811d4c697edce67bf7d4" exitCode=0 Feb 16 22:49:09 crc kubenswrapper[4865]: I0216 22:49:09.962208 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7kqlb" event={"ID":"d04001fb-b937-477f-b495-17f20e7cf07b","Type":"ContainerDied","Data":"2b670130106dcdcde6f546c9715dc12bdba4989df613811d4c697edce67bf7d4"} Feb 16 22:49:09 crc kubenswrapper[4865]: I0216 22:49:09.968427 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-6bb46c8d9c-w2czz_231c959b-7621-4e3e-9adb-404324ee1591/route-controller-manager/0.log" Feb 16 22:49:09 crc kubenswrapper[4865]: I0216 22:49:09.968501 4865 generic.go:334] "Generic (PLEG): container finished" podID="231c959b-7621-4e3e-9adb-404324ee1591" containerID="3e88da1777e7e14e2b63ddba60c8b1d599df0db1e9e9196da6f26bdd25b7e122" exitCode=2 Feb 16 22:49:09 crc kubenswrapper[4865]: I0216 22:49:09.968603 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bb46c8d9c-w2czz" event={"ID":"231c959b-7621-4e3e-9adb-404324ee1591","Type":"ContainerDied","Data":"3e88da1777e7e14e2b63ddba60c8b1d599df0db1e9e9196da6f26bdd25b7e122"} Feb 16 22:49:09 crc kubenswrapper[4865]: I0216 22:49:09.968644 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bb46c8d9c-w2czz" event={"ID":"231c959b-7621-4e3e-9adb-404324ee1591","Type":"ContainerDied","Data":"943a6cfcae5423d28297cfdc5dd1ffff8fdef712e053dc1f67f7d91599fd8cfb"} Feb 16 22:49:09 crc kubenswrapper[4865]: I0216 22:49:09.968648 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bb46c8d9c-w2czz" Feb 16 22:49:09 crc kubenswrapper[4865]: I0216 22:49:09.968674 4865 scope.go:117] "RemoveContainer" containerID="3e88da1777e7e14e2b63ddba60c8b1d599df0db1e9e9196da6f26bdd25b7e122" Feb 16 22:49:09 crc kubenswrapper[4865]: I0216 22:49:09.973580 4865 generic.go:334] "Generic (PLEG): container finished" podID="b98a89b4-7f44-411e-a2e3-b260ad781e89" containerID="050fb68d1614155083d42913c36cd8f7fc78bc22db9fb3165a8a801083fef7f9" exitCode=0 Feb 16 22:49:09 crc kubenswrapper[4865]: I0216 22:49:09.973614 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pfrgn" event={"ID":"b98a89b4-7f44-411e-a2e3-b260ad781e89","Type":"ContainerDied","Data":"050fb68d1614155083d42913c36cd8f7fc78bc22db9fb3165a8a801083fef7f9"} Feb 16 22:49:09 crc kubenswrapper[4865]: I0216 22:49:09.996370 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-86df44649c-gzlw8" podStartSLOduration=5.996346011 podStartE2EDuration="5.996346011s" podCreationTimestamp="2026-02-16 22:49:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:49:09.981483382 +0000 UTC m=+190.305190343" watchObservedRunningTime="2026-02-16 22:49:09.996346011 +0000 UTC m=+190.320052972" Feb 16 22:49:10 crc kubenswrapper[4865]: I0216 22:49:10.022878 4865 scope.go:117] "RemoveContainer" containerID="3e88da1777e7e14e2b63ddba60c8b1d599df0db1e9e9196da6f26bdd25b7e122" Feb 16 22:49:10 crc kubenswrapper[4865]: E0216 22:49:10.023638 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e88da1777e7e14e2b63ddba60c8b1d599df0db1e9e9196da6f26bdd25b7e122\": container with ID starting with 3e88da1777e7e14e2b63ddba60c8b1d599df0db1e9e9196da6f26bdd25b7e122 not 
found: ID does not exist" containerID="3e88da1777e7e14e2b63ddba60c8b1d599df0db1e9e9196da6f26bdd25b7e122" Feb 16 22:49:10 crc kubenswrapper[4865]: I0216 22:49:10.024174 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e88da1777e7e14e2b63ddba60c8b1d599df0db1e9e9196da6f26bdd25b7e122"} err="failed to get container status \"3e88da1777e7e14e2b63ddba60c8b1d599df0db1e9e9196da6f26bdd25b7e122\": rpc error: code = NotFound desc = could not find container \"3e88da1777e7e14e2b63ddba60c8b1d599df0db1e9e9196da6f26bdd25b7e122\": container with ID starting with 3e88da1777e7e14e2b63ddba60c8b1d599df0db1e9e9196da6f26bdd25b7e122 not found: ID does not exist" Feb 16 22:49:10 crc kubenswrapper[4865]: I0216 22:49:10.035131 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-ggbcr" podStartSLOduration=170.035101924 podStartE2EDuration="2m50.035101924s" podCreationTimestamp="2026-02-16 22:46:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:49:10.023243309 +0000 UTC m=+190.346950280" watchObservedRunningTime="2026-02-16 22:49:10.035101924 +0000 UTC m=+190.358808885" Feb 16 22:49:10 crc kubenswrapper[4865]: I0216 22:49:10.039310 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bb46c8d9c-w2czz"] Feb 16 22:49:10 crc kubenswrapper[4865]: I0216 22:49:10.045141 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bb46c8d9c-w2czz"] Feb 16 22:49:10 crc kubenswrapper[4865]: I0216 22:49:10.424632 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="231c959b-7621-4e3e-9adb-404324ee1591" path="/var/lib/kubelet/pods/231c959b-7621-4e3e-9adb-404324ee1591/volumes" Feb 16 22:49:10 crc kubenswrapper[4865]: I0216 22:49:10.425224 4865 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e04fc9e-f926-473e-bdf6-f59166ab52f0" path="/var/lib/kubelet/pods/4e04fc9e-f926-473e-bdf6-f59166ab52f0/volumes" Feb 16 22:49:10 crc kubenswrapper[4865]: I0216 22:49:10.841974 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c5788454d-gqwh4"] Feb 16 22:49:10 crc kubenswrapper[4865]: E0216 22:49:10.843236 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="231c959b-7621-4e3e-9adb-404324ee1591" containerName="route-controller-manager" Feb 16 22:49:10 crc kubenswrapper[4865]: I0216 22:49:10.843345 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="231c959b-7621-4e3e-9adb-404324ee1591" containerName="route-controller-manager" Feb 16 22:49:10 crc kubenswrapper[4865]: I0216 22:49:10.843560 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="231c959b-7621-4e3e-9adb-404324ee1591" containerName="route-controller-manager" Feb 16 22:49:10 crc kubenswrapper[4865]: I0216 22:49:10.844115 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c5788454d-gqwh4" Feb 16 22:49:10 crc kubenswrapper[4865]: I0216 22:49:10.846403 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 16 22:49:10 crc kubenswrapper[4865]: I0216 22:49:10.846847 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 16 22:49:10 crc kubenswrapper[4865]: I0216 22:49:10.847059 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 16 22:49:10 crc kubenswrapper[4865]: I0216 22:49:10.847675 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 16 22:49:10 crc kubenswrapper[4865]: I0216 22:49:10.847909 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 16 22:49:10 crc kubenswrapper[4865]: I0216 22:49:10.849522 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 16 22:49:10 crc kubenswrapper[4865]: I0216 22:49:10.861570 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c5788454d-gqwh4"] Feb 16 22:49:10 crc kubenswrapper[4865]: I0216 22:49:10.937077 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d5dfb0d-f230-4669-b8a4-f56e99517b2c-serving-cert\") pod \"route-controller-manager-5c5788454d-gqwh4\" (UID: \"2d5dfb0d-f230-4669-b8a4-f56e99517b2c\") " pod="openshift-route-controller-manager/route-controller-manager-5c5788454d-gqwh4" Feb 16 22:49:10 crc kubenswrapper[4865]: I0216 22:49:10.937130 4865 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d5dfb0d-f230-4669-b8a4-f56e99517b2c-config\") pod \"route-controller-manager-5c5788454d-gqwh4\" (UID: \"2d5dfb0d-f230-4669-b8a4-f56e99517b2c\") " pod="openshift-route-controller-manager/route-controller-manager-5c5788454d-gqwh4" Feb 16 22:49:10 crc kubenswrapper[4865]: I0216 22:49:10.937163 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46xj7\" (UniqueName: \"kubernetes.io/projected/2d5dfb0d-f230-4669-b8a4-f56e99517b2c-kube-api-access-46xj7\") pod \"route-controller-manager-5c5788454d-gqwh4\" (UID: \"2d5dfb0d-f230-4669-b8a4-f56e99517b2c\") " pod="openshift-route-controller-manager/route-controller-manager-5c5788454d-gqwh4" Feb 16 22:49:10 crc kubenswrapper[4865]: I0216 22:49:10.937192 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2d5dfb0d-f230-4669-b8a4-f56e99517b2c-client-ca\") pod \"route-controller-manager-5c5788454d-gqwh4\" (UID: \"2d5dfb0d-f230-4669-b8a4-f56e99517b2c\") " pod="openshift-route-controller-manager/route-controller-manager-5c5788454d-gqwh4" Feb 16 22:49:11 crc kubenswrapper[4865]: I0216 22:49:11.038353 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d5dfb0d-f230-4669-b8a4-f56e99517b2c-serving-cert\") pod \"route-controller-manager-5c5788454d-gqwh4\" (UID: \"2d5dfb0d-f230-4669-b8a4-f56e99517b2c\") " pod="openshift-route-controller-manager/route-controller-manager-5c5788454d-gqwh4" Feb 16 22:49:11 crc kubenswrapper[4865]: I0216 22:49:11.038405 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d5dfb0d-f230-4669-b8a4-f56e99517b2c-config\") pod \"route-controller-manager-5c5788454d-gqwh4\" 
(UID: \"2d5dfb0d-f230-4669-b8a4-f56e99517b2c\") " pod="openshift-route-controller-manager/route-controller-manager-5c5788454d-gqwh4" Feb 16 22:49:11 crc kubenswrapper[4865]: I0216 22:49:11.038430 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46xj7\" (UniqueName: \"kubernetes.io/projected/2d5dfb0d-f230-4669-b8a4-f56e99517b2c-kube-api-access-46xj7\") pod \"route-controller-manager-5c5788454d-gqwh4\" (UID: \"2d5dfb0d-f230-4669-b8a4-f56e99517b2c\") " pod="openshift-route-controller-manager/route-controller-manager-5c5788454d-gqwh4" Feb 16 22:49:11 crc kubenswrapper[4865]: I0216 22:49:11.038451 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2d5dfb0d-f230-4669-b8a4-f56e99517b2c-client-ca\") pod \"route-controller-manager-5c5788454d-gqwh4\" (UID: \"2d5dfb0d-f230-4669-b8a4-f56e99517b2c\") " pod="openshift-route-controller-manager/route-controller-manager-5c5788454d-gqwh4" Feb 16 22:49:11 crc kubenswrapper[4865]: I0216 22:49:11.039999 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d5dfb0d-f230-4669-b8a4-f56e99517b2c-config\") pod \"route-controller-manager-5c5788454d-gqwh4\" (UID: \"2d5dfb0d-f230-4669-b8a4-f56e99517b2c\") " pod="openshift-route-controller-manager/route-controller-manager-5c5788454d-gqwh4" Feb 16 22:49:11 crc kubenswrapper[4865]: I0216 22:49:11.042892 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2d5dfb0d-f230-4669-b8a4-f56e99517b2c-client-ca\") pod \"route-controller-manager-5c5788454d-gqwh4\" (UID: \"2d5dfb0d-f230-4669-b8a4-f56e99517b2c\") " pod="openshift-route-controller-manager/route-controller-manager-5c5788454d-gqwh4" Feb 16 22:49:11 crc kubenswrapper[4865]: I0216 22:49:11.045408 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d5dfb0d-f230-4669-b8a4-f56e99517b2c-serving-cert\") pod \"route-controller-manager-5c5788454d-gqwh4\" (UID: \"2d5dfb0d-f230-4669-b8a4-f56e99517b2c\") " pod="openshift-route-controller-manager/route-controller-manager-5c5788454d-gqwh4" Feb 16 22:49:11 crc kubenswrapper[4865]: I0216 22:49:11.056870 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46xj7\" (UniqueName: \"kubernetes.io/projected/2d5dfb0d-f230-4669-b8a4-f56e99517b2c-kube-api-access-46xj7\") pod \"route-controller-manager-5c5788454d-gqwh4\" (UID: \"2d5dfb0d-f230-4669-b8a4-f56e99517b2c\") " pod="openshift-route-controller-manager/route-controller-manager-5c5788454d-gqwh4" Feb 16 22:49:11 crc kubenswrapper[4865]: I0216 22:49:11.169114 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c5788454d-gqwh4" Feb 16 22:49:11 crc kubenswrapper[4865]: I0216 22:49:11.631351 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c5788454d-gqwh4"] Feb 16 22:49:11 crc kubenswrapper[4865]: W0216 22:49:11.640114 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d5dfb0d_f230_4669_b8a4_f56e99517b2c.slice/crio-cdca124b5bb8067d67fab508b1219f7cd1fc9eb964e02ac3b86af97fecd9cbbd WatchSource:0}: Error finding container cdca124b5bb8067d67fab508b1219f7cd1fc9eb964e02ac3b86af97fecd9cbbd: Status 404 returned error can't find the container with id cdca124b5bb8067d67fab508b1219f7cd1fc9eb964e02ac3b86af97fecd9cbbd Feb 16 22:49:11 crc kubenswrapper[4865]: I0216 22:49:11.764730 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 16 22:49:11 crc kubenswrapper[4865]: I0216 22:49:11.765808 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 22:49:11 crc kubenswrapper[4865]: I0216 22:49:11.768458 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 16 22:49:11 crc kubenswrapper[4865]: I0216 22:49:11.768616 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 16 22:49:11 crc kubenswrapper[4865]: I0216 22:49:11.776156 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 16 22:49:11 crc kubenswrapper[4865]: I0216 22:49:11.852298 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/febc96d3-3f95-454a-aa08-6f058be2ba3a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"febc96d3-3f95-454a-aa08-6f058be2ba3a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 22:49:11 crc kubenswrapper[4865]: I0216 22:49:11.852395 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/febc96d3-3f95-454a-aa08-6f058be2ba3a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"febc96d3-3f95-454a-aa08-6f058be2ba3a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 22:49:11 crc kubenswrapper[4865]: I0216 22:49:11.954933 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/febc96d3-3f95-454a-aa08-6f058be2ba3a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"febc96d3-3f95-454a-aa08-6f058be2ba3a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 22:49:11 crc kubenswrapper[4865]: I0216 22:49:11.955092 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/febc96d3-3f95-454a-aa08-6f058be2ba3a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"febc96d3-3f95-454a-aa08-6f058be2ba3a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 22:49:11 crc kubenswrapper[4865]: I0216 22:49:11.955211 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/febc96d3-3f95-454a-aa08-6f058be2ba3a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"febc96d3-3f95-454a-aa08-6f058be2ba3a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 22:49:11 crc kubenswrapper[4865]: I0216 22:49:11.977171 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/febc96d3-3f95-454a-aa08-6f058be2ba3a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"febc96d3-3f95-454a-aa08-6f058be2ba3a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 22:49:11 crc kubenswrapper[4865]: I0216 22:49:11.991324 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c5788454d-gqwh4" event={"ID":"2d5dfb0d-f230-4669-b8a4-f56e99517b2c","Type":"ContainerStarted","Data":"cdca124b5bb8067d67fab508b1219f7cd1fc9eb964e02ac3b86af97fecd9cbbd"} Feb 16 22:49:11 crc kubenswrapper[4865]: I0216 22:49:11.994919 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r57lx" event={"ID":"63128d09-9bb2-40e2-8c5e-d16c8f54d2c2","Type":"ContainerStarted","Data":"e9080c1c6feeaf50607c9a99b6c532fa116ba78757abac6d0ebe3365c1a696bd"} Feb 16 22:49:12 crc kubenswrapper[4865]: I0216 22:49:12.020760 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-r57lx" podStartSLOduration=4.002590425 podStartE2EDuration="45.020730217s" podCreationTimestamp="2026-02-16 22:48:27 +0000 UTC" firstStartedPulling="2026-02-16 
22:48:29.862267959 +0000 UTC m=+150.185974920" lastFinishedPulling="2026-02-16 22:49:10.880407751 +0000 UTC m=+191.204114712" observedRunningTime="2026-02-16 22:49:12.020192822 +0000 UTC m=+192.343899783" watchObservedRunningTime="2026-02-16 22:49:12.020730217 +0000 UTC m=+192.344437178" Feb 16 22:49:12 crc kubenswrapper[4865]: I0216 22:49:12.084619 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 22:49:12 crc kubenswrapper[4865]: I0216 22:49:12.572023 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 16 22:49:12 crc kubenswrapper[4865]: W0216 22:49:12.579076 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podfebc96d3_3f95_454a_aa08_6f058be2ba3a.slice/crio-173d1b0a2303cbecfb9da3d04f619320c44ea19a8befc0178047770eaa0b1b54 WatchSource:0}: Error finding container 173d1b0a2303cbecfb9da3d04f619320c44ea19a8befc0178047770eaa0b1b54: Status 404 returned error can't find the container with id 173d1b0a2303cbecfb9da3d04f619320c44ea19a8befc0178047770eaa0b1b54 Feb 16 22:49:13 crc kubenswrapper[4865]: I0216 22:49:13.007012 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7kqlb" event={"ID":"d04001fb-b937-477f-b495-17f20e7cf07b","Type":"ContainerStarted","Data":"469fc653279898bf7550ee14f62fe67bc8f7018dace08f1636b032c4a7f8db6f"} Feb 16 22:49:13 crc kubenswrapper[4865]: I0216 22:49:13.009387 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pfrgn" event={"ID":"b98a89b4-7f44-411e-a2e3-b260ad781e89","Type":"ContainerStarted","Data":"d8550138f7b577571cd6c3b67faf8fd48f5b2d3237ca3e15e62a9138a1b9e35b"} Feb 16 22:49:13 crc kubenswrapper[4865]: I0216 22:49:13.011013 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-5c5788454d-gqwh4" event={"ID":"2d5dfb0d-f230-4669-b8a4-f56e99517b2c","Type":"ContainerStarted","Data":"8175eb00bed796ecf6eb8b154ecc821a3667ec5f82416cdb53ada08a873a5108"} Feb 16 22:49:13 crc kubenswrapper[4865]: I0216 22:49:13.012252 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5c5788454d-gqwh4" Feb 16 22:49:13 crc kubenswrapper[4865]: I0216 22:49:13.013123 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"febc96d3-3f95-454a-aa08-6f058be2ba3a","Type":"ContainerStarted","Data":"173d1b0a2303cbecfb9da3d04f619320c44ea19a8befc0178047770eaa0b1b54"} Feb 16 22:49:13 crc kubenswrapper[4865]: I0216 22:49:13.018321 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5c5788454d-gqwh4" Feb 16 22:49:13 crc kubenswrapper[4865]: I0216 22:49:13.036232 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7kqlb" podStartSLOduration=3.821686874 podStartE2EDuration="44.036208524s" podCreationTimestamp="2026-02-16 22:48:29 +0000 UTC" firstStartedPulling="2026-02-16 22:48:31.156574249 +0000 UTC m=+151.480281200" lastFinishedPulling="2026-02-16 22:49:11.371095889 +0000 UTC m=+191.694802850" observedRunningTime="2026-02-16 22:49:13.031234464 +0000 UTC m=+193.354941425" watchObservedRunningTime="2026-02-16 22:49:13.036208524 +0000 UTC m=+193.359915485" Feb 16 22:49:13 crc kubenswrapper[4865]: I0216 22:49:13.053311 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5c5788454d-gqwh4" podStartSLOduration=9.053267805 podStartE2EDuration="9.053267805s" podCreationTimestamp="2026-02-16 22:49:04 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:49:13.052876974 +0000 UTC m=+193.376583935" watchObservedRunningTime="2026-02-16 22:49:13.053267805 +0000 UTC m=+193.376974756" Feb 16 22:49:13 crc kubenswrapper[4865]: I0216 22:49:13.436904 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pfrgn" podStartSLOduration=3.801479725 podStartE2EDuration="44.436875213s" podCreationTimestamp="2026-02-16 22:48:29 +0000 UTC" firstStartedPulling="2026-02-16 22:48:31.295380013 +0000 UTC m=+151.619086974" lastFinishedPulling="2026-02-16 22:49:11.930775501 +0000 UTC m=+192.254482462" observedRunningTime="2026-02-16 22:49:13.100710923 +0000 UTC m=+193.424417884" watchObservedRunningTime="2026-02-16 22:49:13.436875213 +0000 UTC m=+193.760582174" Feb 16 22:49:14 crc kubenswrapper[4865]: I0216 22:49:14.021238 4865 generic.go:334] "Generic (PLEG): container finished" podID="febc96d3-3f95-454a-aa08-6f058be2ba3a" containerID="9bb3856fbbdc37f85199cdb3d20282f881ed6b2074a10ef133f1b07297ad1432" exitCode=0 Feb 16 22:49:14 crc kubenswrapper[4865]: I0216 22:49:14.021402 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"febc96d3-3f95-454a-aa08-6f058be2ba3a","Type":"ContainerDied","Data":"9bb3856fbbdc37f85199cdb3d20282f881ed6b2074a10ef133f1b07297ad1432"} Feb 16 22:49:15 crc kubenswrapper[4865]: I0216 22:49:15.029374 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kk986" event={"ID":"8b55c2ce-ae41-4b11-925e-b6085f288345","Type":"ContainerStarted","Data":"835f9e0febcb979891b5e547ba7facf68bc53e9e4df1adf73e30dea3b534f06a"} Feb 16 22:49:15 crc kubenswrapper[4865]: I0216 22:49:15.412114 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 22:49:15 crc kubenswrapper[4865]: I0216 22:49:15.510987 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/febc96d3-3f95-454a-aa08-6f058be2ba3a-kubelet-dir\") pod \"febc96d3-3f95-454a-aa08-6f058be2ba3a\" (UID: \"febc96d3-3f95-454a-aa08-6f058be2ba3a\") " Feb 16 22:49:15 crc kubenswrapper[4865]: I0216 22:49:15.511110 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/febc96d3-3f95-454a-aa08-6f058be2ba3a-kube-api-access\") pod \"febc96d3-3f95-454a-aa08-6f058be2ba3a\" (UID: \"febc96d3-3f95-454a-aa08-6f058be2ba3a\") " Feb 16 22:49:15 crc kubenswrapper[4865]: I0216 22:49:15.511352 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/febc96d3-3f95-454a-aa08-6f058be2ba3a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "febc96d3-3f95-454a-aa08-6f058be2ba3a" (UID: "febc96d3-3f95-454a-aa08-6f058be2ba3a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 22:49:15 crc kubenswrapper[4865]: I0216 22:49:15.511496 4865 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/febc96d3-3f95-454a-aa08-6f058be2ba3a-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 16 22:49:15 crc kubenswrapper[4865]: I0216 22:49:15.518110 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/febc96d3-3f95-454a-aa08-6f058be2ba3a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "febc96d3-3f95-454a-aa08-6f058be2ba3a" (UID: "febc96d3-3f95-454a-aa08-6f058be2ba3a"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:49:15 crc kubenswrapper[4865]: I0216 22:49:15.612547 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/febc96d3-3f95-454a-aa08-6f058be2ba3a-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 22:49:15 crc kubenswrapper[4865]: I0216 22:49:15.664777 4865 patch_prober.go:28] interesting pod/machine-config-daemon-7sl6f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 22:49:15 crc kubenswrapper[4865]: I0216 22:49:15.664879 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 22:49:16 crc kubenswrapper[4865]: I0216 22:49:16.036316 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"febc96d3-3f95-454a-aa08-6f058be2ba3a","Type":"ContainerDied","Data":"173d1b0a2303cbecfb9da3d04f619320c44ea19a8befc0178047770eaa0b1b54"} Feb 16 22:49:16 crc kubenswrapper[4865]: I0216 22:49:16.036384 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 22:49:16 crc kubenswrapper[4865]: I0216 22:49:16.036393 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="173d1b0a2303cbecfb9da3d04f619320c44ea19a8befc0178047770eaa0b1b54" Feb 16 22:49:16 crc kubenswrapper[4865]: I0216 22:49:16.043222 4865 generic.go:334] "Generic (PLEG): container finished" podID="8b55c2ce-ae41-4b11-925e-b6085f288345" containerID="835f9e0febcb979891b5e547ba7facf68bc53e9e4df1adf73e30dea3b534f06a" exitCode=0 Feb 16 22:49:16 crc kubenswrapper[4865]: I0216 22:49:16.043310 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kk986" event={"ID":"8b55c2ce-ae41-4b11-925e-b6085f288345","Type":"ContainerDied","Data":"835f9e0febcb979891b5e547ba7facf68bc53e9e4df1adf73e30dea3b534f06a"} Feb 16 22:49:17 crc kubenswrapper[4865]: I0216 22:49:17.051239 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kk986" event={"ID":"8b55c2ce-ae41-4b11-925e-b6085f288345","Type":"ContainerStarted","Data":"9fe13088b5e65e1272bdf1ad03fdf178cc471820c4f3adcaafc3914d990c0e83"} Feb 16 22:49:17 crc kubenswrapper[4865]: I0216 22:49:17.070765 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kk986" podStartSLOduration=3.866048416 podStartE2EDuration="49.070739087s" podCreationTimestamp="2026-02-16 22:48:28 +0000 UTC" firstStartedPulling="2026-02-16 22:48:31.242302896 +0000 UTC m=+151.566009857" lastFinishedPulling="2026-02-16 22:49:16.446993557 +0000 UTC m=+196.770700528" observedRunningTime="2026-02-16 22:49:17.06799146 +0000 UTC m=+197.391698421" watchObservedRunningTime="2026-02-16 22:49:17.070739087 +0000 UTC m=+197.394446048" Feb 16 22:49:17 crc kubenswrapper[4865]: I0216 22:49:17.760021 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 16 
22:49:17 crc kubenswrapper[4865]: E0216 22:49:17.760351 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="febc96d3-3f95-454a-aa08-6f058be2ba3a" containerName="pruner" Feb 16 22:49:17 crc kubenswrapper[4865]: I0216 22:49:17.760367 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="febc96d3-3f95-454a-aa08-6f058be2ba3a" containerName="pruner" Feb 16 22:49:17 crc kubenswrapper[4865]: I0216 22:49:17.760507 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="febc96d3-3f95-454a-aa08-6f058be2ba3a" containerName="pruner" Feb 16 22:49:17 crc kubenswrapper[4865]: I0216 22:49:17.761022 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 16 22:49:17 crc kubenswrapper[4865]: I0216 22:49:17.763656 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 16 22:49:17 crc kubenswrapper[4865]: I0216 22:49:17.765626 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 16 22:49:17 crc kubenswrapper[4865]: I0216 22:49:17.774767 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 16 22:49:17 crc kubenswrapper[4865]: I0216 22:49:17.849727 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/42505af0-c68a-4a99-926a-25a0dee244de-kube-api-access\") pod \"installer-9-crc\" (UID: \"42505af0-c68a-4a99-926a-25a0dee244de\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 22:49:17 crc kubenswrapper[4865]: I0216 22:49:17.849772 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/42505af0-c68a-4a99-926a-25a0dee244de-kubelet-dir\") pod \"installer-9-crc\" (UID: 
\"42505af0-c68a-4a99-926a-25a0dee244de\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 22:49:17 crc kubenswrapper[4865]: I0216 22:49:17.849835 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/42505af0-c68a-4a99-926a-25a0dee244de-var-lock\") pod \"installer-9-crc\" (UID: \"42505af0-c68a-4a99-926a-25a0dee244de\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 22:49:17 crc kubenswrapper[4865]: I0216 22:49:17.951494 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/42505af0-c68a-4a99-926a-25a0dee244de-var-lock\") pod \"installer-9-crc\" (UID: \"42505af0-c68a-4a99-926a-25a0dee244de\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 22:49:17 crc kubenswrapper[4865]: I0216 22:49:17.951586 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/42505af0-c68a-4a99-926a-25a0dee244de-kube-api-access\") pod \"installer-9-crc\" (UID: \"42505af0-c68a-4a99-926a-25a0dee244de\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 22:49:17 crc kubenswrapper[4865]: I0216 22:49:17.951611 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/42505af0-c68a-4a99-926a-25a0dee244de-kubelet-dir\") pod \"installer-9-crc\" (UID: \"42505af0-c68a-4a99-926a-25a0dee244de\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 22:49:17 crc kubenswrapper[4865]: I0216 22:49:17.951704 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/42505af0-c68a-4a99-926a-25a0dee244de-var-lock\") pod \"installer-9-crc\" (UID: \"42505af0-c68a-4a99-926a-25a0dee244de\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 22:49:17 crc kubenswrapper[4865]: I0216 
22:49:17.951736 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/42505af0-c68a-4a99-926a-25a0dee244de-kubelet-dir\") pod \"installer-9-crc\" (UID: \"42505af0-c68a-4a99-926a-25a0dee244de\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 22:49:17 crc kubenswrapper[4865]: I0216 22:49:17.975380 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/42505af0-c68a-4a99-926a-25a0dee244de-kube-api-access\") pod \"installer-9-crc\" (UID: \"42505af0-c68a-4a99-926a-25a0dee244de\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 22:49:18 crc kubenswrapper[4865]: I0216 22:49:18.087123 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 16 22:49:18 crc kubenswrapper[4865]: I0216 22:49:18.244700 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-r57lx" Feb 16 22:49:18 crc kubenswrapper[4865]: I0216 22:49:18.244982 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-r57lx" Feb 16 22:49:18 crc kubenswrapper[4865]: I0216 22:49:18.493124 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-r57lx" Feb 16 22:49:18 crc kubenswrapper[4865]: I0216 22:49:18.526141 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-kk986" Feb 16 22:49:18 crc kubenswrapper[4865]: I0216 22:49:18.526213 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kk986" Feb 16 22:49:18 crc kubenswrapper[4865]: I0216 22:49:18.576505 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 16 
22:49:18 crc kubenswrapper[4865]: W0216 22:49:18.581824 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod42505af0_c68a_4a99_926a_25a0dee244de.slice/crio-1be9c253148b729aca4c97cf7e7056b2e15e463e4bdfa57769e09ed98beb7b45 WatchSource:0}: Error finding container 1be9c253148b729aca4c97cf7e7056b2e15e463e4bdfa57769e09ed98beb7b45: Status 404 returned error can't find the container with id 1be9c253148b729aca4c97cf7e7056b2e15e463e4bdfa57769e09ed98beb7b45 Feb 16 22:49:18 crc kubenswrapper[4865]: I0216 22:49:18.585414 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kk986" Feb 16 22:49:19 crc kubenswrapper[4865]: I0216 22:49:19.064070 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"42505af0-c68a-4a99-926a-25a0dee244de","Type":"ContainerStarted","Data":"1be9c253148b729aca4c97cf7e7056b2e15e463e4bdfa57769e09ed98beb7b45"} Feb 16 22:49:19 crc kubenswrapper[4865]: I0216 22:49:19.107603 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-r57lx" Feb 16 22:49:19 crc kubenswrapper[4865]: I0216 22:49:19.899273 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pfrgn" Feb 16 22:49:19 crc kubenswrapper[4865]: I0216 22:49:19.899350 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pfrgn" Feb 16 22:49:19 crc kubenswrapper[4865]: I0216 22:49:19.947514 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pfrgn" Feb 16 22:49:20 crc kubenswrapper[4865]: I0216 22:49:20.072525 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" 
event={"ID":"42505af0-c68a-4a99-926a-25a0dee244de","Type":"ContainerStarted","Data":"5352fac7514b2173a99c3fd20bf657966feacb14ee0c75228c3af65406592ebf"} Feb 16 22:49:20 crc kubenswrapper[4865]: I0216 22:49:20.098897 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=3.09887045 podStartE2EDuration="3.09887045s" podCreationTimestamp="2026-02-16 22:49:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:49:20.090678689 +0000 UTC m=+200.414385670" watchObservedRunningTime="2026-02-16 22:49:20.09887045 +0000 UTC m=+200.422577401" Feb 16 22:49:20 crc kubenswrapper[4865]: I0216 22:49:20.115145 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pfrgn" Feb 16 22:49:20 crc kubenswrapper[4865]: I0216 22:49:20.283177 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7kqlb" Feb 16 22:49:20 crc kubenswrapper[4865]: I0216 22:49:20.283254 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7kqlb" Feb 16 22:49:20 crc kubenswrapper[4865]: I0216 22:49:20.332750 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7kqlb" Feb 16 22:49:20 crc kubenswrapper[4865]: I0216 22:49:20.731045 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r57lx"] Feb 16 22:49:21 crc kubenswrapper[4865]: I0216 22:49:21.088109 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-r57lx" podUID="63128d09-9bb2-40e2-8c5e-d16c8f54d2c2" containerName="registry-server" 
containerID="cri-o://e9080c1c6feeaf50607c9a99b6c532fa116ba78757abac6d0ebe3365c1a696bd" gracePeriod=2 Feb 16 22:49:21 crc kubenswrapper[4865]: I0216 22:49:21.144887 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7kqlb" Feb 16 22:49:22 crc kubenswrapper[4865]: I0216 22:49:22.097539 4865 generic.go:334] "Generic (PLEG): container finished" podID="63128d09-9bb2-40e2-8c5e-d16c8f54d2c2" containerID="e9080c1c6feeaf50607c9a99b6c532fa116ba78757abac6d0ebe3365c1a696bd" exitCode=0 Feb 16 22:49:22 crc kubenswrapper[4865]: I0216 22:49:22.097632 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r57lx" event={"ID":"63128d09-9bb2-40e2-8c5e-d16c8f54d2c2","Type":"ContainerDied","Data":"e9080c1c6feeaf50607c9a99b6c532fa116ba78757abac6d0ebe3365c1a696bd"} Feb 16 22:49:22 crc kubenswrapper[4865]: I0216 22:49:22.377964 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r57lx" Feb 16 22:49:22 crc kubenswrapper[4865]: I0216 22:49:22.438444 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63128d09-9bb2-40e2-8c5e-d16c8f54d2c2-catalog-content\") pod \"63128d09-9bb2-40e2-8c5e-d16c8f54d2c2\" (UID: \"63128d09-9bb2-40e2-8c5e-d16c8f54d2c2\") " Feb 16 22:49:22 crc kubenswrapper[4865]: I0216 22:49:22.438577 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63128d09-9bb2-40e2-8c5e-d16c8f54d2c2-utilities\") pod \"63128d09-9bb2-40e2-8c5e-d16c8f54d2c2\" (UID: \"63128d09-9bb2-40e2-8c5e-d16c8f54d2c2\") " Feb 16 22:49:22 crc kubenswrapper[4865]: I0216 22:49:22.438642 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkf99\" (UniqueName: 
\"kubernetes.io/projected/63128d09-9bb2-40e2-8c5e-d16c8f54d2c2-kube-api-access-xkf99\") pod \"63128d09-9bb2-40e2-8c5e-d16c8f54d2c2\" (UID: \"63128d09-9bb2-40e2-8c5e-d16c8f54d2c2\") " Feb 16 22:49:22 crc kubenswrapper[4865]: I0216 22:49:22.439653 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63128d09-9bb2-40e2-8c5e-d16c8f54d2c2-utilities" (OuterVolumeSpecName: "utilities") pod "63128d09-9bb2-40e2-8c5e-d16c8f54d2c2" (UID: "63128d09-9bb2-40e2-8c5e-d16c8f54d2c2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:49:22 crc kubenswrapper[4865]: I0216 22:49:22.444861 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63128d09-9bb2-40e2-8c5e-d16c8f54d2c2-kube-api-access-xkf99" (OuterVolumeSpecName: "kube-api-access-xkf99") pod "63128d09-9bb2-40e2-8c5e-d16c8f54d2c2" (UID: "63128d09-9bb2-40e2-8c5e-d16c8f54d2c2"). InnerVolumeSpecName "kube-api-access-xkf99". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:49:22 crc kubenswrapper[4865]: I0216 22:49:22.518092 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63128d09-9bb2-40e2-8c5e-d16c8f54d2c2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "63128d09-9bb2-40e2-8c5e-d16c8f54d2c2" (UID: "63128d09-9bb2-40e2-8c5e-d16c8f54d2c2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:49:22 crc kubenswrapper[4865]: I0216 22:49:22.541646 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63128d09-9bb2-40e2-8c5e-d16c8f54d2c2-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 22:49:22 crc kubenswrapper[4865]: I0216 22:49:22.541685 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63128d09-9bb2-40e2-8c5e-d16c8f54d2c2-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 22:49:22 crc kubenswrapper[4865]: I0216 22:49:22.541724 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkf99\" (UniqueName: \"kubernetes.io/projected/63128d09-9bb2-40e2-8c5e-d16c8f54d2c2-kube-api-access-xkf99\") on node \"crc\" DevicePath \"\"" Feb 16 22:49:23 crc kubenswrapper[4865]: I0216 22:49:23.127638 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r57lx" event={"ID":"63128d09-9bb2-40e2-8c5e-d16c8f54d2c2","Type":"ContainerDied","Data":"433142a75966b8240cad702dbd6eee510178c2c9a6676280df26ec50367cfc37"} Feb 16 22:49:23 crc kubenswrapper[4865]: I0216 22:49:23.127722 4865 scope.go:117] "RemoveContainer" containerID="e9080c1c6feeaf50607c9a99b6c532fa116ba78757abac6d0ebe3365c1a696bd" Feb 16 22:49:23 crc kubenswrapper[4865]: I0216 22:49:23.127898 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-r57lx" Feb 16 22:49:23 crc kubenswrapper[4865]: I0216 22:49:23.131702 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7kqlb"] Feb 16 22:49:23 crc kubenswrapper[4865]: I0216 22:49:23.133510 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s2z5v" event={"ID":"f6cfa25f-5974-4b2e-9df0-b0e98112b561","Type":"ContainerStarted","Data":"a62ef48dbfdbdd86b7d8747fee92cc34d2c9dbd8d8952a5d089a123a60cfe5e4"} Feb 16 22:49:23 crc kubenswrapper[4865]: I0216 22:49:23.135084 4865 generic.go:334] "Generic (PLEG): container finished" podID="5196bfb6-4d27-4d41-8310-8efb2b8997bd" containerID="36b31593f2e5699aeb68dc1b789ba115189f210507a564013d4d5c97d2b2c033" exitCode=0 Feb 16 22:49:23 crc kubenswrapper[4865]: I0216 22:49:23.135250 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7kqlb" podUID="d04001fb-b937-477f-b495-17f20e7cf07b" containerName="registry-server" containerID="cri-o://469fc653279898bf7550ee14f62fe67bc8f7018dace08f1636b032c4a7f8db6f" gracePeriod=2 Feb 16 22:49:23 crc kubenswrapper[4865]: I0216 22:49:23.135482 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d2l24" event={"ID":"5196bfb6-4d27-4d41-8310-8efb2b8997bd","Type":"ContainerDied","Data":"36b31593f2e5699aeb68dc1b789ba115189f210507a564013d4d5c97d2b2c033"} Feb 16 22:49:23 crc kubenswrapper[4865]: I0216 22:49:23.155674 4865 scope.go:117] "RemoveContainer" containerID="0fdea5c9a90df7c9107e6b3341fa9a3b6b9cf8da46612f4baeee11544c8c4996" Feb 16 22:49:23 crc kubenswrapper[4865]: I0216 22:49:23.213673 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r57lx"] Feb 16 22:49:23 crc kubenswrapper[4865]: I0216 22:49:23.215552 4865 scope.go:117] "RemoveContainer" 
containerID="09ea651676573cba4d06506ef667c068fc0efca0a94abca3587377995422fd41" Feb 16 22:49:23 crc kubenswrapper[4865]: I0216 22:49:23.216006 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-r57lx"] Feb 16 22:49:23 crc kubenswrapper[4865]: I0216 22:49:23.569312 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7kqlb" Feb 16 22:49:23 crc kubenswrapper[4865]: I0216 22:49:23.664007 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vj7fv\" (UniqueName: \"kubernetes.io/projected/d04001fb-b937-477f-b495-17f20e7cf07b-kube-api-access-vj7fv\") pod \"d04001fb-b937-477f-b495-17f20e7cf07b\" (UID: \"d04001fb-b937-477f-b495-17f20e7cf07b\") " Feb 16 22:49:23 crc kubenswrapper[4865]: I0216 22:49:23.664097 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d04001fb-b937-477f-b495-17f20e7cf07b-catalog-content\") pod \"d04001fb-b937-477f-b495-17f20e7cf07b\" (UID: \"d04001fb-b937-477f-b495-17f20e7cf07b\") " Feb 16 22:49:23 crc kubenswrapper[4865]: I0216 22:49:23.664234 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d04001fb-b937-477f-b495-17f20e7cf07b-utilities\") pod \"d04001fb-b937-477f-b495-17f20e7cf07b\" (UID: \"d04001fb-b937-477f-b495-17f20e7cf07b\") " Feb 16 22:49:23 crc kubenswrapper[4865]: I0216 22:49:23.665068 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d04001fb-b937-477f-b495-17f20e7cf07b-utilities" (OuterVolumeSpecName: "utilities") pod "d04001fb-b937-477f-b495-17f20e7cf07b" (UID: "d04001fb-b937-477f-b495-17f20e7cf07b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:49:23 crc kubenswrapper[4865]: I0216 22:49:23.675177 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d04001fb-b937-477f-b495-17f20e7cf07b-kube-api-access-vj7fv" (OuterVolumeSpecName: "kube-api-access-vj7fv") pod "d04001fb-b937-477f-b495-17f20e7cf07b" (UID: "d04001fb-b937-477f-b495-17f20e7cf07b"). InnerVolumeSpecName "kube-api-access-vj7fv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:49:23 crc kubenswrapper[4865]: I0216 22:49:23.707987 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d04001fb-b937-477f-b495-17f20e7cf07b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d04001fb-b937-477f-b495-17f20e7cf07b" (UID: "d04001fb-b937-477f-b495-17f20e7cf07b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:49:23 crc kubenswrapper[4865]: I0216 22:49:23.765701 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vj7fv\" (UniqueName: \"kubernetes.io/projected/d04001fb-b937-477f-b495-17f20e7cf07b-kube-api-access-vj7fv\") on node \"crc\" DevicePath \"\"" Feb 16 22:49:23 crc kubenswrapper[4865]: I0216 22:49:23.765740 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d04001fb-b937-477f-b495-17f20e7cf07b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 22:49:23 crc kubenswrapper[4865]: I0216 22:49:23.765749 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d04001fb-b937-477f-b495-17f20e7cf07b-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 22:49:24 crc kubenswrapper[4865]: I0216 22:49:24.108960 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-86df44649c-gzlw8"] Feb 16 22:49:24 crc 
kubenswrapper[4865]: I0216 22:49:24.109317 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-86df44649c-gzlw8" podUID="d3f282f9-4063-41c9-b1c9-21fd5c1b365b" containerName="controller-manager" containerID="cri-o://c20a6f5072dd84e14d16fd29a82c39ac84bcc43963a7a63a521b042172ab29a2" gracePeriod=30 Feb 16 22:49:24 crc kubenswrapper[4865]: I0216 22:49:24.126329 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c5788454d-gqwh4"] Feb 16 22:49:24 crc kubenswrapper[4865]: I0216 22:49:24.126675 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5c5788454d-gqwh4" podUID="2d5dfb0d-f230-4669-b8a4-f56e99517b2c" containerName="route-controller-manager" containerID="cri-o://8175eb00bed796ecf6eb8b154ecc821a3667ec5f82416cdb53ada08a873a5108" gracePeriod=30 Feb 16 22:49:24 crc kubenswrapper[4865]: I0216 22:49:24.144960 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d2l24" event={"ID":"5196bfb6-4d27-4d41-8310-8efb2b8997bd","Type":"ContainerStarted","Data":"8756846c21c208ebd1b151e59120d3aa46b3d44f60e55ad3c24218c3544e0415"} Feb 16 22:49:24 crc kubenswrapper[4865]: I0216 22:49:24.148672 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xsgzz" event={"ID":"04b1951f-573b-4bf5-808d-9834250021b6","Type":"ContainerStarted","Data":"c4f2a9bfcd4b54a6ffc60fe1471c2ffadfe0123b55eddfca0b841b525305c382"} Feb 16 22:49:24 crc kubenswrapper[4865]: I0216 22:49:24.150814 4865 generic.go:334] "Generic (PLEG): container finished" podID="bea6b458-5aaa-4764-9f82-24ceff943498" containerID="048ecf97a3cc95b724e40732b6bc1a9b1f53de615f741ae26a67e4c5c357cdf4" exitCode=0 Feb 16 22:49:24 crc kubenswrapper[4865]: I0216 22:49:24.150880 4865 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-fsmg7" event={"ID":"bea6b458-5aaa-4764-9f82-24ceff943498","Type":"ContainerDied","Data":"048ecf97a3cc95b724e40732b6bc1a9b1f53de615f741ae26a67e4c5c357cdf4"} Feb 16 22:49:24 crc kubenswrapper[4865]: I0216 22:49:24.155532 4865 generic.go:334] "Generic (PLEG): container finished" podID="d04001fb-b937-477f-b495-17f20e7cf07b" containerID="469fc653279898bf7550ee14f62fe67bc8f7018dace08f1636b032c4a7f8db6f" exitCode=0 Feb 16 22:49:24 crc kubenswrapper[4865]: I0216 22:49:24.155633 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7kqlb" event={"ID":"d04001fb-b937-477f-b495-17f20e7cf07b","Type":"ContainerDied","Data":"469fc653279898bf7550ee14f62fe67bc8f7018dace08f1636b032c4a7f8db6f"} Feb 16 22:49:24 crc kubenswrapper[4865]: I0216 22:49:24.155673 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7kqlb" event={"ID":"d04001fb-b937-477f-b495-17f20e7cf07b","Type":"ContainerDied","Data":"2e64714a5ad44648c9bfc5facc358e38fd8b39879a496cd66c35171ea86b4b0c"} Feb 16 22:49:24 crc kubenswrapper[4865]: I0216 22:49:24.155698 4865 scope.go:117] "RemoveContainer" containerID="469fc653279898bf7550ee14f62fe67bc8f7018dace08f1636b032c4a7f8db6f" Feb 16 22:49:24 crc kubenswrapper[4865]: I0216 22:49:24.155814 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7kqlb" Feb 16 22:49:24 crc kubenswrapper[4865]: I0216 22:49:24.162893 4865 generic.go:334] "Generic (PLEG): container finished" podID="f6cfa25f-5974-4b2e-9df0-b0e98112b561" containerID="a62ef48dbfdbdd86b7d8747fee92cc34d2c9dbd8d8952a5d089a123a60cfe5e4" exitCode=0 Feb 16 22:49:24 crc kubenswrapper[4865]: I0216 22:49:24.163095 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s2z5v" event={"ID":"f6cfa25f-5974-4b2e-9df0-b0e98112b561","Type":"ContainerDied","Data":"a62ef48dbfdbdd86b7d8747fee92cc34d2c9dbd8d8952a5d089a123a60cfe5e4"} Feb 16 22:49:24 crc kubenswrapper[4865]: I0216 22:49:24.184780 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-d2l24" podStartSLOduration=3.589391525 podStartE2EDuration="57.184746721s" podCreationTimestamp="2026-02-16 22:48:27 +0000 UTC" firstStartedPulling="2026-02-16 22:48:29.95306377 +0000 UTC m=+150.276770721" lastFinishedPulling="2026-02-16 22:49:23.548418956 +0000 UTC m=+203.872125917" observedRunningTime="2026-02-16 22:49:24.18292453 +0000 UTC m=+204.506631491" watchObservedRunningTime="2026-02-16 22:49:24.184746721 +0000 UTC m=+204.508453682" Feb 16 22:49:24 crc kubenswrapper[4865]: I0216 22:49:24.247807 4865 scope.go:117] "RemoveContainer" containerID="2b670130106dcdcde6f546c9715dc12bdba4989df613811d4c697edce67bf7d4" Feb 16 22:49:24 crc kubenswrapper[4865]: I0216 22:49:24.301495 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7kqlb"] Feb 16 22:49:24 crc kubenswrapper[4865]: I0216 22:49:24.302157 4865 scope.go:117] "RemoveContainer" containerID="caf8affdf1b3fd29d41875ab20a7dc3aa456f0d8a8fcf8103fd2f85c227afacb" Feb 16 22:49:24 crc kubenswrapper[4865]: I0216 22:49:24.307563 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7kqlb"] Feb 16 
22:49:24 crc kubenswrapper[4865]: I0216 22:49:24.344372 4865 scope.go:117] "RemoveContainer" containerID="469fc653279898bf7550ee14f62fe67bc8f7018dace08f1636b032c4a7f8db6f" Feb 16 22:49:24 crc kubenswrapper[4865]: E0216 22:49:24.344990 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"469fc653279898bf7550ee14f62fe67bc8f7018dace08f1636b032c4a7f8db6f\": container with ID starting with 469fc653279898bf7550ee14f62fe67bc8f7018dace08f1636b032c4a7f8db6f not found: ID does not exist" containerID="469fc653279898bf7550ee14f62fe67bc8f7018dace08f1636b032c4a7f8db6f" Feb 16 22:49:24 crc kubenswrapper[4865]: I0216 22:49:24.345032 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"469fc653279898bf7550ee14f62fe67bc8f7018dace08f1636b032c4a7f8db6f"} err="failed to get container status \"469fc653279898bf7550ee14f62fe67bc8f7018dace08f1636b032c4a7f8db6f\": rpc error: code = NotFound desc = could not find container \"469fc653279898bf7550ee14f62fe67bc8f7018dace08f1636b032c4a7f8db6f\": container with ID starting with 469fc653279898bf7550ee14f62fe67bc8f7018dace08f1636b032c4a7f8db6f not found: ID does not exist" Feb 16 22:49:24 crc kubenswrapper[4865]: I0216 22:49:24.345069 4865 scope.go:117] "RemoveContainer" containerID="2b670130106dcdcde6f546c9715dc12bdba4989df613811d4c697edce67bf7d4" Feb 16 22:49:24 crc kubenswrapper[4865]: E0216 22:49:24.345710 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b670130106dcdcde6f546c9715dc12bdba4989df613811d4c697edce67bf7d4\": container with ID starting with 2b670130106dcdcde6f546c9715dc12bdba4989df613811d4c697edce67bf7d4 not found: ID does not exist" containerID="2b670130106dcdcde6f546c9715dc12bdba4989df613811d4c697edce67bf7d4" Feb 16 22:49:24 crc kubenswrapper[4865]: I0216 22:49:24.345736 4865 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"2b670130106dcdcde6f546c9715dc12bdba4989df613811d4c697edce67bf7d4"} err="failed to get container status \"2b670130106dcdcde6f546c9715dc12bdba4989df613811d4c697edce67bf7d4\": rpc error: code = NotFound desc = could not find container \"2b670130106dcdcde6f546c9715dc12bdba4989df613811d4c697edce67bf7d4\": container with ID starting with 2b670130106dcdcde6f546c9715dc12bdba4989df613811d4c697edce67bf7d4 not found: ID does not exist" Feb 16 22:49:24 crc kubenswrapper[4865]: I0216 22:49:24.345750 4865 scope.go:117] "RemoveContainer" containerID="caf8affdf1b3fd29d41875ab20a7dc3aa456f0d8a8fcf8103fd2f85c227afacb" Feb 16 22:49:24 crc kubenswrapper[4865]: E0216 22:49:24.346329 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"caf8affdf1b3fd29d41875ab20a7dc3aa456f0d8a8fcf8103fd2f85c227afacb\": container with ID starting with caf8affdf1b3fd29d41875ab20a7dc3aa456f0d8a8fcf8103fd2f85c227afacb not found: ID does not exist" containerID="caf8affdf1b3fd29d41875ab20a7dc3aa456f0d8a8fcf8103fd2f85c227afacb" Feb 16 22:49:24 crc kubenswrapper[4865]: I0216 22:49:24.346362 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"caf8affdf1b3fd29d41875ab20a7dc3aa456f0d8a8fcf8103fd2f85c227afacb"} err="failed to get container status \"caf8affdf1b3fd29d41875ab20a7dc3aa456f0d8a8fcf8103fd2f85c227afacb\": rpc error: code = NotFound desc = could not find container \"caf8affdf1b3fd29d41875ab20a7dc3aa456f0d8a8fcf8103fd2f85c227afacb\": container with ID starting with caf8affdf1b3fd29d41875ab20a7dc3aa456f0d8a8fcf8103fd2f85c227afacb not found: ID does not exist" Feb 16 22:49:24 crc kubenswrapper[4865]: I0216 22:49:24.428814 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63128d09-9bb2-40e2-8c5e-d16c8f54d2c2" path="/var/lib/kubelet/pods/63128d09-9bb2-40e2-8c5e-d16c8f54d2c2/volumes" Feb 16 22:49:24 crc kubenswrapper[4865]: 
I0216 22:49:24.429549 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d04001fb-b937-477f-b495-17f20e7cf07b" path="/var/lib/kubelet/pods/d04001fb-b937-477f-b495-17f20e7cf07b/volumes" Feb 16 22:49:24 crc kubenswrapper[4865]: I0216 22:49:24.695539 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-86df44649c-gzlw8" Feb 16 22:49:24 crc kubenswrapper[4865]: I0216 22:49:24.735010 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c5788454d-gqwh4" Feb 16 22:49:24 crc kubenswrapper[4865]: I0216 22:49:24.781105 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3f282f9-4063-41c9-b1c9-21fd5c1b365b-config\") pod \"d3f282f9-4063-41c9-b1c9-21fd5c1b365b\" (UID: \"d3f282f9-4063-41c9-b1c9-21fd5c1b365b\") " Feb 16 22:49:24 crc kubenswrapper[4865]: I0216 22:49:24.781230 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d3f282f9-4063-41c9-b1c9-21fd5c1b365b-proxy-ca-bundles\") pod \"d3f282f9-4063-41c9-b1c9-21fd5c1b365b\" (UID: \"d3f282f9-4063-41c9-b1c9-21fd5c1b365b\") " Feb 16 22:49:24 crc kubenswrapper[4865]: I0216 22:49:24.781346 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d3f282f9-4063-41c9-b1c9-21fd5c1b365b-client-ca\") pod \"d3f282f9-4063-41c9-b1c9-21fd5c1b365b\" (UID: \"d3f282f9-4063-41c9-b1c9-21fd5c1b365b\") " Feb 16 22:49:24 crc kubenswrapper[4865]: I0216 22:49:24.781424 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfzkq\" (UniqueName: \"kubernetes.io/projected/d3f282f9-4063-41c9-b1c9-21fd5c1b365b-kube-api-access-tfzkq\") pod 
\"d3f282f9-4063-41c9-b1c9-21fd5c1b365b\" (UID: \"d3f282f9-4063-41c9-b1c9-21fd5c1b365b\") " Feb 16 22:49:24 crc kubenswrapper[4865]: I0216 22:49:24.781452 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3f282f9-4063-41c9-b1c9-21fd5c1b365b-serving-cert\") pod \"d3f282f9-4063-41c9-b1c9-21fd5c1b365b\" (UID: \"d3f282f9-4063-41c9-b1c9-21fd5c1b365b\") " Feb 16 22:49:24 crc kubenswrapper[4865]: I0216 22:49:24.782504 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3f282f9-4063-41c9-b1c9-21fd5c1b365b-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d3f282f9-4063-41c9-b1c9-21fd5c1b365b" (UID: "d3f282f9-4063-41c9-b1c9-21fd5c1b365b"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:49:24 crc kubenswrapper[4865]: I0216 22:49:24.782895 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3f282f9-4063-41c9-b1c9-21fd5c1b365b-client-ca" (OuterVolumeSpecName: "client-ca") pod "d3f282f9-4063-41c9-b1c9-21fd5c1b365b" (UID: "d3f282f9-4063-41c9-b1c9-21fd5c1b365b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:49:24 crc kubenswrapper[4865]: I0216 22:49:24.783457 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3f282f9-4063-41c9-b1c9-21fd5c1b365b-config" (OuterVolumeSpecName: "config") pod "d3f282f9-4063-41c9-b1c9-21fd5c1b365b" (UID: "d3f282f9-4063-41c9-b1c9-21fd5c1b365b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:49:24 crc kubenswrapper[4865]: I0216 22:49:24.793842 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3f282f9-4063-41c9-b1c9-21fd5c1b365b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d3f282f9-4063-41c9-b1c9-21fd5c1b365b" (UID: "d3f282f9-4063-41c9-b1c9-21fd5c1b365b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:49:24 crc kubenswrapper[4865]: I0216 22:49:24.793998 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3f282f9-4063-41c9-b1c9-21fd5c1b365b-kube-api-access-tfzkq" (OuterVolumeSpecName: "kube-api-access-tfzkq") pod "d3f282f9-4063-41c9-b1c9-21fd5c1b365b" (UID: "d3f282f9-4063-41c9-b1c9-21fd5c1b365b"). InnerVolumeSpecName "kube-api-access-tfzkq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:49:24 crc kubenswrapper[4865]: I0216 22:49:24.882974 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2d5dfb0d-f230-4669-b8a4-f56e99517b2c-client-ca\") pod \"2d5dfb0d-f230-4669-b8a4-f56e99517b2c\" (UID: \"2d5dfb0d-f230-4669-b8a4-f56e99517b2c\") " Feb 16 22:49:24 crc kubenswrapper[4865]: I0216 22:49:24.883082 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d5dfb0d-f230-4669-b8a4-f56e99517b2c-config\") pod \"2d5dfb0d-f230-4669-b8a4-f56e99517b2c\" (UID: \"2d5dfb0d-f230-4669-b8a4-f56e99517b2c\") " Feb 16 22:49:24 crc kubenswrapper[4865]: I0216 22:49:24.883152 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46xj7\" (UniqueName: \"kubernetes.io/projected/2d5dfb0d-f230-4669-b8a4-f56e99517b2c-kube-api-access-46xj7\") pod \"2d5dfb0d-f230-4669-b8a4-f56e99517b2c\" (UID: \"2d5dfb0d-f230-4669-b8a4-f56e99517b2c\") " 
Feb 16 22:49:24 crc kubenswrapper[4865]: I0216 22:49:24.883196 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d5dfb0d-f230-4669-b8a4-f56e99517b2c-serving-cert\") pod \"2d5dfb0d-f230-4669-b8a4-f56e99517b2c\" (UID: \"2d5dfb0d-f230-4669-b8a4-f56e99517b2c\") " Feb 16 22:49:24 crc kubenswrapper[4865]: I0216 22:49:24.883489 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tfzkq\" (UniqueName: \"kubernetes.io/projected/d3f282f9-4063-41c9-b1c9-21fd5c1b365b-kube-api-access-tfzkq\") on node \"crc\" DevicePath \"\"" Feb 16 22:49:24 crc kubenswrapper[4865]: I0216 22:49:24.883506 4865 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3f282f9-4063-41c9-b1c9-21fd5c1b365b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 22:49:24 crc kubenswrapper[4865]: I0216 22:49:24.883518 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3f282f9-4063-41c9-b1c9-21fd5c1b365b-config\") on node \"crc\" DevicePath \"\"" Feb 16 22:49:24 crc kubenswrapper[4865]: I0216 22:49:24.883528 4865 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d3f282f9-4063-41c9-b1c9-21fd5c1b365b-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 16 22:49:24 crc kubenswrapper[4865]: I0216 22:49:24.883537 4865 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d3f282f9-4063-41c9-b1c9-21fd5c1b365b-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 22:49:24 crc kubenswrapper[4865]: I0216 22:49:24.883935 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d5dfb0d-f230-4669-b8a4-f56e99517b2c-config" (OuterVolumeSpecName: "config") pod "2d5dfb0d-f230-4669-b8a4-f56e99517b2c" (UID: 
"2d5dfb0d-f230-4669-b8a4-f56e99517b2c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:49:24 crc kubenswrapper[4865]: I0216 22:49:24.883929 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d5dfb0d-f230-4669-b8a4-f56e99517b2c-client-ca" (OuterVolumeSpecName: "client-ca") pod "2d5dfb0d-f230-4669-b8a4-f56e99517b2c" (UID: "2d5dfb0d-f230-4669-b8a4-f56e99517b2c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:49:24 crc kubenswrapper[4865]: I0216 22:49:24.887630 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d5dfb0d-f230-4669-b8a4-f56e99517b2c-kube-api-access-46xj7" (OuterVolumeSpecName: "kube-api-access-46xj7") pod "2d5dfb0d-f230-4669-b8a4-f56e99517b2c" (UID: "2d5dfb0d-f230-4669-b8a4-f56e99517b2c"). InnerVolumeSpecName "kube-api-access-46xj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:49:24 crc kubenswrapper[4865]: I0216 22:49:24.891417 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d5dfb0d-f230-4669-b8a4-f56e99517b2c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2d5dfb0d-f230-4669-b8a4-f56e99517b2c" (UID: "2d5dfb0d-f230-4669-b8a4-f56e99517b2c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:49:24 crc kubenswrapper[4865]: I0216 22:49:24.984621 4865 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d5dfb0d-f230-4669-b8a4-f56e99517b2c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 22:49:24 crc kubenswrapper[4865]: I0216 22:49:24.984664 4865 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2d5dfb0d-f230-4669-b8a4-f56e99517b2c-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 22:49:24 crc kubenswrapper[4865]: I0216 22:49:24.984674 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d5dfb0d-f230-4669-b8a4-f56e99517b2c-config\") on node \"crc\" DevicePath \"\"" Feb 16 22:49:24 crc kubenswrapper[4865]: I0216 22:49:24.984685 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-46xj7\" (UniqueName: \"kubernetes.io/projected/2d5dfb0d-f230-4669-b8a4-f56e99517b2c-kube-api-access-46xj7\") on node \"crc\" DevicePath \"\"" Feb 16 22:49:25 crc kubenswrapper[4865]: I0216 22:49:25.173670 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fsmg7" event={"ID":"bea6b458-5aaa-4764-9f82-24ceff943498","Type":"ContainerStarted","Data":"7c735ebe3ec4c759f6b6e7b5802ade6e0366cfde510e030fe018db6def4b8e50"} Feb 16 22:49:25 crc kubenswrapper[4865]: I0216 22:49:25.177265 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s2z5v" event={"ID":"f6cfa25f-5974-4b2e-9df0-b0e98112b561","Type":"ContainerStarted","Data":"19494448568ecc5e6314fe13f158c1bbd21c90760cd8e03724a29669de0e14d0"} Feb 16 22:49:25 crc kubenswrapper[4865]: I0216 22:49:25.178964 4865 generic.go:334] "Generic (PLEG): container finished" podID="2d5dfb0d-f230-4669-b8a4-f56e99517b2c" containerID="8175eb00bed796ecf6eb8b154ecc821a3667ec5f82416cdb53ada08a873a5108" 
exitCode=0 Feb 16 22:49:25 crc kubenswrapper[4865]: I0216 22:49:25.179053 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c5788454d-gqwh4" event={"ID":"2d5dfb0d-f230-4669-b8a4-f56e99517b2c","Type":"ContainerDied","Data":"8175eb00bed796ecf6eb8b154ecc821a3667ec5f82416cdb53ada08a873a5108"} Feb 16 22:49:25 crc kubenswrapper[4865]: I0216 22:49:25.179122 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c5788454d-gqwh4" Feb 16 22:49:25 crc kubenswrapper[4865]: I0216 22:49:25.179157 4865 scope.go:117] "RemoveContainer" containerID="8175eb00bed796ecf6eb8b154ecc821a3667ec5f82416cdb53ada08a873a5108" Feb 16 22:49:25 crc kubenswrapper[4865]: I0216 22:49:25.179137 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c5788454d-gqwh4" event={"ID":"2d5dfb0d-f230-4669-b8a4-f56e99517b2c","Type":"ContainerDied","Data":"cdca124b5bb8067d67fab508b1219f7cd1fc9eb964e02ac3b86af97fecd9cbbd"} Feb 16 22:49:25 crc kubenswrapper[4865]: I0216 22:49:25.181048 4865 generic.go:334] "Generic (PLEG): container finished" podID="04b1951f-573b-4bf5-808d-9834250021b6" containerID="c4f2a9bfcd4b54a6ffc60fe1471c2ffadfe0123b55eddfca0b841b525305c382" exitCode=0 Feb 16 22:49:25 crc kubenswrapper[4865]: I0216 22:49:25.181142 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xsgzz" event={"ID":"04b1951f-573b-4bf5-808d-9834250021b6","Type":"ContainerDied","Data":"c4f2a9bfcd4b54a6ffc60fe1471c2ffadfe0123b55eddfca0b841b525305c382"} Feb 16 22:49:25 crc kubenswrapper[4865]: I0216 22:49:25.185925 4865 generic.go:334] "Generic (PLEG): container finished" podID="d3f282f9-4063-41c9-b1c9-21fd5c1b365b" containerID="c20a6f5072dd84e14d16fd29a82c39ac84bcc43963a7a63a521b042172ab29a2" exitCode=0 Feb 16 22:49:25 crc kubenswrapper[4865]: I0216 
22:49:25.185975 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86df44649c-gzlw8" event={"ID":"d3f282f9-4063-41c9-b1c9-21fd5c1b365b","Type":"ContainerDied","Data":"c20a6f5072dd84e14d16fd29a82c39ac84bcc43963a7a63a521b042172ab29a2"} Feb 16 22:49:25 crc kubenswrapper[4865]: I0216 22:49:25.186011 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86df44649c-gzlw8" event={"ID":"d3f282f9-4063-41c9-b1c9-21fd5c1b365b","Type":"ContainerDied","Data":"05db3750ef3cb49c803c67ac57b11455d3511e70db16f8786cafdd089dc0582f"} Feb 16 22:49:25 crc kubenswrapper[4865]: I0216 22:49:25.186090 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-86df44649c-gzlw8" Feb 16 22:49:25 crc kubenswrapper[4865]: I0216 22:49:25.223580 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fsmg7" podStartSLOduration=3.926801627 podStartE2EDuration="55.223547195s" podCreationTimestamp="2026-02-16 22:48:30 +0000 UTC" firstStartedPulling="2026-02-16 22:48:33.375358997 +0000 UTC m=+153.699065958" lastFinishedPulling="2026-02-16 22:49:24.672104565 +0000 UTC m=+204.995811526" observedRunningTime="2026-02-16 22:49:25.200177406 +0000 UTC m=+205.523884367" watchObservedRunningTime="2026-02-16 22:49:25.223547195 +0000 UTC m=+205.547254176" Feb 16 22:49:25 crc kubenswrapper[4865]: I0216 22:49:25.228110 4865 scope.go:117] "RemoveContainer" containerID="8175eb00bed796ecf6eb8b154ecc821a3667ec5f82416cdb53ada08a873a5108" Feb 16 22:49:25 crc kubenswrapper[4865]: E0216 22:49:25.230446 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8175eb00bed796ecf6eb8b154ecc821a3667ec5f82416cdb53ada08a873a5108\": container with ID starting with 
8175eb00bed796ecf6eb8b154ecc821a3667ec5f82416cdb53ada08a873a5108 not found: ID does not exist" containerID="8175eb00bed796ecf6eb8b154ecc821a3667ec5f82416cdb53ada08a873a5108" Feb 16 22:49:25 crc kubenswrapper[4865]: I0216 22:49:25.230508 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8175eb00bed796ecf6eb8b154ecc821a3667ec5f82416cdb53ada08a873a5108"} err="failed to get container status \"8175eb00bed796ecf6eb8b154ecc821a3667ec5f82416cdb53ada08a873a5108\": rpc error: code = NotFound desc = could not find container \"8175eb00bed796ecf6eb8b154ecc821a3667ec5f82416cdb53ada08a873a5108\": container with ID starting with 8175eb00bed796ecf6eb8b154ecc821a3667ec5f82416cdb53ada08a873a5108 not found: ID does not exist" Feb 16 22:49:25 crc kubenswrapper[4865]: I0216 22:49:25.230546 4865 scope.go:117] "RemoveContainer" containerID="c20a6f5072dd84e14d16fd29a82c39ac84bcc43963a7a63a521b042172ab29a2" Feb 16 22:49:25 crc kubenswrapper[4865]: I0216 22:49:25.270198 4865 scope.go:117] "RemoveContainer" containerID="c20a6f5072dd84e14d16fd29a82c39ac84bcc43963a7a63a521b042172ab29a2" Feb 16 22:49:25 crc kubenswrapper[4865]: E0216 22:49:25.270878 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c20a6f5072dd84e14d16fd29a82c39ac84bcc43963a7a63a521b042172ab29a2\": container with ID starting with c20a6f5072dd84e14d16fd29a82c39ac84bcc43963a7a63a521b042172ab29a2 not found: ID does not exist" containerID="c20a6f5072dd84e14d16fd29a82c39ac84bcc43963a7a63a521b042172ab29a2" Feb 16 22:49:25 crc kubenswrapper[4865]: I0216 22:49:25.270946 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c20a6f5072dd84e14d16fd29a82c39ac84bcc43963a7a63a521b042172ab29a2"} err="failed to get container status \"c20a6f5072dd84e14d16fd29a82c39ac84bcc43963a7a63a521b042172ab29a2\": rpc error: code = NotFound desc = could not find container 
\"c20a6f5072dd84e14d16fd29a82c39ac84bcc43963a7a63a521b042172ab29a2\": container with ID starting with c20a6f5072dd84e14d16fd29a82c39ac84bcc43963a7a63a521b042172ab29a2 not found: ID does not exist" Feb 16 22:49:25 crc kubenswrapper[4865]: I0216 22:49:25.272751 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-s2z5v" podStartSLOduration=3.567664852 podStartE2EDuration="58.272728662s" podCreationTimestamp="2026-02-16 22:48:27 +0000 UTC" firstStartedPulling="2026-02-16 22:48:29.914257316 +0000 UTC m=+150.237964267" lastFinishedPulling="2026-02-16 22:49:24.619321116 +0000 UTC m=+204.943028077" observedRunningTime="2026-02-16 22:49:25.269012217 +0000 UTC m=+205.592719178" watchObservedRunningTime="2026-02-16 22:49:25.272728662 +0000 UTC m=+205.596435623" Feb 16 22:49:25 crc kubenswrapper[4865]: I0216 22:49:25.284769 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-86df44649c-gzlw8"] Feb 16 22:49:25 crc kubenswrapper[4865]: I0216 22:49:25.287405 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-86df44649c-gzlw8"] Feb 16 22:49:25 crc kubenswrapper[4865]: I0216 22:49:25.303603 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c5788454d-gqwh4"] Feb 16 22:49:25 crc kubenswrapper[4865]: I0216 22:49:25.307505 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c5788454d-gqwh4"] Feb 16 22:49:25 crc kubenswrapper[4865]: I0216 22:49:25.852925 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6b7785494f-7pg2d"] Feb 16 22:49:25 crc kubenswrapper[4865]: E0216 22:49:25.853720 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63128d09-9bb2-40e2-8c5e-d16c8f54d2c2" 
containerName="extract-utilities" Feb 16 22:49:25 crc kubenswrapper[4865]: I0216 22:49:25.853739 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="63128d09-9bb2-40e2-8c5e-d16c8f54d2c2" containerName="extract-utilities" Feb 16 22:49:25 crc kubenswrapper[4865]: E0216 22:49:25.853769 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d04001fb-b937-477f-b495-17f20e7cf07b" containerName="registry-server" Feb 16 22:49:25 crc kubenswrapper[4865]: I0216 22:49:25.853781 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="d04001fb-b937-477f-b495-17f20e7cf07b" containerName="registry-server" Feb 16 22:49:25 crc kubenswrapper[4865]: E0216 22:49:25.853803 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3f282f9-4063-41c9-b1c9-21fd5c1b365b" containerName="controller-manager" Feb 16 22:49:25 crc kubenswrapper[4865]: I0216 22:49:25.853812 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3f282f9-4063-41c9-b1c9-21fd5c1b365b" containerName="controller-manager" Feb 16 22:49:25 crc kubenswrapper[4865]: E0216 22:49:25.853824 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d04001fb-b937-477f-b495-17f20e7cf07b" containerName="extract-utilities" Feb 16 22:49:25 crc kubenswrapper[4865]: I0216 22:49:25.853832 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="d04001fb-b937-477f-b495-17f20e7cf07b" containerName="extract-utilities" Feb 16 22:49:25 crc kubenswrapper[4865]: E0216 22:49:25.853852 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63128d09-9bb2-40e2-8c5e-d16c8f54d2c2" containerName="registry-server" Feb 16 22:49:25 crc kubenswrapper[4865]: I0216 22:49:25.853863 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="63128d09-9bb2-40e2-8c5e-d16c8f54d2c2" containerName="registry-server" Feb 16 22:49:25 crc kubenswrapper[4865]: E0216 22:49:25.853876 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d04001fb-b937-477f-b495-17f20e7cf07b" 
containerName="extract-content" Feb 16 22:49:25 crc kubenswrapper[4865]: I0216 22:49:25.853885 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="d04001fb-b937-477f-b495-17f20e7cf07b" containerName="extract-content" Feb 16 22:49:25 crc kubenswrapper[4865]: E0216 22:49:25.853896 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63128d09-9bb2-40e2-8c5e-d16c8f54d2c2" containerName="extract-content" Feb 16 22:49:25 crc kubenswrapper[4865]: I0216 22:49:25.853903 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="63128d09-9bb2-40e2-8c5e-d16c8f54d2c2" containerName="extract-content" Feb 16 22:49:25 crc kubenswrapper[4865]: E0216 22:49:25.853913 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d5dfb0d-f230-4669-b8a4-f56e99517b2c" containerName="route-controller-manager" Feb 16 22:49:25 crc kubenswrapper[4865]: I0216 22:49:25.853921 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d5dfb0d-f230-4669-b8a4-f56e99517b2c" containerName="route-controller-manager" Feb 16 22:49:25 crc kubenswrapper[4865]: I0216 22:49:25.854055 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="d04001fb-b937-477f-b495-17f20e7cf07b" containerName="registry-server" Feb 16 22:49:25 crc kubenswrapper[4865]: I0216 22:49:25.854067 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3f282f9-4063-41c9-b1c9-21fd5c1b365b" containerName="controller-manager" Feb 16 22:49:25 crc kubenswrapper[4865]: I0216 22:49:25.854081 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="63128d09-9bb2-40e2-8c5e-d16c8f54d2c2" containerName="registry-server" Feb 16 22:49:25 crc kubenswrapper[4865]: I0216 22:49:25.854094 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d5dfb0d-f230-4669-b8a4-f56e99517b2c" containerName="route-controller-manager" Feb 16 22:49:25 crc kubenswrapper[4865]: I0216 22:49:25.854700 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6b7785494f-7pg2d" Feb 16 22:49:25 crc kubenswrapper[4865]: I0216 22:49:25.856995 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77ddf65dd8-gp6qn"] Feb 16 22:49:25 crc kubenswrapper[4865]: I0216 22:49:25.857716 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77ddf65dd8-gp6qn" Feb 16 22:49:25 crc kubenswrapper[4865]: I0216 22:49:25.859976 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 16 22:49:25 crc kubenswrapper[4865]: I0216 22:49:25.860166 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 16 22:49:25 crc kubenswrapper[4865]: I0216 22:49:25.860762 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 16 22:49:25 crc kubenswrapper[4865]: I0216 22:49:25.860825 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 16 22:49:25 crc kubenswrapper[4865]: I0216 22:49:25.860918 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 16 22:49:25 crc kubenswrapper[4865]: I0216 22:49:25.860954 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 16 22:49:25 crc kubenswrapper[4865]: I0216 22:49:25.860992 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 16 22:49:25 crc kubenswrapper[4865]: I0216 22:49:25.861064 4865 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 16 22:49:25 crc kubenswrapper[4865]: I0216 22:49:25.861160 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 16 22:49:25 crc kubenswrapper[4865]: I0216 22:49:25.861184 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 16 22:49:25 crc kubenswrapper[4865]: I0216 22:49:25.861242 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 16 22:49:25 crc kubenswrapper[4865]: I0216 22:49:25.862811 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 16 22:49:25 crc kubenswrapper[4865]: I0216 22:49:25.871573 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77ddf65dd8-gp6qn"] Feb 16 22:49:25 crc kubenswrapper[4865]: I0216 22:49:25.874171 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 16 22:49:25 crc kubenswrapper[4865]: I0216 22:49:25.879608 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6b7785494f-7pg2d"] Feb 16 22:49:25 crc kubenswrapper[4865]: I0216 22:49:25.999611 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55px4\" (UniqueName: \"kubernetes.io/projected/e7f00cc7-9fc0-44de-ae96-5eeab702d1aa-kube-api-access-55px4\") pod \"route-controller-manager-77ddf65dd8-gp6qn\" (UID: \"e7f00cc7-9fc0-44de-ae96-5eeab702d1aa\") " pod="openshift-route-controller-manager/route-controller-manager-77ddf65dd8-gp6qn" Feb 16 22:49:25 crc kubenswrapper[4865]: I0216 22:49:25.999711 4865 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e7f00cc7-9fc0-44de-ae96-5eeab702d1aa-client-ca\") pod \"route-controller-manager-77ddf65dd8-gp6qn\" (UID: \"e7f00cc7-9fc0-44de-ae96-5eeab702d1aa\") " pod="openshift-route-controller-manager/route-controller-manager-77ddf65dd8-gp6qn" Feb 16 22:49:25 crc kubenswrapper[4865]: I0216 22:49:25.999754 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6ms8\" (UniqueName: \"kubernetes.io/projected/c11df6f3-0440-42bf-ae35-01d11fc19313-kube-api-access-c6ms8\") pod \"controller-manager-6b7785494f-7pg2d\" (UID: \"c11df6f3-0440-42bf-ae35-01d11fc19313\") " pod="openshift-controller-manager/controller-manager-6b7785494f-7pg2d" Feb 16 22:49:25 crc kubenswrapper[4865]: I0216 22:49:25.999794 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7f00cc7-9fc0-44de-ae96-5eeab702d1aa-config\") pod \"route-controller-manager-77ddf65dd8-gp6qn\" (UID: \"e7f00cc7-9fc0-44de-ae96-5eeab702d1aa\") " pod="openshift-route-controller-manager/route-controller-manager-77ddf65dd8-gp6qn" Feb 16 22:49:25 crc kubenswrapper[4865]: I0216 22:49:25.999878 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c11df6f3-0440-42bf-ae35-01d11fc19313-config\") pod \"controller-manager-6b7785494f-7pg2d\" (UID: \"c11df6f3-0440-42bf-ae35-01d11fc19313\") " pod="openshift-controller-manager/controller-manager-6b7785494f-7pg2d" Feb 16 22:49:26 crc kubenswrapper[4865]: I0216 22:49:25.999911 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c11df6f3-0440-42bf-ae35-01d11fc19313-client-ca\") pod \"controller-manager-6b7785494f-7pg2d\" (UID: 
\"c11df6f3-0440-42bf-ae35-01d11fc19313\") " pod="openshift-controller-manager/controller-manager-6b7785494f-7pg2d" Feb 16 22:49:26 crc kubenswrapper[4865]: I0216 22:49:25.999935 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c11df6f3-0440-42bf-ae35-01d11fc19313-serving-cert\") pod \"controller-manager-6b7785494f-7pg2d\" (UID: \"c11df6f3-0440-42bf-ae35-01d11fc19313\") " pod="openshift-controller-manager/controller-manager-6b7785494f-7pg2d" Feb 16 22:49:26 crc kubenswrapper[4865]: I0216 22:49:25.999958 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7f00cc7-9fc0-44de-ae96-5eeab702d1aa-serving-cert\") pod \"route-controller-manager-77ddf65dd8-gp6qn\" (UID: \"e7f00cc7-9fc0-44de-ae96-5eeab702d1aa\") " pod="openshift-route-controller-manager/route-controller-manager-77ddf65dd8-gp6qn" Feb 16 22:49:26 crc kubenswrapper[4865]: I0216 22:49:26.000099 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c11df6f3-0440-42bf-ae35-01d11fc19313-proxy-ca-bundles\") pod \"controller-manager-6b7785494f-7pg2d\" (UID: \"c11df6f3-0440-42bf-ae35-01d11fc19313\") " pod="openshift-controller-manager/controller-manager-6b7785494f-7pg2d" Feb 16 22:49:26 crc kubenswrapper[4865]: I0216 22:49:26.101333 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7f00cc7-9fc0-44de-ae96-5eeab702d1aa-config\") pod \"route-controller-manager-77ddf65dd8-gp6qn\" (UID: \"e7f00cc7-9fc0-44de-ae96-5eeab702d1aa\") " pod="openshift-route-controller-manager/route-controller-manager-77ddf65dd8-gp6qn" Feb 16 22:49:26 crc kubenswrapper[4865]: I0216 22:49:26.101422 4865 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c11df6f3-0440-42bf-ae35-01d11fc19313-config\") pod \"controller-manager-6b7785494f-7pg2d\" (UID: \"c11df6f3-0440-42bf-ae35-01d11fc19313\") " pod="openshift-controller-manager/controller-manager-6b7785494f-7pg2d" Feb 16 22:49:26 crc kubenswrapper[4865]: I0216 22:49:26.101494 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c11df6f3-0440-42bf-ae35-01d11fc19313-client-ca\") pod \"controller-manager-6b7785494f-7pg2d\" (UID: \"c11df6f3-0440-42bf-ae35-01d11fc19313\") " pod="openshift-controller-manager/controller-manager-6b7785494f-7pg2d" Feb 16 22:49:26 crc kubenswrapper[4865]: I0216 22:49:26.101539 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c11df6f3-0440-42bf-ae35-01d11fc19313-serving-cert\") pod \"controller-manager-6b7785494f-7pg2d\" (UID: \"c11df6f3-0440-42bf-ae35-01d11fc19313\") " pod="openshift-controller-manager/controller-manager-6b7785494f-7pg2d" Feb 16 22:49:26 crc kubenswrapper[4865]: I0216 22:49:26.101590 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7f00cc7-9fc0-44de-ae96-5eeab702d1aa-serving-cert\") pod \"route-controller-manager-77ddf65dd8-gp6qn\" (UID: \"e7f00cc7-9fc0-44de-ae96-5eeab702d1aa\") " pod="openshift-route-controller-manager/route-controller-manager-77ddf65dd8-gp6qn" Feb 16 22:49:26 crc kubenswrapper[4865]: I0216 22:49:26.101629 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c11df6f3-0440-42bf-ae35-01d11fc19313-proxy-ca-bundles\") pod \"controller-manager-6b7785494f-7pg2d\" (UID: \"c11df6f3-0440-42bf-ae35-01d11fc19313\") " pod="openshift-controller-manager/controller-manager-6b7785494f-7pg2d" Feb 16 22:49:26 crc 
kubenswrapper[4865]: I0216 22:49:26.101664 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55px4\" (UniqueName: \"kubernetes.io/projected/e7f00cc7-9fc0-44de-ae96-5eeab702d1aa-kube-api-access-55px4\") pod \"route-controller-manager-77ddf65dd8-gp6qn\" (UID: \"e7f00cc7-9fc0-44de-ae96-5eeab702d1aa\") " pod="openshift-route-controller-manager/route-controller-manager-77ddf65dd8-gp6qn" Feb 16 22:49:26 crc kubenswrapper[4865]: I0216 22:49:26.101726 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e7f00cc7-9fc0-44de-ae96-5eeab702d1aa-client-ca\") pod \"route-controller-manager-77ddf65dd8-gp6qn\" (UID: \"e7f00cc7-9fc0-44de-ae96-5eeab702d1aa\") " pod="openshift-route-controller-manager/route-controller-manager-77ddf65dd8-gp6qn" Feb 16 22:49:26 crc kubenswrapper[4865]: I0216 22:49:26.101773 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6ms8\" (UniqueName: \"kubernetes.io/projected/c11df6f3-0440-42bf-ae35-01d11fc19313-kube-api-access-c6ms8\") pod \"controller-manager-6b7785494f-7pg2d\" (UID: \"c11df6f3-0440-42bf-ae35-01d11fc19313\") " pod="openshift-controller-manager/controller-manager-6b7785494f-7pg2d" Feb 16 22:49:26 crc kubenswrapper[4865]: I0216 22:49:26.102762 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c11df6f3-0440-42bf-ae35-01d11fc19313-client-ca\") pod \"controller-manager-6b7785494f-7pg2d\" (UID: \"c11df6f3-0440-42bf-ae35-01d11fc19313\") " pod="openshift-controller-manager/controller-manager-6b7785494f-7pg2d" Feb 16 22:49:26 crc kubenswrapper[4865]: I0216 22:49:26.103703 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c11df6f3-0440-42bf-ae35-01d11fc19313-proxy-ca-bundles\") pod 
\"controller-manager-6b7785494f-7pg2d\" (UID: \"c11df6f3-0440-42bf-ae35-01d11fc19313\") " pod="openshift-controller-manager/controller-manager-6b7785494f-7pg2d" Feb 16 22:49:26 crc kubenswrapper[4865]: I0216 22:49:26.104182 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7f00cc7-9fc0-44de-ae96-5eeab702d1aa-config\") pod \"route-controller-manager-77ddf65dd8-gp6qn\" (UID: \"e7f00cc7-9fc0-44de-ae96-5eeab702d1aa\") " pod="openshift-route-controller-manager/route-controller-manager-77ddf65dd8-gp6qn" Feb 16 22:49:26 crc kubenswrapper[4865]: I0216 22:49:26.104500 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c11df6f3-0440-42bf-ae35-01d11fc19313-config\") pod \"controller-manager-6b7785494f-7pg2d\" (UID: \"c11df6f3-0440-42bf-ae35-01d11fc19313\") " pod="openshift-controller-manager/controller-manager-6b7785494f-7pg2d" Feb 16 22:49:26 crc kubenswrapper[4865]: I0216 22:49:26.104637 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e7f00cc7-9fc0-44de-ae96-5eeab702d1aa-client-ca\") pod \"route-controller-manager-77ddf65dd8-gp6qn\" (UID: \"e7f00cc7-9fc0-44de-ae96-5eeab702d1aa\") " pod="openshift-route-controller-manager/route-controller-manager-77ddf65dd8-gp6qn" Feb 16 22:49:26 crc kubenswrapper[4865]: I0216 22:49:26.111554 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7f00cc7-9fc0-44de-ae96-5eeab702d1aa-serving-cert\") pod \"route-controller-manager-77ddf65dd8-gp6qn\" (UID: \"e7f00cc7-9fc0-44de-ae96-5eeab702d1aa\") " pod="openshift-route-controller-manager/route-controller-manager-77ddf65dd8-gp6qn" Feb 16 22:49:26 crc kubenswrapper[4865]: I0216 22:49:26.113716 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/c11df6f3-0440-42bf-ae35-01d11fc19313-serving-cert\") pod \"controller-manager-6b7785494f-7pg2d\" (UID: \"c11df6f3-0440-42bf-ae35-01d11fc19313\") " pod="openshift-controller-manager/controller-manager-6b7785494f-7pg2d" Feb 16 22:49:26 crc kubenswrapper[4865]: I0216 22:49:26.131299 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55px4\" (UniqueName: \"kubernetes.io/projected/e7f00cc7-9fc0-44de-ae96-5eeab702d1aa-kube-api-access-55px4\") pod \"route-controller-manager-77ddf65dd8-gp6qn\" (UID: \"e7f00cc7-9fc0-44de-ae96-5eeab702d1aa\") " pod="openshift-route-controller-manager/route-controller-manager-77ddf65dd8-gp6qn" Feb 16 22:49:26 crc kubenswrapper[4865]: I0216 22:49:26.132327 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6ms8\" (UniqueName: \"kubernetes.io/projected/c11df6f3-0440-42bf-ae35-01d11fc19313-kube-api-access-c6ms8\") pod \"controller-manager-6b7785494f-7pg2d\" (UID: \"c11df6f3-0440-42bf-ae35-01d11fc19313\") " pod="openshift-controller-manager/controller-manager-6b7785494f-7pg2d" Feb 16 22:49:26 crc kubenswrapper[4865]: I0216 22:49:26.195325 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xsgzz" event={"ID":"04b1951f-573b-4bf5-808d-9834250021b6","Type":"ContainerStarted","Data":"ec4c4bced33395014a60f4d1445d4e012b31b1f46f10475a84c3b05a851a9983"} Feb 16 22:49:26 crc kubenswrapper[4865]: I0216 22:49:26.197896 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6b7785494f-7pg2d" Feb 16 22:49:26 crc kubenswrapper[4865]: I0216 22:49:26.209705 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77ddf65dd8-gp6qn" Feb 16 22:49:26 crc kubenswrapper[4865]: I0216 22:49:26.436892 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d5dfb0d-f230-4669-b8a4-f56e99517b2c" path="/var/lib/kubelet/pods/2d5dfb0d-f230-4669-b8a4-f56e99517b2c/volumes" Feb 16 22:49:26 crc kubenswrapper[4865]: I0216 22:49:26.438744 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3f282f9-4063-41c9-b1c9-21fd5c1b365b" path="/var/lib/kubelet/pods/d3f282f9-4063-41c9-b1c9-21fd5c1b365b/volumes" Feb 16 22:49:26 crc kubenswrapper[4865]: I0216 22:49:26.460515 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xsgzz" podStartSLOduration=3.204696153 podStartE2EDuration="55.460492036s" podCreationTimestamp="2026-02-16 22:48:31 +0000 UTC" firstStartedPulling="2026-02-16 22:48:33.355852477 +0000 UTC m=+153.679559438" lastFinishedPulling="2026-02-16 22:49:25.61164836 +0000 UTC m=+205.935355321" observedRunningTime="2026-02-16 22:49:26.219883891 +0000 UTC m=+206.543590852" watchObservedRunningTime="2026-02-16 22:49:26.460492036 +0000 UTC m=+206.784198987" Feb 16 22:49:26 crc kubenswrapper[4865]: I0216 22:49:26.462044 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6b7785494f-7pg2d"] Feb 16 22:49:26 crc kubenswrapper[4865]: I0216 22:49:26.534047 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77ddf65dd8-gp6qn"] Feb 16 22:49:26 crc kubenswrapper[4865]: W0216 22:49:26.543752 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode7f00cc7_9fc0_44de_ae96_5eeab702d1aa.slice/crio-8c10c557a8efdf768d2a79a7bda7de3d3158a6a6547c4798de55ea2432c6f15c WatchSource:0}: Error finding container 
8c10c557a8efdf768d2a79a7bda7de3d3158a6a6547c4798de55ea2432c6f15c: Status 404 returned error can't find the container with id 8c10c557a8efdf768d2a79a7bda7de3d3158a6a6547c4798de55ea2432c6f15c Feb 16 22:49:27 crc kubenswrapper[4865]: I0216 22:49:27.202666 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b7785494f-7pg2d" event={"ID":"c11df6f3-0440-42bf-ae35-01d11fc19313","Type":"ContainerStarted","Data":"b8f210d5e4c129f54eb58994d9d7ce072a159ed5b4d2ba1c7ae02af4b8471fc9"} Feb 16 22:49:27 crc kubenswrapper[4865]: I0216 22:49:27.204169 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6b7785494f-7pg2d" Feb 16 22:49:27 crc kubenswrapper[4865]: I0216 22:49:27.204255 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b7785494f-7pg2d" event={"ID":"c11df6f3-0440-42bf-ae35-01d11fc19313","Type":"ContainerStarted","Data":"278c4cf976a8a9afd1ac27049a417b2d382d79b81d39c739e4446e63ff36ded3"} Feb 16 22:49:27 crc kubenswrapper[4865]: I0216 22:49:27.206886 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77ddf65dd8-gp6qn" event={"ID":"e7f00cc7-9fc0-44de-ae96-5eeab702d1aa","Type":"ContainerStarted","Data":"1cebc82cbcfdecd4c4156dbbe80686ad17b4181232d72c21842b99f7832dc410"} Feb 16 22:49:27 crc kubenswrapper[4865]: I0216 22:49:27.206937 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77ddf65dd8-gp6qn" event={"ID":"e7f00cc7-9fc0-44de-ae96-5eeab702d1aa","Type":"ContainerStarted","Data":"8c10c557a8efdf768d2a79a7bda7de3d3158a6a6547c4798de55ea2432c6f15c"} Feb 16 22:49:27 crc kubenswrapper[4865]: I0216 22:49:27.207245 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-77ddf65dd8-gp6qn" Feb 
16 22:49:27 crc kubenswrapper[4865]: I0216 22:49:27.212553 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6b7785494f-7pg2d" Feb 16 22:49:27 crc kubenswrapper[4865]: I0216 22:49:27.212710 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-77ddf65dd8-gp6qn" Feb 16 22:49:27 crc kubenswrapper[4865]: I0216 22:49:27.279119 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6b7785494f-7pg2d" podStartSLOduration=3.279098751 podStartE2EDuration="3.279098751s" podCreationTimestamp="2026-02-16 22:49:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:49:27.242681784 +0000 UTC m=+207.566388735" watchObservedRunningTime="2026-02-16 22:49:27.279098751 +0000 UTC m=+207.602805712" Feb 16 22:49:27 crc kubenswrapper[4865]: I0216 22:49:27.281695 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-77ddf65dd8-gp6qn" podStartSLOduration=3.281688664 podStartE2EDuration="3.281688664s" podCreationTimestamp="2026-02-16 22:49:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:49:27.277771664 +0000 UTC m=+207.601478625" watchObservedRunningTime="2026-02-16 22:49:27.281688664 +0000 UTC m=+207.605395625" Feb 16 22:49:27 crc kubenswrapper[4865]: I0216 22:49:27.893109 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-d2l24" Feb 16 22:49:27 crc kubenswrapper[4865]: I0216 22:49:27.893835 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/certified-operators-d2l24" Feb 16 22:49:27 crc kubenswrapper[4865]: I0216 22:49:27.959148 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-d2l24" Feb 16 22:49:28 crc kubenswrapper[4865]: I0216 22:49:28.052302 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-s2z5v" Feb 16 22:49:28 crc kubenswrapper[4865]: I0216 22:49:28.052361 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-s2z5v" Feb 16 22:49:28 crc kubenswrapper[4865]: I0216 22:49:28.097501 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-s2z5v" Feb 16 22:49:28 crc kubenswrapper[4865]: I0216 22:49:28.285111 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-d2l24" Feb 16 22:49:28 crc kubenswrapper[4865]: I0216 22:49:28.568752 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kk986" Feb 16 22:49:29 crc kubenswrapper[4865]: I0216 22:49:29.531758 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kk986"] Feb 16 22:49:29 crc kubenswrapper[4865]: I0216 22:49:29.532040 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-kk986" podUID="8b55c2ce-ae41-4b11-925e-b6085f288345" containerName="registry-server" containerID="cri-o://9fe13088b5e65e1272bdf1ad03fdf178cc471820c4f3adcaafc3914d990c0e83" gracePeriod=2 Feb 16 22:49:29 crc kubenswrapper[4865]: I0216 22:49:29.975438 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kk986" Feb 16 22:49:30 crc kubenswrapper[4865]: I0216 22:49:30.070941 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lv6p4\" (UniqueName: \"kubernetes.io/projected/8b55c2ce-ae41-4b11-925e-b6085f288345-kube-api-access-lv6p4\") pod \"8b55c2ce-ae41-4b11-925e-b6085f288345\" (UID: \"8b55c2ce-ae41-4b11-925e-b6085f288345\") " Feb 16 22:49:30 crc kubenswrapper[4865]: I0216 22:49:30.071082 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b55c2ce-ae41-4b11-925e-b6085f288345-utilities\") pod \"8b55c2ce-ae41-4b11-925e-b6085f288345\" (UID: \"8b55c2ce-ae41-4b11-925e-b6085f288345\") " Feb 16 22:49:30 crc kubenswrapper[4865]: I0216 22:49:30.071157 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b55c2ce-ae41-4b11-925e-b6085f288345-catalog-content\") pod \"8b55c2ce-ae41-4b11-925e-b6085f288345\" (UID: \"8b55c2ce-ae41-4b11-925e-b6085f288345\") " Feb 16 22:49:30 crc kubenswrapper[4865]: I0216 22:49:30.072113 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b55c2ce-ae41-4b11-925e-b6085f288345-utilities" (OuterVolumeSpecName: "utilities") pod "8b55c2ce-ae41-4b11-925e-b6085f288345" (UID: "8b55c2ce-ae41-4b11-925e-b6085f288345"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:49:30 crc kubenswrapper[4865]: I0216 22:49:30.084587 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b55c2ce-ae41-4b11-925e-b6085f288345-kube-api-access-lv6p4" (OuterVolumeSpecName: "kube-api-access-lv6p4") pod "8b55c2ce-ae41-4b11-925e-b6085f288345" (UID: "8b55c2ce-ae41-4b11-925e-b6085f288345"). InnerVolumeSpecName "kube-api-access-lv6p4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:49:30 crc kubenswrapper[4865]: I0216 22:49:30.144183 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b55c2ce-ae41-4b11-925e-b6085f288345-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8b55c2ce-ae41-4b11-925e-b6085f288345" (UID: "8b55c2ce-ae41-4b11-925e-b6085f288345"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:49:30 crc kubenswrapper[4865]: I0216 22:49:30.174043 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b55c2ce-ae41-4b11-925e-b6085f288345-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 22:49:30 crc kubenswrapper[4865]: I0216 22:49:30.174132 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b55c2ce-ae41-4b11-925e-b6085f288345-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 22:49:30 crc kubenswrapper[4865]: I0216 22:49:30.174151 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lv6p4\" (UniqueName: \"kubernetes.io/projected/8b55c2ce-ae41-4b11-925e-b6085f288345-kube-api-access-lv6p4\") on node \"crc\" DevicePath \"\"" Feb 16 22:49:30 crc kubenswrapper[4865]: I0216 22:49:30.225513 4865 generic.go:334] "Generic (PLEG): container finished" podID="8b55c2ce-ae41-4b11-925e-b6085f288345" containerID="9fe13088b5e65e1272bdf1ad03fdf178cc471820c4f3adcaafc3914d990c0e83" exitCode=0 Feb 16 22:49:30 crc kubenswrapper[4865]: I0216 22:49:30.226197 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kk986" Feb 16 22:49:30 crc kubenswrapper[4865]: I0216 22:49:30.226844 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kk986" event={"ID":"8b55c2ce-ae41-4b11-925e-b6085f288345","Type":"ContainerDied","Data":"9fe13088b5e65e1272bdf1ad03fdf178cc471820c4f3adcaafc3914d990c0e83"} Feb 16 22:49:30 crc kubenswrapper[4865]: I0216 22:49:30.226894 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kk986" event={"ID":"8b55c2ce-ae41-4b11-925e-b6085f288345","Type":"ContainerDied","Data":"faa37072a814f3de54c35d0088cf655c9b45658a1775adb6fa19c29b852613d8"} Feb 16 22:49:30 crc kubenswrapper[4865]: I0216 22:49:30.226920 4865 scope.go:117] "RemoveContainer" containerID="9fe13088b5e65e1272bdf1ad03fdf178cc471820c4f3adcaafc3914d990c0e83" Feb 16 22:49:30 crc kubenswrapper[4865]: I0216 22:49:30.243884 4865 scope.go:117] "RemoveContainer" containerID="835f9e0febcb979891b5e547ba7facf68bc53e9e4df1adf73e30dea3b534f06a" Feb 16 22:49:30 crc kubenswrapper[4865]: I0216 22:49:30.273436 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kk986"] Feb 16 22:49:30 crc kubenswrapper[4865]: I0216 22:49:30.273520 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-kk986"] Feb 16 22:49:30 crc kubenswrapper[4865]: I0216 22:49:30.273686 4865 scope.go:117] "RemoveContainer" containerID="fbcb3ee458261beb8c5e6ab8c23377fcf0c3d52e4bd11bb4b41ab988fa270e1b" Feb 16 22:49:30 crc kubenswrapper[4865]: I0216 22:49:30.292676 4865 scope.go:117] "RemoveContainer" containerID="9fe13088b5e65e1272bdf1ad03fdf178cc471820c4f3adcaafc3914d990c0e83" Feb 16 22:49:30 crc kubenswrapper[4865]: E0216 22:49:30.293621 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"9fe13088b5e65e1272bdf1ad03fdf178cc471820c4f3adcaafc3914d990c0e83\": container with ID starting with 9fe13088b5e65e1272bdf1ad03fdf178cc471820c4f3adcaafc3914d990c0e83 not found: ID does not exist" containerID="9fe13088b5e65e1272bdf1ad03fdf178cc471820c4f3adcaafc3914d990c0e83" Feb 16 22:49:30 crc kubenswrapper[4865]: I0216 22:49:30.293723 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fe13088b5e65e1272bdf1ad03fdf178cc471820c4f3adcaafc3914d990c0e83"} err="failed to get container status \"9fe13088b5e65e1272bdf1ad03fdf178cc471820c4f3adcaafc3914d990c0e83\": rpc error: code = NotFound desc = could not find container \"9fe13088b5e65e1272bdf1ad03fdf178cc471820c4f3adcaafc3914d990c0e83\": container with ID starting with 9fe13088b5e65e1272bdf1ad03fdf178cc471820c4f3adcaafc3914d990c0e83 not found: ID does not exist" Feb 16 22:49:30 crc kubenswrapper[4865]: I0216 22:49:30.293790 4865 scope.go:117] "RemoveContainer" containerID="835f9e0febcb979891b5e547ba7facf68bc53e9e4df1adf73e30dea3b534f06a" Feb 16 22:49:30 crc kubenswrapper[4865]: E0216 22:49:30.294701 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"835f9e0febcb979891b5e547ba7facf68bc53e9e4df1adf73e30dea3b534f06a\": container with ID starting with 835f9e0febcb979891b5e547ba7facf68bc53e9e4df1adf73e30dea3b534f06a not found: ID does not exist" containerID="835f9e0febcb979891b5e547ba7facf68bc53e9e4df1adf73e30dea3b534f06a" Feb 16 22:49:30 crc kubenswrapper[4865]: I0216 22:49:30.294769 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"835f9e0febcb979891b5e547ba7facf68bc53e9e4df1adf73e30dea3b534f06a"} err="failed to get container status \"835f9e0febcb979891b5e547ba7facf68bc53e9e4df1adf73e30dea3b534f06a\": rpc error: code = NotFound desc = could not find container \"835f9e0febcb979891b5e547ba7facf68bc53e9e4df1adf73e30dea3b534f06a\": container with ID 
starting with 835f9e0febcb979891b5e547ba7facf68bc53e9e4df1adf73e30dea3b534f06a not found: ID does not exist" Feb 16 22:49:30 crc kubenswrapper[4865]: I0216 22:49:30.294796 4865 scope.go:117] "RemoveContainer" containerID="fbcb3ee458261beb8c5e6ab8c23377fcf0c3d52e4bd11bb4b41ab988fa270e1b" Feb 16 22:49:30 crc kubenswrapper[4865]: E0216 22:49:30.295405 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbcb3ee458261beb8c5e6ab8c23377fcf0c3d52e4bd11bb4b41ab988fa270e1b\": container with ID starting with fbcb3ee458261beb8c5e6ab8c23377fcf0c3d52e4bd11bb4b41ab988fa270e1b not found: ID does not exist" containerID="fbcb3ee458261beb8c5e6ab8c23377fcf0c3d52e4bd11bb4b41ab988fa270e1b" Feb 16 22:49:30 crc kubenswrapper[4865]: I0216 22:49:30.295462 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbcb3ee458261beb8c5e6ab8c23377fcf0c3d52e4bd11bb4b41ab988fa270e1b"} err="failed to get container status \"fbcb3ee458261beb8c5e6ab8c23377fcf0c3d52e4bd11bb4b41ab988fa270e1b\": rpc error: code = NotFound desc = could not find container \"fbcb3ee458261beb8c5e6ab8c23377fcf0c3d52e4bd11bb4b41ab988fa270e1b\": container with ID starting with fbcb3ee458261beb8c5e6ab8c23377fcf0c3d52e4bd11bb4b41ab988fa270e1b not found: ID does not exist" Feb 16 22:49:30 crc kubenswrapper[4865]: I0216 22:49:30.423210 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b55c2ce-ae41-4b11-925e-b6085f288345" path="/var/lib/kubelet/pods/8b55c2ce-ae41-4b11-925e-b6085f288345/volumes" Feb 16 22:49:31 crc kubenswrapper[4865]: I0216 22:49:31.057680 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fsmg7" Feb 16 22:49:31 crc kubenswrapper[4865]: I0216 22:49:31.058098 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fsmg7" Feb 16 22:49:31 crc 
kubenswrapper[4865]: I0216 22:49:31.535327 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xsgzz" Feb 16 22:49:31 crc kubenswrapper[4865]: I0216 22:49:31.535394 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xsgzz" Feb 16 22:49:32 crc kubenswrapper[4865]: I0216 22:49:32.104360 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fsmg7" podUID="bea6b458-5aaa-4764-9f82-24ceff943498" containerName="registry-server" probeResult="failure" output=< Feb 16 22:49:32 crc kubenswrapper[4865]: timeout: failed to connect service ":50051" within 1s Feb 16 22:49:32 crc kubenswrapper[4865]: > Feb 16 22:49:32 crc kubenswrapper[4865]: I0216 22:49:32.584885 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xsgzz" podUID="04b1951f-573b-4bf5-808d-9834250021b6" containerName="registry-server" probeResult="failure" output=< Feb 16 22:49:32 crc kubenswrapper[4865]: timeout: failed to connect service ":50051" within 1s Feb 16 22:49:32 crc kubenswrapper[4865]: > Feb 16 22:49:38 crc kubenswrapper[4865]: I0216 22:49:38.125671 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-s2z5v" Feb 16 22:49:41 crc kubenswrapper[4865]: I0216 22:49:41.098556 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fsmg7" Feb 16 22:49:41 crc kubenswrapper[4865]: I0216 22:49:41.151327 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fsmg7" Feb 16 22:49:41 crc kubenswrapper[4865]: I0216 22:49:41.599250 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xsgzz" Feb 16 22:49:41 crc kubenswrapper[4865]: I0216 
22:49:41.659637 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xsgzz" Feb 16 22:49:42 crc kubenswrapper[4865]: I0216 22:49:42.158578 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-675st"] Feb 16 22:49:42 crc kubenswrapper[4865]: I0216 22:49:42.381357 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xsgzz"] Feb 16 22:49:43 crc kubenswrapper[4865]: I0216 22:49:43.317366 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xsgzz" podUID="04b1951f-573b-4bf5-808d-9834250021b6" containerName="registry-server" containerID="cri-o://ec4c4bced33395014a60f4d1445d4e012b31b1f46f10475a84c3b05a851a9983" gracePeriod=2 Feb 16 22:49:43 crc kubenswrapper[4865]: I0216 22:49:43.871300 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xsgzz" Feb 16 22:49:44 crc kubenswrapper[4865]: I0216 22:49:44.008447 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04b1951f-573b-4bf5-808d-9834250021b6-catalog-content\") pod \"04b1951f-573b-4bf5-808d-9834250021b6\" (UID: \"04b1951f-573b-4bf5-808d-9834250021b6\") " Feb 16 22:49:44 crc kubenswrapper[4865]: I0216 22:49:44.008589 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04b1951f-573b-4bf5-808d-9834250021b6-utilities\") pod \"04b1951f-573b-4bf5-808d-9834250021b6\" (UID: \"04b1951f-573b-4bf5-808d-9834250021b6\") " Feb 16 22:49:44 crc kubenswrapper[4865]: I0216 22:49:44.009967 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rdnmj\" (UniqueName: 
\"kubernetes.io/projected/04b1951f-573b-4bf5-808d-9834250021b6-kube-api-access-rdnmj\") pod \"04b1951f-573b-4bf5-808d-9834250021b6\" (UID: \"04b1951f-573b-4bf5-808d-9834250021b6\") " Feb 16 22:49:44 crc kubenswrapper[4865]: I0216 22:49:44.010589 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04b1951f-573b-4bf5-808d-9834250021b6-utilities" (OuterVolumeSpecName: "utilities") pod "04b1951f-573b-4bf5-808d-9834250021b6" (UID: "04b1951f-573b-4bf5-808d-9834250021b6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:49:44 crc kubenswrapper[4865]: I0216 22:49:44.025065 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04b1951f-573b-4bf5-808d-9834250021b6-kube-api-access-rdnmj" (OuterVolumeSpecName: "kube-api-access-rdnmj") pod "04b1951f-573b-4bf5-808d-9834250021b6" (UID: "04b1951f-573b-4bf5-808d-9834250021b6"). InnerVolumeSpecName "kube-api-access-rdnmj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:49:44 crc kubenswrapper[4865]: I0216 22:49:44.111944 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rdnmj\" (UniqueName: \"kubernetes.io/projected/04b1951f-573b-4bf5-808d-9834250021b6-kube-api-access-rdnmj\") on node \"crc\" DevicePath \"\"" Feb 16 22:49:44 crc kubenswrapper[4865]: I0216 22:49:44.111996 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04b1951f-573b-4bf5-808d-9834250021b6-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 22:49:44 crc kubenswrapper[4865]: I0216 22:49:44.149842 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6b7785494f-7pg2d"] Feb 16 22:49:44 crc kubenswrapper[4865]: I0216 22:49:44.150170 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6b7785494f-7pg2d" podUID="c11df6f3-0440-42bf-ae35-01d11fc19313" containerName="controller-manager" containerID="cri-o://b8f210d5e4c129f54eb58994d9d7ce072a159ed5b4d2ba1c7ae02af4b8471fc9" gracePeriod=30 Feb 16 22:49:44 crc kubenswrapper[4865]: I0216 22:49:44.170600 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04b1951f-573b-4bf5-808d-9834250021b6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "04b1951f-573b-4bf5-808d-9834250021b6" (UID: "04b1951f-573b-4bf5-808d-9834250021b6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:49:44 crc kubenswrapper[4865]: I0216 22:49:44.213827 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04b1951f-573b-4bf5-808d-9834250021b6-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 22:49:44 crc kubenswrapper[4865]: I0216 22:49:44.232619 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77ddf65dd8-gp6qn"] Feb 16 22:49:44 crc kubenswrapper[4865]: I0216 22:49:44.232936 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-77ddf65dd8-gp6qn" podUID="e7f00cc7-9fc0-44de-ae96-5eeab702d1aa" containerName="route-controller-manager" containerID="cri-o://1cebc82cbcfdecd4c4156dbbe80686ad17b4181232d72c21842b99f7832dc410" gracePeriod=30 Feb 16 22:49:44 crc kubenswrapper[4865]: I0216 22:49:44.327751 4865 generic.go:334] "Generic (PLEG): container finished" podID="c11df6f3-0440-42bf-ae35-01d11fc19313" containerID="b8f210d5e4c129f54eb58994d9d7ce072a159ed5b4d2ba1c7ae02af4b8471fc9" exitCode=0 Feb 16 22:49:44 crc kubenswrapper[4865]: I0216 22:49:44.327862 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b7785494f-7pg2d" event={"ID":"c11df6f3-0440-42bf-ae35-01d11fc19313","Type":"ContainerDied","Data":"b8f210d5e4c129f54eb58994d9d7ce072a159ed5b4d2ba1c7ae02af4b8471fc9"} Feb 16 22:49:44 crc kubenswrapper[4865]: I0216 22:49:44.331666 4865 generic.go:334] "Generic (PLEG): container finished" podID="04b1951f-573b-4bf5-808d-9834250021b6" containerID="ec4c4bced33395014a60f4d1445d4e012b31b1f46f10475a84c3b05a851a9983" exitCode=0 Feb 16 22:49:44 crc kubenswrapper[4865]: I0216 22:49:44.331739 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xsgzz" Feb 16 22:49:44 crc kubenswrapper[4865]: I0216 22:49:44.331721 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xsgzz" event={"ID":"04b1951f-573b-4bf5-808d-9834250021b6","Type":"ContainerDied","Data":"ec4c4bced33395014a60f4d1445d4e012b31b1f46f10475a84c3b05a851a9983"} Feb 16 22:49:44 crc kubenswrapper[4865]: I0216 22:49:44.331914 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xsgzz" event={"ID":"04b1951f-573b-4bf5-808d-9834250021b6","Type":"ContainerDied","Data":"d45fa6638116e867379a571eac3102873cd3ee9c48af034d0e37470ea4bbeff9"} Feb 16 22:49:44 crc kubenswrapper[4865]: I0216 22:49:44.331952 4865 scope.go:117] "RemoveContainer" containerID="ec4c4bced33395014a60f4d1445d4e012b31b1f46f10475a84c3b05a851a9983" Feb 16 22:49:44 crc kubenswrapper[4865]: I0216 22:49:44.354500 4865 scope.go:117] "RemoveContainer" containerID="c4f2a9bfcd4b54a6ffc60fe1471c2ffadfe0123b55eddfca0b841b525305c382" Feb 16 22:49:44 crc kubenswrapper[4865]: I0216 22:49:44.372517 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xsgzz"] Feb 16 22:49:44 crc kubenswrapper[4865]: I0216 22:49:44.375518 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xsgzz"] Feb 16 22:49:44 crc kubenswrapper[4865]: I0216 22:49:44.385640 4865 scope.go:117] "RemoveContainer" containerID="0a3c4422ab7dddecd5ac13cfa63f4028a84b4432be6e476fdad03ae6bbf56dfe" Feb 16 22:49:44 crc kubenswrapper[4865]: I0216 22:49:44.403366 4865 scope.go:117] "RemoveContainer" containerID="ec4c4bced33395014a60f4d1445d4e012b31b1f46f10475a84c3b05a851a9983" Feb 16 22:49:44 crc kubenswrapper[4865]: E0216 22:49:44.403901 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"ec4c4bced33395014a60f4d1445d4e012b31b1f46f10475a84c3b05a851a9983\": container with ID starting with ec4c4bced33395014a60f4d1445d4e012b31b1f46f10475a84c3b05a851a9983 not found: ID does not exist" containerID="ec4c4bced33395014a60f4d1445d4e012b31b1f46f10475a84c3b05a851a9983" Feb 16 22:49:44 crc kubenswrapper[4865]: I0216 22:49:44.403937 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec4c4bced33395014a60f4d1445d4e012b31b1f46f10475a84c3b05a851a9983"} err="failed to get container status \"ec4c4bced33395014a60f4d1445d4e012b31b1f46f10475a84c3b05a851a9983\": rpc error: code = NotFound desc = could not find container \"ec4c4bced33395014a60f4d1445d4e012b31b1f46f10475a84c3b05a851a9983\": container with ID starting with ec4c4bced33395014a60f4d1445d4e012b31b1f46f10475a84c3b05a851a9983 not found: ID does not exist" Feb 16 22:49:44 crc kubenswrapper[4865]: I0216 22:49:44.403980 4865 scope.go:117] "RemoveContainer" containerID="c4f2a9bfcd4b54a6ffc60fe1471c2ffadfe0123b55eddfca0b841b525305c382" Feb 16 22:49:44 crc kubenswrapper[4865]: E0216 22:49:44.404245 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4f2a9bfcd4b54a6ffc60fe1471c2ffadfe0123b55eddfca0b841b525305c382\": container with ID starting with c4f2a9bfcd4b54a6ffc60fe1471c2ffadfe0123b55eddfca0b841b525305c382 not found: ID does not exist" containerID="c4f2a9bfcd4b54a6ffc60fe1471c2ffadfe0123b55eddfca0b841b525305c382" Feb 16 22:49:44 crc kubenswrapper[4865]: I0216 22:49:44.404310 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4f2a9bfcd4b54a6ffc60fe1471c2ffadfe0123b55eddfca0b841b525305c382"} err="failed to get container status \"c4f2a9bfcd4b54a6ffc60fe1471c2ffadfe0123b55eddfca0b841b525305c382\": rpc error: code = NotFound desc = could not find container \"c4f2a9bfcd4b54a6ffc60fe1471c2ffadfe0123b55eddfca0b841b525305c382\": container with ID 
starting with c4f2a9bfcd4b54a6ffc60fe1471c2ffadfe0123b55eddfca0b841b525305c382 not found: ID does not exist" Feb 16 22:49:44 crc kubenswrapper[4865]: I0216 22:49:44.404356 4865 scope.go:117] "RemoveContainer" containerID="0a3c4422ab7dddecd5ac13cfa63f4028a84b4432be6e476fdad03ae6bbf56dfe" Feb 16 22:49:44 crc kubenswrapper[4865]: E0216 22:49:44.405016 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a3c4422ab7dddecd5ac13cfa63f4028a84b4432be6e476fdad03ae6bbf56dfe\": container with ID starting with 0a3c4422ab7dddecd5ac13cfa63f4028a84b4432be6e476fdad03ae6bbf56dfe not found: ID does not exist" containerID="0a3c4422ab7dddecd5ac13cfa63f4028a84b4432be6e476fdad03ae6bbf56dfe" Feb 16 22:49:44 crc kubenswrapper[4865]: I0216 22:49:44.405083 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a3c4422ab7dddecd5ac13cfa63f4028a84b4432be6e476fdad03ae6bbf56dfe"} err="failed to get container status \"0a3c4422ab7dddecd5ac13cfa63f4028a84b4432be6e476fdad03ae6bbf56dfe\": rpc error: code = NotFound desc = could not find container \"0a3c4422ab7dddecd5ac13cfa63f4028a84b4432be6e476fdad03ae6bbf56dfe\": container with ID starting with 0a3c4422ab7dddecd5ac13cfa63f4028a84b4432be6e476fdad03ae6bbf56dfe not found: ID does not exist" Feb 16 22:49:44 crc kubenswrapper[4865]: I0216 22:49:44.427591 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04b1951f-573b-4bf5-808d-9834250021b6" path="/var/lib/kubelet/pods/04b1951f-573b-4bf5-808d-9834250021b6/volumes" Feb 16 22:49:44 crc kubenswrapper[4865]: I0216 22:49:44.692594 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77ddf65dd8-gp6qn" Feb 16 22:49:44 crc kubenswrapper[4865]: I0216 22:49:44.764213 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6b7785494f-7pg2d" Feb 16 22:49:44 crc kubenswrapper[4865]: I0216 22:49:44.823433 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7f00cc7-9fc0-44de-ae96-5eeab702d1aa-serving-cert\") pod \"e7f00cc7-9fc0-44de-ae96-5eeab702d1aa\" (UID: \"e7f00cc7-9fc0-44de-ae96-5eeab702d1aa\") " Feb 16 22:49:44 crc kubenswrapper[4865]: I0216 22:49:44.823485 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e7f00cc7-9fc0-44de-ae96-5eeab702d1aa-client-ca\") pod \"e7f00cc7-9fc0-44de-ae96-5eeab702d1aa\" (UID: \"e7f00cc7-9fc0-44de-ae96-5eeab702d1aa\") " Feb 16 22:49:44 crc kubenswrapper[4865]: I0216 22:49:44.823552 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7f00cc7-9fc0-44de-ae96-5eeab702d1aa-config\") pod \"e7f00cc7-9fc0-44de-ae96-5eeab702d1aa\" (UID: \"e7f00cc7-9fc0-44de-ae96-5eeab702d1aa\") " Feb 16 22:49:44 crc kubenswrapper[4865]: I0216 22:49:44.823584 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55px4\" (UniqueName: \"kubernetes.io/projected/e7f00cc7-9fc0-44de-ae96-5eeab702d1aa-kube-api-access-55px4\") pod \"e7f00cc7-9fc0-44de-ae96-5eeab702d1aa\" (UID: \"e7f00cc7-9fc0-44de-ae96-5eeab702d1aa\") " Feb 16 22:49:44 crc kubenswrapper[4865]: I0216 22:49:44.824256 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7f00cc7-9fc0-44de-ae96-5eeab702d1aa-client-ca" (OuterVolumeSpecName: "client-ca") pod "e7f00cc7-9fc0-44de-ae96-5eeab702d1aa" (UID: "e7f00cc7-9fc0-44de-ae96-5eeab702d1aa"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:49:44 crc kubenswrapper[4865]: I0216 22:49:44.824960 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7f00cc7-9fc0-44de-ae96-5eeab702d1aa-config" (OuterVolumeSpecName: "config") pod "e7f00cc7-9fc0-44de-ae96-5eeab702d1aa" (UID: "e7f00cc7-9fc0-44de-ae96-5eeab702d1aa"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:49:44 crc kubenswrapper[4865]: I0216 22:49:44.827432 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7f00cc7-9fc0-44de-ae96-5eeab702d1aa-kube-api-access-55px4" (OuterVolumeSpecName: "kube-api-access-55px4") pod "e7f00cc7-9fc0-44de-ae96-5eeab702d1aa" (UID: "e7f00cc7-9fc0-44de-ae96-5eeab702d1aa"). InnerVolumeSpecName "kube-api-access-55px4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:49:44 crc kubenswrapper[4865]: I0216 22:49:44.828431 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7f00cc7-9fc0-44de-ae96-5eeab702d1aa-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7f00cc7-9fc0-44de-ae96-5eeab702d1aa" (UID: "e7f00cc7-9fc0-44de-ae96-5eeab702d1aa"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:49:44 crc kubenswrapper[4865]: I0216 22:49:44.924494 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c11df6f3-0440-42bf-ae35-01d11fc19313-serving-cert\") pod \"c11df6f3-0440-42bf-ae35-01d11fc19313\" (UID: \"c11df6f3-0440-42bf-ae35-01d11fc19313\") " Feb 16 22:49:44 crc kubenswrapper[4865]: I0216 22:49:44.924606 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c11df6f3-0440-42bf-ae35-01d11fc19313-proxy-ca-bundles\") pod \"c11df6f3-0440-42bf-ae35-01d11fc19313\" (UID: \"c11df6f3-0440-42bf-ae35-01d11fc19313\") " Feb 16 22:49:44 crc kubenswrapper[4865]: I0216 22:49:44.924666 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c11df6f3-0440-42bf-ae35-01d11fc19313-config\") pod \"c11df6f3-0440-42bf-ae35-01d11fc19313\" (UID: \"c11df6f3-0440-42bf-ae35-01d11fc19313\") " Feb 16 22:49:44 crc kubenswrapper[4865]: I0216 22:49:44.924693 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c11df6f3-0440-42bf-ae35-01d11fc19313-client-ca\") pod \"c11df6f3-0440-42bf-ae35-01d11fc19313\" (UID: \"c11df6f3-0440-42bf-ae35-01d11fc19313\") " Feb 16 22:49:44 crc kubenswrapper[4865]: I0216 22:49:44.924744 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6ms8\" (UniqueName: \"kubernetes.io/projected/c11df6f3-0440-42bf-ae35-01d11fc19313-kube-api-access-c6ms8\") pod \"c11df6f3-0440-42bf-ae35-01d11fc19313\" (UID: \"c11df6f3-0440-42bf-ae35-01d11fc19313\") " Feb 16 22:49:44 crc kubenswrapper[4865]: I0216 22:49:44.924977 4865 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e7f00cc7-9fc0-44de-ae96-5eeab702d1aa-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 22:49:44 crc kubenswrapper[4865]: I0216 22:49:44.924995 4865 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e7f00cc7-9fc0-44de-ae96-5eeab702d1aa-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 22:49:44 crc kubenswrapper[4865]: I0216 22:49:44.925007 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7f00cc7-9fc0-44de-ae96-5eeab702d1aa-config\") on node \"crc\" DevicePath \"\"" Feb 16 22:49:44 crc kubenswrapper[4865]: I0216 22:49:44.925018 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55px4\" (UniqueName: \"kubernetes.io/projected/e7f00cc7-9fc0-44de-ae96-5eeab702d1aa-kube-api-access-55px4\") on node \"crc\" DevicePath \"\"" Feb 16 22:49:44 crc kubenswrapper[4865]: I0216 22:49:44.925516 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c11df6f3-0440-42bf-ae35-01d11fc19313-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "c11df6f3-0440-42bf-ae35-01d11fc19313" (UID: "c11df6f3-0440-42bf-ae35-01d11fc19313"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:49:44 crc kubenswrapper[4865]: I0216 22:49:44.925929 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c11df6f3-0440-42bf-ae35-01d11fc19313-config" (OuterVolumeSpecName: "config") pod "c11df6f3-0440-42bf-ae35-01d11fc19313" (UID: "c11df6f3-0440-42bf-ae35-01d11fc19313"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:49:44 crc kubenswrapper[4865]: I0216 22:49:44.925945 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c11df6f3-0440-42bf-ae35-01d11fc19313-client-ca" (OuterVolumeSpecName: "client-ca") pod "c11df6f3-0440-42bf-ae35-01d11fc19313" (UID: "c11df6f3-0440-42bf-ae35-01d11fc19313"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:49:44 crc kubenswrapper[4865]: I0216 22:49:44.928521 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c11df6f3-0440-42bf-ae35-01d11fc19313-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c11df6f3-0440-42bf-ae35-01d11fc19313" (UID: "c11df6f3-0440-42bf-ae35-01d11fc19313"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:49:44 crc kubenswrapper[4865]: I0216 22:49:44.928554 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c11df6f3-0440-42bf-ae35-01d11fc19313-kube-api-access-c6ms8" (OuterVolumeSpecName: "kube-api-access-c6ms8") pod "c11df6f3-0440-42bf-ae35-01d11fc19313" (UID: "c11df6f3-0440-42bf-ae35-01d11fc19313"). InnerVolumeSpecName "kube-api-access-c6ms8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:49:45 crc kubenswrapper[4865]: I0216 22:49:45.027101 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6ms8\" (UniqueName: \"kubernetes.io/projected/c11df6f3-0440-42bf-ae35-01d11fc19313-kube-api-access-c6ms8\") on node \"crc\" DevicePath \"\"" Feb 16 22:49:45 crc kubenswrapper[4865]: I0216 22:49:45.027151 4865 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c11df6f3-0440-42bf-ae35-01d11fc19313-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 22:49:45 crc kubenswrapper[4865]: I0216 22:49:45.027165 4865 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c11df6f3-0440-42bf-ae35-01d11fc19313-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 16 22:49:45 crc kubenswrapper[4865]: I0216 22:49:45.027176 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c11df6f3-0440-42bf-ae35-01d11fc19313-config\") on node \"crc\" DevicePath \"\"" Feb 16 22:49:45 crc kubenswrapper[4865]: I0216 22:49:45.027186 4865 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c11df6f3-0440-42bf-ae35-01d11fc19313-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 22:49:45 crc kubenswrapper[4865]: I0216 22:49:45.340449 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6b7785494f-7pg2d" Feb 16 22:49:45 crc kubenswrapper[4865]: I0216 22:49:45.340671 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b7785494f-7pg2d" event={"ID":"c11df6f3-0440-42bf-ae35-01d11fc19313","Type":"ContainerDied","Data":"278c4cf976a8a9afd1ac27049a417b2d382d79b81d39c739e4446e63ff36ded3"} Feb 16 22:49:45 crc kubenswrapper[4865]: I0216 22:49:45.340745 4865 scope.go:117] "RemoveContainer" containerID="b8f210d5e4c129f54eb58994d9d7ce072a159ed5b4d2ba1c7ae02af4b8471fc9" Feb 16 22:49:45 crc kubenswrapper[4865]: I0216 22:49:45.342753 4865 generic.go:334] "Generic (PLEG): container finished" podID="e7f00cc7-9fc0-44de-ae96-5eeab702d1aa" containerID="1cebc82cbcfdecd4c4156dbbe80686ad17b4181232d72c21842b99f7832dc410" exitCode=0 Feb 16 22:49:45 crc kubenswrapper[4865]: I0216 22:49:45.342832 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77ddf65dd8-gp6qn" event={"ID":"e7f00cc7-9fc0-44de-ae96-5eeab702d1aa","Type":"ContainerDied","Data":"1cebc82cbcfdecd4c4156dbbe80686ad17b4181232d72c21842b99f7832dc410"} Feb 16 22:49:45 crc kubenswrapper[4865]: I0216 22:49:45.342913 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77ddf65dd8-gp6qn" event={"ID":"e7f00cc7-9fc0-44de-ae96-5eeab702d1aa","Type":"ContainerDied","Data":"8c10c557a8efdf768d2a79a7bda7de3d3158a6a6547c4798de55ea2432c6f15c"} Feb 16 22:49:45 crc kubenswrapper[4865]: I0216 22:49:45.342987 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77ddf65dd8-gp6qn"
Feb 16 22:49:45 crc kubenswrapper[4865]: I0216 22:49:45.363799 4865 scope.go:117] "RemoveContainer" containerID="1cebc82cbcfdecd4c4156dbbe80686ad17b4181232d72c21842b99f7832dc410"
Feb 16 22:49:45 crc kubenswrapper[4865]: I0216 22:49:45.383970 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6b7785494f-7pg2d"]
Feb 16 22:49:45 crc kubenswrapper[4865]: I0216 22:49:45.388804 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6b7785494f-7pg2d"]
Feb 16 22:49:45 crc kubenswrapper[4865]: I0216 22:49:45.389352 4865 scope.go:117] "RemoveContainer" containerID="1cebc82cbcfdecd4c4156dbbe80686ad17b4181232d72c21842b99f7832dc410"
Feb 16 22:49:45 crc kubenswrapper[4865]: E0216 22:49:45.389928 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1cebc82cbcfdecd4c4156dbbe80686ad17b4181232d72c21842b99f7832dc410\": container with ID starting with 1cebc82cbcfdecd4c4156dbbe80686ad17b4181232d72c21842b99f7832dc410 not found: ID does not exist" containerID="1cebc82cbcfdecd4c4156dbbe80686ad17b4181232d72c21842b99f7832dc410"
Feb 16 22:49:45 crc kubenswrapper[4865]: I0216 22:49:45.390036 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1cebc82cbcfdecd4c4156dbbe80686ad17b4181232d72c21842b99f7832dc410"} err="failed to get container status \"1cebc82cbcfdecd4c4156dbbe80686ad17b4181232d72c21842b99f7832dc410\": rpc error: code = NotFound desc = could not find container \"1cebc82cbcfdecd4c4156dbbe80686ad17b4181232d72c21842b99f7832dc410\": container with ID starting with 1cebc82cbcfdecd4c4156dbbe80686ad17b4181232d72c21842b99f7832dc410 not found: ID does not exist"
Feb 16 22:49:45 crc kubenswrapper[4865]: I0216 22:49:45.402305 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77ddf65dd8-gp6qn"]
Feb 16 22:49:45 crc kubenswrapper[4865]: I0216 22:49:45.402362 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77ddf65dd8-gp6qn"]
Feb 16 22:49:45 crc kubenswrapper[4865]: I0216 22:49:45.664127 4865 patch_prober.go:28] interesting pod/machine-config-daemon-7sl6f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 22:49:45 crc kubenswrapper[4865]: I0216 22:49:45.664555 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 22:49:45 crc kubenswrapper[4865]: I0216 22:49:45.664673 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f"
Feb 16 22:49:45 crc kubenswrapper[4865]: I0216 22:49:45.665364 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1c0be88c4425b343893860e323d0e9c1a661e3cd0cadad6714da67ebadf57bf7"} pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 16 22:49:45 crc kubenswrapper[4865]: I0216 22:49:45.665533 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" containerName="machine-config-daemon" containerID="cri-o://1c0be88c4425b343893860e323d0e9c1a661e3cd0cadad6714da67ebadf57bf7" gracePeriod=600
Feb 16 22:49:45 crc kubenswrapper[4865]: I0216 22:49:45.866621 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7d9b5594b6-6jxls"]
Feb 16 22:49:45 crc kubenswrapper[4865]: E0216 22:49:45.867546 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04b1951f-573b-4bf5-808d-9834250021b6" containerName="extract-content"
Feb 16 22:49:45 crc kubenswrapper[4865]: I0216 22:49:45.867570 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="04b1951f-573b-4bf5-808d-9834250021b6" containerName="extract-content"
Feb 16 22:49:45 crc kubenswrapper[4865]: E0216 22:49:45.867613 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c11df6f3-0440-42bf-ae35-01d11fc19313" containerName="controller-manager"
Feb 16 22:49:45 crc kubenswrapper[4865]: I0216 22:49:45.867625 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="c11df6f3-0440-42bf-ae35-01d11fc19313" containerName="controller-manager"
Feb 16 22:49:45 crc kubenswrapper[4865]: E0216 22:49:45.867641 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04b1951f-573b-4bf5-808d-9834250021b6" containerName="extract-utilities"
Feb 16 22:49:45 crc kubenswrapper[4865]: I0216 22:49:45.867650 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="04b1951f-573b-4bf5-808d-9834250021b6" containerName="extract-utilities"
Feb 16 22:49:45 crc kubenswrapper[4865]: E0216 22:49:45.867690 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b55c2ce-ae41-4b11-925e-b6085f288345" containerName="registry-server"
Feb 16 22:49:45 crc kubenswrapper[4865]: I0216 22:49:45.867699 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b55c2ce-ae41-4b11-925e-b6085f288345" containerName="registry-server"
Feb 16 22:49:45 crc kubenswrapper[4865]: E0216 22:49:45.867713 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04b1951f-573b-4bf5-808d-9834250021b6" containerName="registry-server"
Feb 16 22:49:45 crc kubenswrapper[4865]: I0216 22:49:45.867724 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="04b1951f-573b-4bf5-808d-9834250021b6" containerName="registry-server"
Feb 16 22:49:45 crc kubenswrapper[4865]: E0216 22:49:45.867735 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7f00cc7-9fc0-44de-ae96-5eeab702d1aa" containerName="route-controller-manager"
Feb 16 22:49:45 crc kubenswrapper[4865]: I0216 22:49:45.867766 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7f00cc7-9fc0-44de-ae96-5eeab702d1aa" containerName="route-controller-manager"
Feb 16 22:49:45 crc kubenswrapper[4865]: E0216 22:49:45.867781 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b55c2ce-ae41-4b11-925e-b6085f288345" containerName="extract-content"
Feb 16 22:49:45 crc kubenswrapper[4865]: I0216 22:49:45.867789 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b55c2ce-ae41-4b11-925e-b6085f288345" containerName="extract-content"
Feb 16 22:49:45 crc kubenswrapper[4865]: E0216 22:49:45.867801 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b55c2ce-ae41-4b11-925e-b6085f288345" containerName="extract-utilities"
Feb 16 22:49:45 crc kubenswrapper[4865]: I0216 22:49:45.867809 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b55c2ce-ae41-4b11-925e-b6085f288345" containerName="extract-utilities"
Feb 16 22:49:45 crc kubenswrapper[4865]: I0216 22:49:45.868096 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b55c2ce-ae41-4b11-925e-b6085f288345" containerName="registry-server"
Feb 16 22:49:45 crc kubenswrapper[4865]: I0216 22:49:45.868149 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="04b1951f-573b-4bf5-808d-9834250021b6" containerName="registry-server"
Feb 16 22:49:45 crc kubenswrapper[4865]: I0216 22:49:45.868184 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="c11df6f3-0440-42bf-ae35-01d11fc19313" containerName="controller-manager"
Feb 16 22:49:45 crc kubenswrapper[4865]: I0216 22:49:45.868210 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7f00cc7-9fc0-44de-ae96-5eeab702d1aa" containerName="route-controller-manager"
Feb 16 22:49:45 crc kubenswrapper[4865]: I0216 22:49:45.871627 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7d9b5594b6-6jxls"
Feb 16 22:49:45 crc kubenswrapper[4865]: I0216 22:49:45.872905 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b8665584b-ll6f8"]
Feb 16 22:49:45 crc kubenswrapper[4865]: I0216 22:49:45.876820 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5b8665584b-ll6f8"
Feb 16 22:49:45 crc kubenswrapper[4865]: I0216 22:49:45.877597 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 16 22:49:45 crc kubenswrapper[4865]: I0216 22:49:45.882217 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 16 22:49:45 crc kubenswrapper[4865]: I0216 22:49:45.882763 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 16 22:49:45 crc kubenswrapper[4865]: I0216 22:49:45.885769 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 16 22:49:45 crc kubenswrapper[4865]: I0216 22:49:45.886220 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 16 22:49:45 crc kubenswrapper[4865]: I0216 22:49:45.886697 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 16 22:49:45 crc kubenswrapper[4865]: I0216 22:49:45.887296 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 16 22:49:45 crc kubenswrapper[4865]: I0216 22:49:45.887960 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 16 22:49:45 crc kubenswrapper[4865]: I0216 22:49:45.888192 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 16 22:49:45 crc kubenswrapper[4865]: I0216 22:49:45.888302 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 16 22:49:45 crc kubenswrapper[4865]: I0216 22:49:45.888809 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 16 22:49:45 crc kubenswrapper[4865]: I0216 22:49:45.889193 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 16 22:49:45 crc kubenswrapper[4865]: I0216 22:49:45.902180 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 16 22:49:45 crc kubenswrapper[4865]: I0216 22:49:45.902787 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b8665584b-ll6f8"]
Feb 16 22:49:45 crc kubenswrapper[4865]: I0216 22:49:45.904511 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7d9b5594b6-6jxls"]
Feb 16 22:49:46 crc kubenswrapper[4865]: I0216 22:49:46.041920 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/216bed02-0b05-4cef-b94b-c53b9c78a597-serving-cert\") pod \"route-controller-manager-5b8665584b-ll6f8\" (UID: \"216bed02-0b05-4cef-b94b-c53b9c78a597\") " pod="openshift-route-controller-manager/route-controller-manager-5b8665584b-ll6f8"
Feb 16 22:49:46 crc kubenswrapper[4865]: I0216 22:49:46.042253 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe8944a2-ee82-4839-b326-104b9f48bbd1-serving-cert\") pod \"controller-manager-7d9b5594b6-6jxls\" (UID: \"fe8944a2-ee82-4839-b326-104b9f48bbd1\") " pod="openshift-controller-manager/controller-manager-7d9b5594b6-6jxls"
Feb 16 22:49:46 crc kubenswrapper[4865]: I0216 22:49:46.042393 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fe8944a2-ee82-4839-b326-104b9f48bbd1-client-ca\") pod \"controller-manager-7d9b5594b6-6jxls\" (UID: \"fe8944a2-ee82-4839-b326-104b9f48bbd1\") " pod="openshift-controller-manager/controller-manager-7d9b5594b6-6jxls"
Feb 16 22:49:46 crc kubenswrapper[4865]: I0216 22:49:46.042490 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xs2zn\" (UniqueName: \"kubernetes.io/projected/216bed02-0b05-4cef-b94b-c53b9c78a597-kube-api-access-xs2zn\") pod \"route-controller-manager-5b8665584b-ll6f8\" (UID: \"216bed02-0b05-4cef-b94b-c53b9c78a597\") " pod="openshift-route-controller-manager/route-controller-manager-5b8665584b-ll6f8"
Feb 16 22:49:46 crc kubenswrapper[4865]: I0216 22:49:46.042591 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe8944a2-ee82-4839-b326-104b9f48bbd1-config\") pod \"controller-manager-7d9b5594b6-6jxls\" (UID: \"fe8944a2-ee82-4839-b326-104b9f48bbd1\") " pod="openshift-controller-manager/controller-manager-7d9b5594b6-6jxls"
Feb 16 22:49:46 crc kubenswrapper[4865]: I0216 22:49:46.042710 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/216bed02-0b05-4cef-b94b-c53b9c78a597-client-ca\") pod \"route-controller-manager-5b8665584b-ll6f8\" (UID: \"216bed02-0b05-4cef-b94b-c53b9c78a597\") " pod="openshift-route-controller-manager/route-controller-manager-5b8665584b-ll6f8"
Feb 16 22:49:46 crc kubenswrapper[4865]: I0216 22:49:46.042803 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fe8944a2-ee82-4839-b326-104b9f48bbd1-proxy-ca-bundles\") pod \"controller-manager-7d9b5594b6-6jxls\" (UID: \"fe8944a2-ee82-4839-b326-104b9f48bbd1\") " pod="openshift-controller-manager/controller-manager-7d9b5594b6-6jxls"
Feb 16 22:49:46 crc kubenswrapper[4865]: I0216 22:49:46.042925 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hdk5\" (UniqueName: \"kubernetes.io/projected/fe8944a2-ee82-4839-b326-104b9f48bbd1-kube-api-access-8hdk5\") pod \"controller-manager-7d9b5594b6-6jxls\" (UID: \"fe8944a2-ee82-4839-b326-104b9f48bbd1\") " pod="openshift-controller-manager/controller-manager-7d9b5594b6-6jxls"
Feb 16 22:49:46 crc kubenswrapper[4865]: I0216 22:49:46.043024 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/216bed02-0b05-4cef-b94b-c53b9c78a597-config\") pod \"route-controller-manager-5b8665584b-ll6f8\" (UID: \"216bed02-0b05-4cef-b94b-c53b9c78a597\") " pod="openshift-route-controller-manager/route-controller-manager-5b8665584b-ll6f8"
Feb 16 22:49:46 crc kubenswrapper[4865]: I0216 22:49:46.144247 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/216bed02-0b05-4cef-b94b-c53b9c78a597-config\") pod \"route-controller-manager-5b8665584b-ll6f8\" (UID: \"216bed02-0b05-4cef-b94b-c53b9c78a597\") " pod="openshift-route-controller-manager/route-controller-manager-5b8665584b-ll6f8"
Feb 16 22:49:46 crc kubenswrapper[4865]: I0216 22:49:46.144583 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hdk5\" (UniqueName: \"kubernetes.io/projected/fe8944a2-ee82-4839-b326-104b9f48bbd1-kube-api-access-8hdk5\") pod \"controller-manager-7d9b5594b6-6jxls\" (UID: \"fe8944a2-ee82-4839-b326-104b9f48bbd1\") " pod="openshift-controller-manager/controller-manager-7d9b5594b6-6jxls"
Feb 16 22:49:46 crc kubenswrapper[4865]: I0216 22:49:46.144788 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/216bed02-0b05-4cef-b94b-c53b9c78a597-serving-cert\") pod \"route-controller-manager-5b8665584b-ll6f8\" (UID: \"216bed02-0b05-4cef-b94b-c53b9c78a597\") " pod="openshift-route-controller-manager/route-controller-manager-5b8665584b-ll6f8"
Feb 16 22:49:46 crc kubenswrapper[4865]: I0216 22:49:46.145620 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe8944a2-ee82-4839-b326-104b9f48bbd1-serving-cert\") pod \"controller-manager-7d9b5594b6-6jxls\" (UID: \"fe8944a2-ee82-4839-b326-104b9f48bbd1\") " pod="openshift-controller-manager/controller-manager-7d9b5594b6-6jxls"
Feb 16 22:49:46 crc kubenswrapper[4865]: I0216 22:49:46.145783 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fe8944a2-ee82-4839-b326-104b9f48bbd1-client-ca\") pod \"controller-manager-7d9b5594b6-6jxls\" (UID: \"fe8944a2-ee82-4839-b326-104b9f48bbd1\") " pod="openshift-controller-manager/controller-manager-7d9b5594b6-6jxls"
Feb 16 22:49:46 crc kubenswrapper[4865]: I0216 22:49:46.145906 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xs2zn\" (UniqueName: \"kubernetes.io/projected/216bed02-0b05-4cef-b94b-c53b9c78a597-kube-api-access-xs2zn\") pod \"route-controller-manager-5b8665584b-ll6f8\" (UID: \"216bed02-0b05-4cef-b94b-c53b9c78a597\") " pod="openshift-route-controller-manager/route-controller-manager-5b8665584b-ll6f8"
Feb 16 22:49:46 crc kubenswrapper[4865]: I0216 22:49:46.146021 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe8944a2-ee82-4839-b326-104b9f48bbd1-config\") pod \"controller-manager-7d9b5594b6-6jxls\" (UID: \"fe8944a2-ee82-4839-b326-104b9f48bbd1\") " pod="openshift-controller-manager/controller-manager-7d9b5594b6-6jxls"
Feb 16 22:49:46 crc kubenswrapper[4865]: I0216 22:49:46.146183 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/216bed02-0b05-4cef-b94b-c53b9c78a597-client-ca\") pod \"route-controller-manager-5b8665584b-ll6f8\" (UID: \"216bed02-0b05-4cef-b94b-c53b9c78a597\") " pod="openshift-route-controller-manager/route-controller-manager-5b8665584b-ll6f8"
Feb 16 22:49:46 crc kubenswrapper[4865]: I0216 22:49:46.146303 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fe8944a2-ee82-4839-b326-104b9f48bbd1-proxy-ca-bundles\") pod \"controller-manager-7d9b5594b6-6jxls\" (UID: \"fe8944a2-ee82-4839-b326-104b9f48bbd1\") " pod="openshift-controller-manager/controller-manager-7d9b5594b6-6jxls"
Feb 16 22:49:46 crc kubenswrapper[4865]: I0216 22:49:46.145641 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/216bed02-0b05-4cef-b94b-c53b9c78a597-config\") pod \"route-controller-manager-5b8665584b-ll6f8\" (UID: \"216bed02-0b05-4cef-b94b-c53b9c78a597\") " pod="openshift-route-controller-manager/route-controller-manager-5b8665584b-ll6f8"
Feb 16 22:49:46 crc kubenswrapper[4865]: I0216 22:49:46.146657 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fe8944a2-ee82-4839-b326-104b9f48bbd1-client-ca\") pod \"controller-manager-7d9b5594b6-6jxls\" (UID: \"fe8944a2-ee82-4839-b326-104b9f48bbd1\") " pod="openshift-controller-manager/controller-manager-7d9b5594b6-6jxls"
Feb 16 22:49:46 crc kubenswrapper[4865]: I0216 22:49:46.146851 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/216bed02-0b05-4cef-b94b-c53b9c78a597-client-ca\") pod \"route-controller-manager-5b8665584b-ll6f8\" (UID: \"216bed02-0b05-4cef-b94b-c53b9c78a597\") " pod="openshift-route-controller-manager/route-controller-manager-5b8665584b-ll6f8"
Feb 16 22:49:46 crc kubenswrapper[4865]: I0216 22:49:46.147444 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe8944a2-ee82-4839-b326-104b9f48bbd1-config\") pod \"controller-manager-7d9b5594b6-6jxls\" (UID: \"fe8944a2-ee82-4839-b326-104b9f48bbd1\") " pod="openshift-controller-manager/controller-manager-7d9b5594b6-6jxls"
Feb 16 22:49:46 crc kubenswrapper[4865]: I0216 22:49:46.147902 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fe8944a2-ee82-4839-b326-104b9f48bbd1-proxy-ca-bundles\") pod \"controller-manager-7d9b5594b6-6jxls\" (UID: \"fe8944a2-ee82-4839-b326-104b9f48bbd1\") " pod="openshift-controller-manager/controller-manager-7d9b5594b6-6jxls"
Feb 16 22:49:46 crc kubenswrapper[4865]: I0216 22:49:46.153343 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/216bed02-0b05-4cef-b94b-c53b9c78a597-serving-cert\") pod \"route-controller-manager-5b8665584b-ll6f8\" (UID: \"216bed02-0b05-4cef-b94b-c53b9c78a597\") " pod="openshift-route-controller-manager/route-controller-manager-5b8665584b-ll6f8"
Feb 16 22:49:46 crc kubenswrapper[4865]: I0216 22:49:46.153837 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe8944a2-ee82-4839-b326-104b9f48bbd1-serving-cert\") pod \"controller-manager-7d9b5594b6-6jxls\" (UID: \"fe8944a2-ee82-4839-b326-104b9f48bbd1\") " pod="openshift-controller-manager/controller-manager-7d9b5594b6-6jxls"
Feb 16 22:49:46 crc kubenswrapper[4865]: I0216 22:49:46.167040 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hdk5\" (UniqueName: \"kubernetes.io/projected/fe8944a2-ee82-4839-b326-104b9f48bbd1-kube-api-access-8hdk5\") pod \"controller-manager-7d9b5594b6-6jxls\" (UID: \"fe8944a2-ee82-4839-b326-104b9f48bbd1\") " pod="openshift-controller-manager/controller-manager-7d9b5594b6-6jxls"
Feb 16 22:49:46 crc kubenswrapper[4865]: I0216 22:49:46.167780 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xs2zn\" (UniqueName: \"kubernetes.io/projected/216bed02-0b05-4cef-b94b-c53b9c78a597-kube-api-access-xs2zn\") pod \"route-controller-manager-5b8665584b-ll6f8\" (UID: \"216bed02-0b05-4cef-b94b-c53b9c78a597\") " pod="openshift-route-controller-manager/route-controller-manager-5b8665584b-ll6f8"
Feb 16 22:49:46 crc kubenswrapper[4865]: I0216 22:49:46.229827 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7d9b5594b6-6jxls"
Feb 16 22:49:46 crc kubenswrapper[4865]: I0216 22:49:46.239494 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5b8665584b-ll6f8"
Feb 16 22:49:46 crc kubenswrapper[4865]: I0216 22:49:46.377075 4865 generic.go:334] "Generic (PLEG): container finished" podID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" containerID="1c0be88c4425b343893860e323d0e9c1a661e3cd0cadad6714da67ebadf57bf7" exitCode=0
Feb 16 22:49:46 crc kubenswrapper[4865]: I0216 22:49:46.377510 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" event={"ID":"af5ee041-5763-4a28-9d12-7ba21bbb9dbc","Type":"ContainerDied","Data":"1c0be88c4425b343893860e323d0e9c1a661e3cd0cadad6714da67ebadf57bf7"}
Feb 16 22:49:46 crc kubenswrapper[4865]: I0216 22:49:46.377548 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" event={"ID":"af5ee041-5763-4a28-9d12-7ba21bbb9dbc","Type":"ContainerStarted","Data":"e7de83ac43da38cfccce6848cb70ac59cc1a5534f39d3ff21ed4e0cd830ffbe2"}
Feb 16 22:49:46 crc kubenswrapper[4865]: I0216 22:49:46.426732 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c11df6f3-0440-42bf-ae35-01d11fc19313" path="/var/lib/kubelet/pods/c11df6f3-0440-42bf-ae35-01d11fc19313/volumes"
Feb 16 22:49:46 crc kubenswrapper[4865]: I0216 22:49:46.427613 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7f00cc7-9fc0-44de-ae96-5eeab702d1aa" path="/var/lib/kubelet/pods/e7f00cc7-9fc0-44de-ae96-5eeab702d1aa/volumes"
Feb 16 22:49:46 crc kubenswrapper[4865]: I0216 22:49:46.701208 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7d9b5594b6-6jxls"]
Feb 16 22:49:46 crc kubenswrapper[4865]: W0216 22:49:46.705124 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfe8944a2_ee82_4839_b326_104b9f48bbd1.slice/crio-5b9003457d8be5f91c27c54484060c46aa1728d061cc178a1f93f4e29b5beef0 WatchSource:0}: Error finding container 5b9003457d8be5f91c27c54484060c46aa1728d061cc178a1f93f4e29b5beef0: Status 404 returned error can't find the container with id 5b9003457d8be5f91c27c54484060c46aa1728d061cc178a1f93f4e29b5beef0
Feb 16 22:49:46 crc kubenswrapper[4865]: I0216 22:49:46.756029 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b8665584b-ll6f8"]
Feb 16 22:49:46 crc kubenswrapper[4865]: W0216 22:49:46.768088 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod216bed02_0b05_4cef_b94b_c53b9c78a597.slice/crio-6e91a25ce65f70a48b46d594b1a4b62966950add2224e3a40ab0a43a881f6d97 WatchSource:0}: Error finding container 6e91a25ce65f70a48b46d594b1a4b62966950add2224e3a40ab0a43a881f6d97: Status 404 returned error can't find the container with id 6e91a25ce65f70a48b46d594b1a4b62966950add2224e3a40ab0a43a881f6d97
Feb 16 22:49:47 crc kubenswrapper[4865]: I0216 22:49:47.390016 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7d9b5594b6-6jxls" event={"ID":"fe8944a2-ee82-4839-b326-104b9f48bbd1","Type":"ContainerStarted","Data":"a91c828114120d72d79e7a85ccb6d3e9ba23e5183cf59826df237d2b51f21e98"}
Feb 16 22:49:47 crc kubenswrapper[4865]: I0216 22:49:47.391893 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7d9b5594b6-6jxls" event={"ID":"fe8944a2-ee82-4839-b326-104b9f48bbd1","Type":"ContainerStarted","Data":"5b9003457d8be5f91c27c54484060c46aa1728d061cc178a1f93f4e29b5beef0"}
Feb 16 22:49:47 crc kubenswrapper[4865]: I0216 22:49:47.391974 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5b8665584b-ll6f8" event={"ID":"216bed02-0b05-4cef-b94b-c53b9c78a597","Type":"ContainerStarted","Data":"34a5c33ce2ce30eb89c3858430e35555722ce32ba0654a87c09986c855a65915"}
Feb 16 22:49:47 crc kubenswrapper[4865]: I0216 22:49:47.392048 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7d9b5594b6-6jxls"
Feb 16 22:49:47 crc kubenswrapper[4865]: I0216 22:49:47.392163 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5b8665584b-ll6f8"
Feb 16 22:49:47 crc kubenswrapper[4865]: I0216 22:49:47.392229 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5b8665584b-ll6f8" event={"ID":"216bed02-0b05-4cef-b94b-c53b9c78a597","Type":"ContainerStarted","Data":"6e91a25ce65f70a48b46d594b1a4b62966950add2224e3a40ab0a43a881f6d97"}
Feb 16 22:49:47 crc kubenswrapper[4865]: I0216 22:49:47.396200 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7d9b5594b6-6jxls"
Feb 16 22:49:47 crc kubenswrapper[4865]: I0216 22:49:47.418076 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7d9b5594b6-6jxls" podStartSLOduration=3.418050205 podStartE2EDuration="3.418050205s" podCreationTimestamp="2026-02-16 22:49:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:49:47.412567281 +0000 UTC m=+227.736274242" watchObservedRunningTime="2026-02-16 22:49:47.418050205 +0000 UTC m=+227.741757166"
Feb 16 22:49:47 crc kubenswrapper[4865]: I0216 22:49:47.471983 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5b8665584b-ll6f8" podStartSLOduration=3.471942825 podStartE2EDuration="3.471942825s" podCreationTimestamp="2026-02-16 22:49:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:49:47.458772094 +0000 UTC m=+227.782479055" watchObservedRunningTime="2026-02-16 22:49:47.471942825 +0000 UTC m=+227.795649786"
Feb 16 22:49:47 crc kubenswrapper[4865]: I0216 22:49:47.576121 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5b8665584b-ll6f8"
Feb 16 22:49:57 crc kubenswrapper[4865]: I0216 22:49:57.383611 4865 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Feb 16 22:49:57 crc kubenswrapper[4865]: I0216 22:49:57.385570 4865 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 16 22:49:57 crc kubenswrapper[4865]: I0216 22:49:57.385856 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 16 22:49:57 crc kubenswrapper[4865]: I0216 22:49:57.386044 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://9420ab4fc2cfb723f98690b12676378560f2ca512ea2f23ea4108561bcf0a83b" gracePeriod=15
Feb 16 22:49:57 crc kubenswrapper[4865]: I0216 22:49:57.386352 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://1d5f3b449d7f497dd16213f7a376d725ed75adaeb27449fa2005893b85cd8f9b" gracePeriod=15
Feb 16 22:49:57 crc kubenswrapper[4865]: I0216 22:49:57.386372 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://c7ddcd1ac2d814341fef78f1d3dcb8d8e04d5e8506ca793c023085f0e019fc44" gracePeriod=15
Feb 16 22:49:57 crc kubenswrapper[4865]: I0216 22:49:57.386395 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://083e95882332fbcae3b7e3c07a19c05645e560b7c5e92813464a5ca4e5e52a26" gracePeriod=15
Feb 16 22:49:57 crc kubenswrapper[4865]: I0216 22:49:57.386989 4865 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 16 22:49:57 crc kubenswrapper[4865]: E0216 22:49:57.387141 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 16 22:49:57 crc kubenswrapper[4865]: I0216 22:49:57.387162 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 16 22:49:57 crc kubenswrapper[4865]: E0216 22:49:57.387172 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Feb 16 22:49:57 crc kubenswrapper[4865]: I0216 22:49:57.387180 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Feb 16 22:49:57 crc kubenswrapper[4865]: E0216 22:49:57.387191 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 16 22:49:57 crc kubenswrapper[4865]: I0216 22:49:57.387198 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 16 22:49:57 crc kubenswrapper[4865]: E0216 22:49:57.387207 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Feb 16 22:49:57 crc kubenswrapper[4865]: I0216 22:49:57.387213 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Feb 16 22:49:57 crc kubenswrapper[4865]: E0216 22:49:57.387222 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Feb 16 22:49:57 crc kubenswrapper[4865]: I0216 22:49:57.387228 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Feb 16 22:49:57 crc kubenswrapper[4865]: E0216 22:49:57.387237 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Feb 16 22:49:57 crc kubenswrapper[4865]: I0216 22:49:57.387243 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Feb 16 22:49:57 crc kubenswrapper[4865]: E0216 22:49:57.387252 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Feb 16 22:49:57 crc kubenswrapper[4865]: I0216 22:49:57.387258 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Feb 16 22:49:57 crc kubenswrapper[4865]: I0216 22:49:57.387385 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 16 22:49:57 crc kubenswrapper[4865]: I0216 22:49:57.387394 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Feb 16 22:49:57 crc kubenswrapper[4865]: I0216 22:49:57.387405 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Feb 16 22:49:57 crc kubenswrapper[4865]: I0216 22:49:57.387413 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Feb 16 22:49:57 crc kubenswrapper[4865]: I0216 22:49:57.387421 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Feb 16 22:49:57 crc kubenswrapper[4865]: I0216 22:49:57.387432 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 16 22:49:57 crc kubenswrapper[4865]: E0216 22:49:57.387568 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 16 22:49:57 crc kubenswrapper[4865]: I0216 22:49:57.387581 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 16 22:49:57 crc kubenswrapper[4865]: I0216 22:49:57.387687 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 16 22:49:57 crc kubenswrapper[4865]: I0216 22:49:57.387091 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://d1932a518fb9c7c514939c9efdb9f5ef9b52fa277bf27931a2fe29f04c486141" gracePeriod=15
Feb 16 22:49:57 crc kubenswrapper[4865]: I0216 22:49:57.495997 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Feb 16 22:49:57 crc kubenswrapper[4865]: I0216 22:49:57.537367 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 22:49:57 crc kubenswrapper[4865]: I0216 22:49:57.537535 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 22:49:57 crc kubenswrapper[4865]: I0216 22:49:57.537611 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 16 22:49:57 crc kubenswrapper[4865]: I0216 22:49:57.537706 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 22:49:57 crc kubenswrapper[4865]: I0216 22:49:57.537735 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 16 22:49:57 crc kubenswrapper[4865]: I0216 22:49:57.537781 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 16 22:49:57 crc kubenswrapper[4865]: I0216 22:49:57.537849 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 16 22:49:57 crc kubenswrapper[4865]: I0216 22:49:57.537907 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 22:49:57 crc kubenswrapper[4865]: I0216 22:49:57.639833 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 22:49:57 crc kubenswrapper[4865]: I0216 22:49:57.639953 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 22:49:57 crc kubenswrapper[4865]: I0216 22:49:57.640017 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 22:49:57 crc kubenswrapper[4865]: I0216 22:49:57.640061 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 22:49:57 crc kubenswrapper[4865]: I0216 
22:49:57.640108 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 22:49:57 crc kubenswrapper[4865]: I0216 22:49:57.640180 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 22:49:57 crc kubenswrapper[4865]: I0216 22:49:57.640210 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 22:49:57 crc kubenswrapper[4865]: I0216 22:49:57.640250 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 22:49:57 crc kubenswrapper[4865]: I0216 22:49:57.640495 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 22:49:57 crc kubenswrapper[4865]: I0216 22:49:57.640536 4865 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 22:49:57 crc kubenswrapper[4865]: I0216 22:49:57.640533 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 22:49:57 crc kubenswrapper[4865]: I0216 22:49:57.640600 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 22:49:57 crc kubenswrapper[4865]: I0216 22:49:57.640591 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 22:49:57 crc kubenswrapper[4865]: I0216 22:49:57.640664 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 22:49:57 crc kubenswrapper[4865]: I0216 22:49:57.640635 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 22:49:57 crc kubenswrapper[4865]: I0216 22:49:57.640626 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 22:49:57 crc kubenswrapper[4865]: I0216 22:49:57.786144 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 22:49:57 crc kubenswrapper[4865]: E0216 22:49:57.833040 4865 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.53:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1894dbc636c00d76 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 22:49:57.832011126 +0000 UTC m=+238.155718097,LastTimestamp:2026-02-16 22:49:57.832011126 +0000 UTC m=+238.155718097,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 22:49:58 crc kubenswrapper[4865]: I0216 22:49:58.504961 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"15083fa72a2c5f5b1746ca067e0083785a065e18ff281c809cbbea9434938a74"} Feb 16 22:49:58 crc kubenswrapper[4865]: I0216 22:49:58.505059 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"7804c71462ab871b94449acad111dff88e6bfec0f00326451091c01f46d9af2d"} Feb 16 22:49:58 crc kubenswrapper[4865]: I0216 22:49:58.506451 4865 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.53:6443: connect: connection refused" Feb 16 22:49:58 crc kubenswrapper[4865]: I0216 22:49:58.510726 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 16 22:49:58 crc kubenswrapper[4865]: I0216 22:49:58.514724 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 16 22:49:58 crc kubenswrapper[4865]: I0216 22:49:58.516551 4865 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d1932a518fb9c7c514939c9efdb9f5ef9b52fa277bf27931a2fe29f04c486141" exitCode=0 Feb 16 22:49:58 crc kubenswrapper[4865]: I0216 22:49:58.516596 4865 generic.go:334] "Generic 
(PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="083e95882332fbcae3b7e3c07a19c05645e560b7c5e92813464a5ca4e5e52a26" exitCode=0 Feb 16 22:49:58 crc kubenswrapper[4865]: I0216 22:49:58.516607 4865 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1d5f3b449d7f497dd16213f7a376d725ed75adaeb27449fa2005893b85cd8f9b" exitCode=0 Feb 16 22:49:58 crc kubenswrapper[4865]: I0216 22:49:58.516617 4865 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c7ddcd1ac2d814341fef78f1d3dcb8d8e04d5e8506ca793c023085f0e019fc44" exitCode=2 Feb 16 22:49:58 crc kubenswrapper[4865]: I0216 22:49:58.516759 4865 scope.go:117] "RemoveContainer" containerID="e74a443b63735422bd0f5ad851213f393074c4bda570e41ff102c84aa9ac2929" Feb 16 22:49:58 crc kubenswrapper[4865]: I0216 22:49:58.520446 4865 generic.go:334] "Generic (PLEG): container finished" podID="42505af0-c68a-4a99-926a-25a0dee244de" containerID="5352fac7514b2173a99c3fd20bf657966feacb14ee0c75228c3af65406592ebf" exitCode=0 Feb 16 22:49:58 crc kubenswrapper[4865]: I0216 22:49:58.520508 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"42505af0-c68a-4a99-926a-25a0dee244de","Type":"ContainerDied","Data":"5352fac7514b2173a99c3fd20bf657966feacb14ee0c75228c3af65406592ebf"} Feb 16 22:49:58 crc kubenswrapper[4865]: I0216 22:49:58.521420 4865 status_manager.go:851] "Failed to get status for pod" podUID="42505af0-c68a-4a99-926a-25a0dee244de" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.53:6443: connect: connection refused" Feb 16 22:49:58 crc kubenswrapper[4865]: I0216 22:49:58.522412 4865 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.53:6443: connect: connection refused" Feb 16 22:49:59 crc kubenswrapper[4865]: E0216 22:49:59.472740 4865 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.53:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-d87nj" volumeName="registry-storage" Feb 16 22:49:59 crc kubenswrapper[4865]: I0216 22:49:59.558046 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 16 22:49:59 crc kubenswrapper[4865]: I0216 22:49:59.885831 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 16 22:49:59 crc kubenswrapper[4865]: I0216 22:49:59.887128 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 22:49:59 crc kubenswrapper[4865]: I0216 22:49:59.887538 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 16 22:49:59 crc kubenswrapper[4865]: I0216 22:49:59.887641 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 22:49:59 crc kubenswrapper[4865]: I0216 22:49:59.887803 4865 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 16 22:49:59 crc kubenswrapper[4865]: I0216 22:49:59.888216 4865 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.53:6443: connect: connection refused" Feb 16 22:49:59 crc kubenswrapper[4865]: I0216 22:49:59.888812 4865 status_manager.go:851] "Failed to get status for pod" podUID="42505af0-c68a-4a99-926a-25a0dee244de" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.53:6443: connect: connection refused" Feb 16 22:49:59 crc kubenswrapper[4865]: I0216 22:49:59.889364 4865 status_manager.go:851] "Failed to 
get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.53:6443: connect: connection refused" Feb 16 22:49:59 crc kubenswrapper[4865]: I0216 22:49:59.957068 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 16 22:49:59 crc kubenswrapper[4865]: I0216 22:49:59.957784 4865 status_manager.go:851] "Failed to get status for pod" podUID="42505af0-c68a-4a99-926a-25a0dee244de" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.53:6443: connect: connection refused" Feb 16 22:49:59 crc kubenswrapper[4865]: I0216 22:49:59.958032 4865 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.53:6443: connect: connection refused" Feb 16 22:49:59 crc kubenswrapper[4865]: I0216 22:49:59.958420 4865 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.53:6443: connect: connection refused" Feb 16 22:49:59 crc kubenswrapper[4865]: I0216 22:49:59.988754 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 16 
22:49:59 crc kubenswrapper[4865]: I0216 22:49:59.988802 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 16 22:49:59 crc kubenswrapper[4865]: I0216 22:49:59.988840 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/42505af0-c68a-4a99-926a-25a0dee244de-kube-api-access\") pod \"42505af0-c68a-4a99-926a-25a0dee244de\" (UID: \"42505af0-c68a-4a99-926a-25a0dee244de\") " Feb 16 22:49:59 crc kubenswrapper[4865]: I0216 22:49:59.988866 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 22:49:59 crc kubenswrapper[4865]: I0216 22:49:59.988897 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 22:49:59 crc kubenswrapper[4865]: I0216 22:49:59.988906 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/42505af0-c68a-4a99-926a-25a0dee244de-var-lock\") pod \"42505af0-c68a-4a99-926a-25a0dee244de\" (UID: \"42505af0-c68a-4a99-926a-25a0dee244de\") " Feb 16 22:49:59 crc kubenswrapper[4865]: I0216 22:49:59.988961 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42505af0-c68a-4a99-926a-25a0dee244de-var-lock" (OuterVolumeSpecName: "var-lock") pod "42505af0-c68a-4a99-926a-25a0dee244de" (UID: "42505af0-c68a-4a99-926a-25a0dee244de"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 22:49:59 crc kubenswrapper[4865]: I0216 22:49:59.989093 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/42505af0-c68a-4a99-926a-25a0dee244de-kubelet-dir\") pod \"42505af0-c68a-4a99-926a-25a0dee244de\" (UID: \"42505af0-c68a-4a99-926a-25a0dee244de\") " Feb 16 22:49:59 crc kubenswrapper[4865]: I0216 22:49:59.989122 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42505af0-c68a-4a99-926a-25a0dee244de-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "42505af0-c68a-4a99-926a-25a0dee244de" (UID: "42505af0-c68a-4a99-926a-25a0dee244de"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 22:49:59 crc kubenswrapper[4865]: I0216 22:49:59.989720 4865 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/42505af0-c68a-4a99-926a-25a0dee244de-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 16 22:49:59 crc kubenswrapper[4865]: I0216 22:49:59.989744 4865 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 16 22:49:59 crc kubenswrapper[4865]: I0216 22:49:59.989756 4865 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 16 22:49:59 crc kubenswrapper[4865]: I0216 22:49:59.989769 4865 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/42505af0-c68a-4a99-926a-25a0dee244de-var-lock\") on node \"crc\" DevicePath \"\"" Feb 16 22:49:59 crc kubenswrapper[4865]: I0216 22:49:59.995699 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42505af0-c68a-4a99-926a-25a0dee244de-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "42505af0-c68a-4a99-926a-25a0dee244de" (UID: "42505af0-c68a-4a99-926a-25a0dee244de"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:50:00 crc kubenswrapper[4865]: I0216 22:50:00.090416 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/42505af0-c68a-4a99-926a-25a0dee244de-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 22:50:00 crc kubenswrapper[4865]: E0216 22:50:00.221593 4865 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.53:6443: connect: connection refused" Feb 16 22:50:00 crc kubenswrapper[4865]: E0216 22:50:00.221927 4865 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.53:6443: connect: connection refused" Feb 16 22:50:00 crc kubenswrapper[4865]: E0216 22:50:00.222377 4865 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.53:6443: connect: connection refused" Feb 16 22:50:00 crc kubenswrapper[4865]: E0216 22:50:00.222683 4865 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.53:6443: connect: connection refused" Feb 16 22:50:00 crc kubenswrapper[4865]: E0216 22:50:00.222930 4865 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.53:6443: connect: connection refused" Feb 16 22:50:00 crc kubenswrapper[4865]: I0216 22:50:00.222969 4865 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 16 
22:50:00 crc kubenswrapper[4865]: E0216 22:50:00.223333 4865 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.53:6443: connect: connection refused" interval="200ms" Feb 16 22:50:00 crc kubenswrapper[4865]: I0216 22:50:00.418500 4865 status_manager.go:851] "Failed to get status for pod" podUID="42505af0-c68a-4a99-926a-25a0dee244de" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.53:6443: connect: connection refused" Feb 16 22:50:00 crc kubenswrapper[4865]: I0216 22:50:00.419088 4865 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.53:6443: connect: connection refused" Feb 16 22:50:00 crc kubenswrapper[4865]: I0216 22:50:00.419312 4865 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.53:6443: connect: connection refused" Feb 16 22:50:00 crc kubenswrapper[4865]: E0216 22:50:00.424048 4865 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.53:6443: connect: connection refused" interval="400ms" Feb 16 22:50:00 crc kubenswrapper[4865]: I0216 22:50:00.428423 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 16 22:50:00 crc kubenswrapper[4865]: E0216 22:50:00.560059 4865 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.53:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1894dbc636c00d76 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 22:49:57.832011126 +0000 UTC m=+238.155718097,LastTimestamp:2026-02-16 22:49:57.832011126 +0000 UTC m=+238.155718097,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 22:50:00 crc kubenswrapper[4865]: I0216 22:50:00.579658 4865 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9420ab4fc2cfb723f98690b12676378560f2ca512ea2f23ea4108561bcf0a83b" exitCode=0 Feb 16 22:50:00 crc kubenswrapper[4865]: I0216 22:50:00.579870 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 22:50:00 crc kubenswrapper[4865]: I0216 22:50:00.580186 4865 scope.go:117] "RemoveContainer" containerID="d1932a518fb9c7c514939c9efdb9f5ef9b52fa277bf27931a2fe29f04c486141" Feb 16 22:50:00 crc kubenswrapper[4865]: I0216 22:50:00.581591 4865 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.53:6443: connect: connection refused" Feb 16 22:50:00 crc kubenswrapper[4865]: I0216 22:50:00.581925 4865 status_manager.go:851] "Failed to get status for pod" podUID="42505af0-c68a-4a99-926a-25a0dee244de" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.53:6443: connect: connection refused" Feb 16 22:50:00 crc kubenswrapper[4865]: I0216 22:50:00.582419 4865 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.53:6443: connect: connection refused" Feb 16 22:50:00 crc kubenswrapper[4865]: I0216 22:50:00.583127 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"42505af0-c68a-4a99-926a-25a0dee244de","Type":"ContainerDied","Data":"1be9c253148b729aca4c97cf7e7056b2e15e463e4bdfa57769e09ed98beb7b45"} Feb 16 22:50:00 crc kubenswrapper[4865]: I0216 22:50:00.583208 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 16 22:50:00 crc kubenswrapper[4865]: I0216 22:50:00.583215 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1be9c253148b729aca4c97cf7e7056b2e15e463e4bdfa57769e09ed98beb7b45" Feb 16 22:50:00 crc kubenswrapper[4865]: I0216 22:50:00.584952 4865 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.53:6443: connect: connection refused" Feb 16 22:50:00 crc kubenswrapper[4865]: I0216 22:50:00.585432 4865 status_manager.go:851] "Failed to get status for pod" podUID="42505af0-c68a-4a99-926a-25a0dee244de" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.53:6443: connect: connection refused" Feb 16 22:50:00 crc kubenswrapper[4865]: I0216 22:50:00.585785 4865 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.53:6443: connect: connection refused" Feb 16 22:50:00 crc kubenswrapper[4865]: E0216 22:50:00.586649 4865 kuberuntime_gc.go:389] "Failed to remove container log dead symlink" err="remove /var/log/containers/kube-apiserver-crc_openshift-kube-apiserver_kube-apiserver-check-endpoints-d1932a518fb9c7c514939c9efdb9f5ef9b52fa277bf27931a2fe29f04c486141.log: no such file or directory" path="/var/log/containers/kube-apiserver-crc_openshift-kube-apiserver_kube-apiserver-check-endpoints-d1932a518fb9c7c514939c9efdb9f5ef9b52fa277bf27931a2fe29f04c486141.log" Feb 16 
22:50:00 crc kubenswrapper[4865]: I0216 22:50:00.588369 4865 status_manager.go:851] "Failed to get status for pod" podUID="42505af0-c68a-4a99-926a-25a0dee244de" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.53:6443: connect: connection refused" Feb 16 22:50:00 crc kubenswrapper[4865]: I0216 22:50:00.588606 4865 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.53:6443: connect: connection refused" Feb 16 22:50:00 crc kubenswrapper[4865]: I0216 22:50:00.588989 4865 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.53:6443: connect: connection refused" Feb 16 22:50:00 crc kubenswrapper[4865]: I0216 22:50:00.611865 4865 scope.go:117] "RemoveContainer" containerID="083e95882332fbcae3b7e3c07a19c05645e560b7c5e92813464a5ca4e5e52a26" Feb 16 22:50:00 crc kubenswrapper[4865]: I0216 22:50:00.637447 4865 scope.go:117] "RemoveContainer" containerID="1d5f3b449d7f497dd16213f7a376d725ed75adaeb27449fa2005893b85cd8f9b" Feb 16 22:50:00 crc kubenswrapper[4865]: I0216 22:50:00.655195 4865 scope.go:117] "RemoveContainer" containerID="c7ddcd1ac2d814341fef78f1d3dcb8d8e04d5e8506ca793c023085f0e019fc44" Feb 16 22:50:00 crc kubenswrapper[4865]: I0216 22:50:00.675626 4865 scope.go:117] "RemoveContainer" containerID="9420ab4fc2cfb723f98690b12676378560f2ca512ea2f23ea4108561bcf0a83b" Feb 16 22:50:00 crc kubenswrapper[4865]: I0216 22:50:00.701137 4865 scope.go:117] "RemoveContainer" 
containerID="e3e53d3f26f921be607e616b7c8790e3be0605a2b94a3f7be2b62e1beaefb350" Feb 16 22:50:00 crc kubenswrapper[4865]: I0216 22:50:00.731621 4865 scope.go:117] "RemoveContainer" containerID="d1932a518fb9c7c514939c9efdb9f5ef9b52fa277bf27931a2fe29f04c486141" Feb 16 22:50:00 crc kubenswrapper[4865]: E0216 22:50:00.732461 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1932a518fb9c7c514939c9efdb9f5ef9b52fa277bf27931a2fe29f04c486141\": container with ID starting with d1932a518fb9c7c514939c9efdb9f5ef9b52fa277bf27931a2fe29f04c486141 not found: ID does not exist" containerID="d1932a518fb9c7c514939c9efdb9f5ef9b52fa277bf27931a2fe29f04c486141" Feb 16 22:50:00 crc kubenswrapper[4865]: I0216 22:50:00.732536 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1932a518fb9c7c514939c9efdb9f5ef9b52fa277bf27931a2fe29f04c486141"} err="failed to get container status \"d1932a518fb9c7c514939c9efdb9f5ef9b52fa277bf27931a2fe29f04c486141\": rpc error: code = NotFound desc = could not find container \"d1932a518fb9c7c514939c9efdb9f5ef9b52fa277bf27931a2fe29f04c486141\": container with ID starting with d1932a518fb9c7c514939c9efdb9f5ef9b52fa277bf27931a2fe29f04c486141 not found: ID does not exist" Feb 16 22:50:00 crc kubenswrapper[4865]: I0216 22:50:00.732590 4865 scope.go:117] "RemoveContainer" containerID="083e95882332fbcae3b7e3c07a19c05645e560b7c5e92813464a5ca4e5e52a26" Feb 16 22:50:00 crc kubenswrapper[4865]: E0216 22:50:00.733119 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"083e95882332fbcae3b7e3c07a19c05645e560b7c5e92813464a5ca4e5e52a26\": container with ID starting with 083e95882332fbcae3b7e3c07a19c05645e560b7c5e92813464a5ca4e5e52a26 not found: ID does not exist" containerID="083e95882332fbcae3b7e3c07a19c05645e560b7c5e92813464a5ca4e5e52a26" Feb 16 22:50:00 crc 
kubenswrapper[4865]: I0216 22:50:00.733208 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"083e95882332fbcae3b7e3c07a19c05645e560b7c5e92813464a5ca4e5e52a26"} err="failed to get container status \"083e95882332fbcae3b7e3c07a19c05645e560b7c5e92813464a5ca4e5e52a26\": rpc error: code = NotFound desc = could not find container \"083e95882332fbcae3b7e3c07a19c05645e560b7c5e92813464a5ca4e5e52a26\": container with ID starting with 083e95882332fbcae3b7e3c07a19c05645e560b7c5e92813464a5ca4e5e52a26 not found: ID does not exist" Feb 16 22:50:00 crc kubenswrapper[4865]: I0216 22:50:00.733298 4865 scope.go:117] "RemoveContainer" containerID="1d5f3b449d7f497dd16213f7a376d725ed75adaeb27449fa2005893b85cd8f9b" Feb 16 22:50:00 crc kubenswrapper[4865]: E0216 22:50:00.733785 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d5f3b449d7f497dd16213f7a376d725ed75adaeb27449fa2005893b85cd8f9b\": container with ID starting with 1d5f3b449d7f497dd16213f7a376d725ed75adaeb27449fa2005893b85cd8f9b not found: ID does not exist" containerID="1d5f3b449d7f497dd16213f7a376d725ed75adaeb27449fa2005893b85cd8f9b" Feb 16 22:50:00 crc kubenswrapper[4865]: I0216 22:50:00.733829 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d5f3b449d7f497dd16213f7a376d725ed75adaeb27449fa2005893b85cd8f9b"} err="failed to get container status \"1d5f3b449d7f497dd16213f7a376d725ed75adaeb27449fa2005893b85cd8f9b\": rpc error: code = NotFound desc = could not find container \"1d5f3b449d7f497dd16213f7a376d725ed75adaeb27449fa2005893b85cd8f9b\": container with ID starting with 1d5f3b449d7f497dd16213f7a376d725ed75adaeb27449fa2005893b85cd8f9b not found: ID does not exist" Feb 16 22:50:00 crc kubenswrapper[4865]: I0216 22:50:00.733857 4865 scope.go:117] "RemoveContainer" containerID="c7ddcd1ac2d814341fef78f1d3dcb8d8e04d5e8506ca793c023085f0e019fc44" Feb 16 
22:50:00 crc kubenswrapper[4865]: E0216 22:50:00.734119 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7ddcd1ac2d814341fef78f1d3dcb8d8e04d5e8506ca793c023085f0e019fc44\": container with ID starting with c7ddcd1ac2d814341fef78f1d3dcb8d8e04d5e8506ca793c023085f0e019fc44 not found: ID does not exist" containerID="c7ddcd1ac2d814341fef78f1d3dcb8d8e04d5e8506ca793c023085f0e019fc44" Feb 16 22:50:00 crc kubenswrapper[4865]: I0216 22:50:00.734162 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7ddcd1ac2d814341fef78f1d3dcb8d8e04d5e8506ca793c023085f0e019fc44"} err="failed to get container status \"c7ddcd1ac2d814341fef78f1d3dcb8d8e04d5e8506ca793c023085f0e019fc44\": rpc error: code = NotFound desc = could not find container \"c7ddcd1ac2d814341fef78f1d3dcb8d8e04d5e8506ca793c023085f0e019fc44\": container with ID starting with c7ddcd1ac2d814341fef78f1d3dcb8d8e04d5e8506ca793c023085f0e019fc44 not found: ID does not exist" Feb 16 22:50:00 crc kubenswrapper[4865]: I0216 22:50:00.734185 4865 scope.go:117] "RemoveContainer" containerID="9420ab4fc2cfb723f98690b12676378560f2ca512ea2f23ea4108561bcf0a83b" Feb 16 22:50:00 crc kubenswrapper[4865]: E0216 22:50:00.734531 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9420ab4fc2cfb723f98690b12676378560f2ca512ea2f23ea4108561bcf0a83b\": container with ID starting with 9420ab4fc2cfb723f98690b12676378560f2ca512ea2f23ea4108561bcf0a83b not found: ID does not exist" containerID="9420ab4fc2cfb723f98690b12676378560f2ca512ea2f23ea4108561bcf0a83b" Feb 16 22:50:00 crc kubenswrapper[4865]: I0216 22:50:00.734560 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9420ab4fc2cfb723f98690b12676378560f2ca512ea2f23ea4108561bcf0a83b"} err="failed to get container status 
\"9420ab4fc2cfb723f98690b12676378560f2ca512ea2f23ea4108561bcf0a83b\": rpc error: code = NotFound desc = could not find container \"9420ab4fc2cfb723f98690b12676378560f2ca512ea2f23ea4108561bcf0a83b\": container with ID starting with 9420ab4fc2cfb723f98690b12676378560f2ca512ea2f23ea4108561bcf0a83b not found: ID does not exist" Feb 16 22:50:00 crc kubenswrapper[4865]: I0216 22:50:00.734581 4865 scope.go:117] "RemoveContainer" containerID="e3e53d3f26f921be607e616b7c8790e3be0605a2b94a3f7be2b62e1beaefb350" Feb 16 22:50:00 crc kubenswrapper[4865]: E0216 22:50:00.734840 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3e53d3f26f921be607e616b7c8790e3be0605a2b94a3f7be2b62e1beaefb350\": container with ID starting with e3e53d3f26f921be607e616b7c8790e3be0605a2b94a3f7be2b62e1beaefb350 not found: ID does not exist" containerID="e3e53d3f26f921be607e616b7c8790e3be0605a2b94a3f7be2b62e1beaefb350" Feb 16 22:50:00 crc kubenswrapper[4865]: I0216 22:50:00.734868 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3e53d3f26f921be607e616b7c8790e3be0605a2b94a3f7be2b62e1beaefb350"} err="failed to get container status \"e3e53d3f26f921be607e616b7c8790e3be0605a2b94a3f7be2b62e1beaefb350\": rpc error: code = NotFound desc = could not find container \"e3e53d3f26f921be607e616b7c8790e3be0605a2b94a3f7be2b62e1beaefb350\": container with ID starting with e3e53d3f26f921be607e616b7c8790e3be0605a2b94a3f7be2b62e1beaefb350 not found: ID does not exist" Feb 16 22:50:00 crc kubenswrapper[4865]: E0216 22:50:00.825434 4865 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.53:6443: connect: connection refused" interval="800ms" Feb 16 22:50:01 crc kubenswrapper[4865]: E0216 22:50:01.627357 4865 controller.go:145] "Failed to ensure 
lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.53:6443: connect: connection refused" interval="1.6s" Feb 16 22:50:02 crc kubenswrapper[4865]: E0216 22:50:02.560456 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:50:02Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:50:02Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:50:02Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T22:50:02Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.53:6443: connect: connection refused" Feb 16 22:50:02 crc kubenswrapper[4865]: E0216 22:50:02.561124 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.53:6443: connect: connection refused" Feb 16 22:50:02 crc kubenswrapper[4865]: E0216 22:50:02.561974 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get 
\"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.53:6443: connect: connection refused" Feb 16 22:50:02 crc kubenswrapper[4865]: E0216 22:50:02.562818 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.53:6443: connect: connection refused" Feb 16 22:50:02 crc kubenswrapper[4865]: E0216 22:50:02.563577 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.53:6443: connect: connection refused" Feb 16 22:50:02 crc kubenswrapper[4865]: E0216 22:50:02.563658 4865 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 22:50:03 crc kubenswrapper[4865]: E0216 22:50:03.071657 4865 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-pod42505af0_c68a_4a99_926a_25a0dee244de.slice/crio-1be9c253148b729aca4c97cf7e7056b2e15e463e4bdfa57769e09ed98beb7b45\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-pod42505af0_c68a_4a99_926a_25a0dee244de.slice\": RecentStats: unable to find data in memory cache]" Feb 16 22:50:03 crc kubenswrapper[4865]: E0216 22:50:03.230574 4865 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.53:6443: connect: connection refused" interval="3.2s" Feb 16 22:50:06 crc kubenswrapper[4865]: E0216 22:50:06.431535 4865 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 
38.102.83.53:6443: connect: connection refused" interval="6.4s" Feb 16 22:50:07 crc kubenswrapper[4865]: I0216 22:50:07.185880 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-675st" podUID="25c0299c-b4a2-4c82-881f-808b610fb325" containerName="oauth-openshift" containerID="cri-o://6923bced2635912f8ffa4f0386b434c6be6629937d146cda4c448c6f4806aca0" gracePeriod=15 Feb 16 22:50:07 crc kubenswrapper[4865]: I0216 22:50:07.638350 4865 generic.go:334] "Generic (PLEG): container finished" podID="25c0299c-b4a2-4c82-881f-808b610fb325" containerID="6923bced2635912f8ffa4f0386b434c6be6629937d146cda4c448c6f4806aca0" exitCode=0 Feb 16 22:50:07 crc kubenswrapper[4865]: I0216 22:50:07.638525 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-675st" event={"ID":"25c0299c-b4a2-4c82-881f-808b610fb325","Type":"ContainerDied","Data":"6923bced2635912f8ffa4f0386b434c6be6629937d146cda4c448c6f4806aca0"} Feb 16 22:50:07 crc kubenswrapper[4865]: I0216 22:50:07.815428 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-675st" Feb 16 22:50:07 crc kubenswrapper[4865]: I0216 22:50:07.816241 4865 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.53:6443: connect: connection refused" Feb 16 22:50:07 crc kubenswrapper[4865]: I0216 22:50:07.817104 4865 status_manager.go:851] "Failed to get status for pod" podUID="42505af0-c68a-4a99-926a-25a0dee244de" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.53:6443: connect: connection refused" Feb 16 22:50:07 crc kubenswrapper[4865]: I0216 22:50:07.817549 4865 status_manager.go:851] "Failed to get status for pod" podUID="25c0299c-b4a2-4c82-881f-808b610fb325" pod="openshift-authentication/oauth-openshift-558db77b4-675st" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-675st\": dial tcp 38.102.83.53:6443: connect: connection refused" Feb 16 22:50:07 crc kubenswrapper[4865]: I0216 22:50:07.947955 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/25c0299c-b4a2-4c82-881f-808b610fb325-v4-0-config-system-serving-cert\") pod \"25c0299c-b4a2-4c82-881f-808b610fb325\" (UID: \"25c0299c-b4a2-4c82-881f-808b610fb325\") " Feb 16 22:50:07 crc kubenswrapper[4865]: I0216 22:50:07.948125 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/25c0299c-b4a2-4c82-881f-808b610fb325-v4-0-config-system-trusted-ca-bundle\") pod 
\"25c0299c-b4a2-4c82-881f-808b610fb325\" (UID: \"25c0299c-b4a2-4c82-881f-808b610fb325\") " Feb 16 22:50:07 crc kubenswrapper[4865]: I0216 22:50:07.948189 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/25c0299c-b4a2-4c82-881f-808b610fb325-audit-dir\") pod \"25c0299c-b4a2-4c82-881f-808b610fb325\" (UID: \"25c0299c-b4a2-4c82-881f-808b610fb325\") " Feb 16 22:50:07 crc kubenswrapper[4865]: I0216 22:50:07.948393 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25c0299c-b4a2-4c82-881f-808b610fb325-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "25c0299c-b4a2-4c82-881f-808b610fb325" (UID: "25c0299c-b4a2-4c82-881f-808b610fb325"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 22:50:07 crc kubenswrapper[4865]: I0216 22:50:07.948545 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/25c0299c-b4a2-4c82-881f-808b610fb325-v4-0-config-system-service-ca\") pod \"25c0299c-b4a2-4c82-881f-808b610fb325\" (UID: \"25c0299c-b4a2-4c82-881f-808b610fb325\") " Feb 16 22:50:07 crc kubenswrapper[4865]: I0216 22:50:07.949580 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25c0299c-b4a2-4c82-881f-808b610fb325-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "25c0299c-b4a2-4c82-881f-808b610fb325" (UID: "25c0299c-b4a2-4c82-881f-808b610fb325"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:50:07 crc kubenswrapper[4865]: I0216 22:50:07.949566 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25c0299c-b4a2-4c82-881f-808b610fb325-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "25c0299c-b4a2-4c82-881f-808b610fb325" (UID: "25c0299c-b4a2-4c82-881f-808b610fb325"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:50:07 crc kubenswrapper[4865]: I0216 22:50:07.949768 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/25c0299c-b4a2-4c82-881f-808b610fb325-v4-0-config-system-session\") pod \"25c0299c-b4a2-4c82-881f-808b610fb325\" (UID: \"25c0299c-b4a2-4c82-881f-808b610fb325\") " Feb 16 22:50:07 crc kubenswrapper[4865]: I0216 22:50:07.949848 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxhkj\" (UniqueName: \"kubernetes.io/projected/25c0299c-b4a2-4c82-881f-808b610fb325-kube-api-access-hxhkj\") pod \"25c0299c-b4a2-4c82-881f-808b610fb325\" (UID: \"25c0299c-b4a2-4c82-881f-808b610fb325\") " Feb 16 22:50:07 crc kubenswrapper[4865]: I0216 22:50:07.951068 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/25c0299c-b4a2-4c82-881f-808b610fb325-audit-policies\") pod \"25c0299c-b4a2-4c82-881f-808b610fb325\" (UID: \"25c0299c-b4a2-4c82-881f-808b610fb325\") " Feb 16 22:50:07 crc kubenswrapper[4865]: I0216 22:50:07.951127 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25c0299c-b4a2-4c82-881f-808b610fb325-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "25c0299c-b4a2-4c82-881f-808b610fb325" (UID: 
"25c0299c-b4a2-4c82-881f-808b610fb325"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:50:07 crc kubenswrapper[4865]: I0216 22:50:07.951178 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/25c0299c-b4a2-4c82-881f-808b610fb325-v4-0-config-user-idp-0-file-data\") pod \"25c0299c-b4a2-4c82-881f-808b610fb325\" (UID: \"25c0299c-b4a2-4c82-881f-808b610fb325\") " Feb 16 22:50:07 crc kubenswrapper[4865]: I0216 22:50:07.951421 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/25c0299c-b4a2-4c82-881f-808b610fb325-v4-0-config-user-template-provider-selection\") pod \"25c0299c-b4a2-4c82-881f-808b610fb325\" (UID: \"25c0299c-b4a2-4c82-881f-808b610fb325\") " Feb 16 22:50:07 crc kubenswrapper[4865]: I0216 22:50:07.951476 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/25c0299c-b4a2-4c82-881f-808b610fb325-v4-0-config-user-template-error\") pod \"25c0299c-b4a2-4c82-881f-808b610fb325\" (UID: \"25c0299c-b4a2-4c82-881f-808b610fb325\") " Feb 16 22:50:07 crc kubenswrapper[4865]: I0216 22:50:07.951517 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/25c0299c-b4a2-4c82-881f-808b610fb325-v4-0-config-system-router-certs\") pod \"25c0299c-b4a2-4c82-881f-808b610fb325\" (UID: \"25c0299c-b4a2-4c82-881f-808b610fb325\") " Feb 16 22:50:07 crc kubenswrapper[4865]: I0216 22:50:07.951559 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/25c0299c-b4a2-4c82-881f-808b610fb325-v4-0-config-system-cliconfig\") pod 
\"25c0299c-b4a2-4c82-881f-808b610fb325\" (UID: \"25c0299c-b4a2-4c82-881f-808b610fb325\") " Feb 16 22:50:07 crc kubenswrapper[4865]: I0216 22:50:07.951593 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/25c0299c-b4a2-4c82-881f-808b610fb325-v4-0-config-system-ocp-branding-template\") pod \"25c0299c-b4a2-4c82-881f-808b610fb325\" (UID: \"25c0299c-b4a2-4c82-881f-808b610fb325\") " Feb 16 22:50:07 crc kubenswrapper[4865]: I0216 22:50:07.951637 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/25c0299c-b4a2-4c82-881f-808b610fb325-v4-0-config-user-template-login\") pod \"25c0299c-b4a2-4c82-881f-808b610fb325\" (UID: \"25c0299c-b4a2-4c82-881f-808b610fb325\") " Feb 16 22:50:07 crc kubenswrapper[4865]: I0216 22:50:07.952104 4865 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/25c0299c-b4a2-4c82-881f-808b610fb325-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 22:50:07 crc kubenswrapper[4865]: I0216 22:50:07.952147 4865 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/25c0299c-b4a2-4c82-881f-808b610fb325-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 16 22:50:07 crc kubenswrapper[4865]: I0216 22:50:07.952175 4865 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/25c0299c-b4a2-4c82-881f-808b610fb325-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 22:50:07 crc kubenswrapper[4865]: I0216 22:50:07.952202 4865 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/25c0299c-b4a2-4c82-881f-808b610fb325-audit-policies\") on node \"crc\" 
DevicePath \"\"" Feb 16 22:50:07 crc kubenswrapper[4865]: I0216 22:50:07.954516 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25c0299c-b4a2-4c82-881f-808b610fb325-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "25c0299c-b4a2-4c82-881f-808b610fb325" (UID: "25c0299c-b4a2-4c82-881f-808b610fb325"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:50:07 crc kubenswrapper[4865]: I0216 22:50:07.956930 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25c0299c-b4a2-4c82-881f-808b610fb325-kube-api-access-hxhkj" (OuterVolumeSpecName: "kube-api-access-hxhkj") pod "25c0299c-b4a2-4c82-881f-808b610fb325" (UID: "25c0299c-b4a2-4c82-881f-808b610fb325"). InnerVolumeSpecName "kube-api-access-hxhkj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:50:07 crc kubenswrapper[4865]: I0216 22:50:07.957232 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25c0299c-b4a2-4c82-881f-808b610fb325-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "25c0299c-b4a2-4c82-881f-808b610fb325" (UID: "25c0299c-b4a2-4c82-881f-808b610fb325"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:50:07 crc kubenswrapper[4865]: I0216 22:50:07.964524 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25c0299c-b4a2-4c82-881f-808b610fb325-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "25c0299c-b4a2-4c82-881f-808b610fb325" (UID: "25c0299c-b4a2-4c82-881f-808b610fb325"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:50:07 crc kubenswrapper[4865]: I0216 22:50:07.965337 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25c0299c-b4a2-4c82-881f-808b610fb325-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "25c0299c-b4a2-4c82-881f-808b610fb325" (UID: "25c0299c-b4a2-4c82-881f-808b610fb325"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:50:07 crc kubenswrapper[4865]: I0216 22:50:07.965866 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25c0299c-b4a2-4c82-881f-808b610fb325-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "25c0299c-b4a2-4c82-881f-808b610fb325" (UID: "25c0299c-b4a2-4c82-881f-808b610fb325"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:50:07 crc kubenswrapper[4865]: I0216 22:50:07.969838 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25c0299c-b4a2-4c82-881f-808b610fb325-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "25c0299c-b4a2-4c82-881f-808b610fb325" (UID: "25c0299c-b4a2-4c82-881f-808b610fb325"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:50:07 crc kubenswrapper[4865]: I0216 22:50:07.970328 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25c0299c-b4a2-4c82-881f-808b610fb325-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "25c0299c-b4a2-4c82-881f-808b610fb325" (UID: "25c0299c-b4a2-4c82-881f-808b610fb325"). 
InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:50:07 crc kubenswrapper[4865]: I0216 22:50:07.971000 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25c0299c-b4a2-4c82-881f-808b610fb325-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "25c0299c-b4a2-4c82-881f-808b610fb325" (UID: "25c0299c-b4a2-4c82-881f-808b610fb325"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:50:07 crc kubenswrapper[4865]: I0216 22:50:07.971137 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25c0299c-b4a2-4c82-881f-808b610fb325-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "25c0299c-b4a2-4c82-881f-808b610fb325" (UID: "25c0299c-b4a2-4c82-881f-808b610fb325"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:50:08 crc kubenswrapper[4865]: I0216 22:50:08.053485 4865 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/25c0299c-b4a2-4c82-881f-808b610fb325-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 16 22:50:08 crc kubenswrapper[4865]: I0216 22:50:08.053534 4865 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/25c0299c-b4a2-4c82-881f-808b610fb325-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 22:50:08 crc kubenswrapper[4865]: I0216 22:50:08.053548 4865 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/25c0299c-b4a2-4c82-881f-808b610fb325-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 16 22:50:08 crc kubenswrapper[4865]: I0216 22:50:08.053563 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxhkj\" (UniqueName: \"kubernetes.io/projected/25c0299c-b4a2-4c82-881f-808b610fb325-kube-api-access-hxhkj\") on node \"crc\" DevicePath \"\"" Feb 16 22:50:08 crc kubenswrapper[4865]: I0216 22:50:08.053575 4865 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/25c0299c-b4a2-4c82-881f-808b610fb325-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 16 22:50:08 crc kubenswrapper[4865]: I0216 22:50:08.053590 4865 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/25c0299c-b4a2-4c82-881f-808b610fb325-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 16 22:50:08 crc kubenswrapper[4865]: I0216 22:50:08.053604 4865 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/25c0299c-b4a2-4c82-881f-808b610fb325-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 16 22:50:08 crc kubenswrapper[4865]: I0216 22:50:08.053615 4865 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/25c0299c-b4a2-4c82-881f-808b610fb325-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 16 22:50:08 crc kubenswrapper[4865]: I0216 22:50:08.053626 4865 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/25c0299c-b4a2-4c82-881f-808b610fb325-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 16 22:50:08 crc kubenswrapper[4865]: I0216 22:50:08.053637 4865 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/25c0299c-b4a2-4c82-881f-808b610fb325-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 16 22:50:08 crc kubenswrapper[4865]: I0216 22:50:08.649390 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-675st" event={"ID":"25c0299c-b4a2-4c82-881f-808b610fb325","Type":"ContainerDied","Data":"9436d27baf44397fd917070ac16ce796acfa207c595aa33d2a0e37ccf8edbe36"} Feb 16 22:50:08 crc kubenswrapper[4865]: I0216 22:50:08.649499 4865 scope.go:117] "RemoveContainer" containerID="6923bced2635912f8ffa4f0386b434c6be6629937d146cda4c448c6f4806aca0" Feb 16 22:50:08 crc kubenswrapper[4865]: I0216 22:50:08.650014 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-675st" Feb 16 22:50:08 crc kubenswrapper[4865]: I0216 22:50:08.652835 4865 status_manager.go:851] "Failed to get status for pod" podUID="25c0299c-b4a2-4c82-881f-808b610fb325" pod="openshift-authentication/oauth-openshift-558db77b4-675st" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-675st\": dial tcp 38.102.83.53:6443: connect: connection refused" Feb 16 22:50:08 crc kubenswrapper[4865]: I0216 22:50:08.653820 4865 status_manager.go:851] "Failed to get status for pod" podUID="42505af0-c68a-4a99-926a-25a0dee244de" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.53:6443: connect: connection refused" Feb 16 22:50:08 crc kubenswrapper[4865]: I0216 22:50:08.654468 4865 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.53:6443: connect: connection refused" Feb 16 22:50:08 crc kubenswrapper[4865]: I0216 22:50:08.656597 4865 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.53:6443: connect: connection refused" Feb 16 22:50:08 crc kubenswrapper[4865]: I0216 22:50:08.657513 4865 status_manager.go:851] "Failed to get status for pod" podUID="42505af0-c68a-4a99-926a-25a0dee244de" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.53:6443: connect: connection refused" Feb 16 22:50:08 crc kubenswrapper[4865]: I0216 22:50:08.658048 4865 status_manager.go:851] "Failed to get status for pod" podUID="25c0299c-b4a2-4c82-881f-808b610fb325" pod="openshift-authentication/oauth-openshift-558db77b4-675st" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-675st\": dial tcp 38.102.83.53:6443: connect: connection refused" Feb 16 22:50:09 crc kubenswrapper[4865]: I0216 22:50:09.414221 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 22:50:09 crc kubenswrapper[4865]: I0216 22:50:09.415315 4865 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.53:6443: connect: connection refused" Feb 16 22:50:09 crc kubenswrapper[4865]: I0216 22:50:09.416018 4865 status_manager.go:851] "Failed to get status for pod" podUID="42505af0-c68a-4a99-926a-25a0dee244de" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.53:6443: connect: connection refused" Feb 16 22:50:09 crc kubenswrapper[4865]: I0216 22:50:09.416730 4865 status_manager.go:851] "Failed to get status for pod" podUID="25c0299c-b4a2-4c82-881f-808b610fb325" pod="openshift-authentication/oauth-openshift-558db77b4-675st" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-675st\": dial tcp 38.102.83.53:6443: connect: connection refused" Feb 16 22:50:09 crc 
kubenswrapper[4865]: I0216 22:50:09.434371 4865 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c8eb70ba-80dc-4449-b629-8f2d3462ac25" Feb 16 22:50:09 crc kubenswrapper[4865]: I0216 22:50:09.434406 4865 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c8eb70ba-80dc-4449-b629-8f2d3462ac25" Feb 16 22:50:09 crc kubenswrapper[4865]: E0216 22:50:09.434876 4865 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.53:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 22:50:09 crc kubenswrapper[4865]: I0216 22:50:09.435838 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 22:50:09 crc kubenswrapper[4865]: I0216 22:50:09.659076 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"4c74c31092cb3b328923e62448fac6ba131c390ad4deed0a9d57659b91dbe4b6"} Feb 16 22:50:10 crc kubenswrapper[4865]: I0216 22:50:10.425315 4865 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.53:6443: connect: connection refused" Feb 16 22:50:10 crc kubenswrapper[4865]: I0216 22:50:10.426723 4865 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 
38.102.83.53:6443: connect: connection refused" Feb 16 22:50:10 crc kubenswrapper[4865]: I0216 22:50:10.427599 4865 status_manager.go:851] "Failed to get status for pod" podUID="25c0299c-b4a2-4c82-881f-808b610fb325" pod="openshift-authentication/oauth-openshift-558db77b4-675st" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-675st\": dial tcp 38.102.83.53:6443: connect: connection refused" Feb 16 22:50:10 crc kubenswrapper[4865]: I0216 22:50:10.428330 4865 status_manager.go:851] "Failed to get status for pod" podUID="42505af0-c68a-4a99-926a-25a0dee244de" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.53:6443: connect: connection refused" Feb 16 22:50:10 crc kubenswrapper[4865]: E0216 22:50:10.561890 4865 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.53:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1894dbc636c00d76 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 22:49:57.832011126 +0000 UTC m=+238.155718097,LastTimestamp:2026-02-16 22:49:57.832011126 +0000 UTC m=+238.155718097,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 22:50:10 crc kubenswrapper[4865]: I0216 22:50:10.669037 4865 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="ec1d8603b42d1ce2d5b3eebb1b14fa34602848ffa2caf8fd5a947813d56442b2" exitCode=0 Feb 16 22:50:10 crc kubenswrapper[4865]: I0216 22:50:10.669118 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"ec1d8603b42d1ce2d5b3eebb1b14fa34602848ffa2caf8fd5a947813d56442b2"} Feb 16 22:50:10 crc kubenswrapper[4865]: I0216 22:50:10.669745 4865 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c8eb70ba-80dc-4449-b629-8f2d3462ac25" Feb 16 22:50:10 crc kubenswrapper[4865]: I0216 22:50:10.669813 4865 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c8eb70ba-80dc-4449-b629-8f2d3462ac25" Feb 16 22:50:10 crc kubenswrapper[4865]: E0216 22:50:10.670584 4865 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.53:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 22:50:10 crc kubenswrapper[4865]: I0216 22:50:10.670599 4865 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.53:6443: connect: connection refused" Feb 16 22:50:10 crc kubenswrapper[4865]: I0216 22:50:10.671199 4865 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.53:6443: connect: connection refused" Feb 16 22:50:10 crc kubenswrapper[4865]: I0216 22:50:10.671864 4865 status_manager.go:851] "Failed to get status for pod" podUID="42505af0-c68a-4a99-926a-25a0dee244de" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.53:6443: connect: connection refused" Feb 16 22:50:10 crc kubenswrapper[4865]: I0216 22:50:10.672528 4865 status_manager.go:851] "Failed to get status for pod" podUID="25c0299c-b4a2-4c82-881f-808b610fb325" pod="openshift-authentication/oauth-openshift-558db77b4-675st" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-675st\": dial tcp 38.102.83.53:6443: connect: connection refused" Feb 16 22:50:11 crc kubenswrapper[4865]: I0216 22:50:11.685534 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"578c2fee1031f3caa9407e2d7b2c7e51ce03482ce45c5d301bb670e1a97e6ed2"} Feb 16 22:50:11 crc kubenswrapper[4865]: I0216 22:50:11.686396 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"2c684b7e0010afb873e608f927b8880af7bb316c2c8788d1b7a28b3c043ab1b1"} Feb 16 22:50:11 crc kubenswrapper[4865]: I0216 22:50:11.686409 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"92c6f05a29140f0906129b1e0f7ee4ccd3972186e4dbcbc28d17b74ee989f1a5"} Feb 16 22:50:12 crc 
kubenswrapper[4865]: I0216 22:50:12.695133 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 16 22:50:12 crc kubenswrapper[4865]: I0216 22:50:12.695197 4865 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="ef4fbb630ee354131544dade7633424757a1a1b885fec2e271da25984dbec3ac" exitCode=1 Feb 16 22:50:12 crc kubenswrapper[4865]: I0216 22:50:12.695311 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"ef4fbb630ee354131544dade7633424757a1a1b885fec2e271da25984dbec3ac"} Feb 16 22:50:12 crc kubenswrapper[4865]: I0216 22:50:12.696040 4865 scope.go:117] "RemoveContainer" containerID="ef4fbb630ee354131544dade7633424757a1a1b885fec2e271da25984dbec3ac" Feb 16 22:50:12 crc kubenswrapper[4865]: I0216 22:50:12.698676 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b47ee5edbf0be42a25ca08a98b009634562cce0db384beea08292e46e595e81a"} Feb 16 22:50:12 crc kubenswrapper[4865]: I0216 22:50:12.698708 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d02ac532d4ad636ceac83ff1b8bf5b8f566dace41fc8921ca7bde39a6ee61d46"} Feb 16 22:50:12 crc kubenswrapper[4865]: I0216 22:50:12.698894 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 22:50:12 crc kubenswrapper[4865]: I0216 22:50:12.699049 4865 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="c8eb70ba-80dc-4449-b629-8f2d3462ac25" Feb 16 22:50:12 crc kubenswrapper[4865]: I0216 22:50:12.699141 4865 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c8eb70ba-80dc-4449-b629-8f2d3462ac25" Feb 16 22:50:13 crc kubenswrapper[4865]: E0216 22:50:13.219913 4865 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-pod42505af0_c68a_4a99_926a_25a0dee244de.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-pod42505af0_c68a_4a99_926a_25a0dee244de.slice/crio-1be9c253148b729aca4c97cf7e7056b2e15e463e4bdfa57769e09ed98beb7b45\": RecentStats: unable to find data in memory cache]" Feb 16 22:50:13 crc kubenswrapper[4865]: I0216 22:50:13.707983 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 16 22:50:13 crc kubenswrapper[4865]: I0216 22:50:13.708062 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"820f60afd9df034363381730b27a77f78012f9de3fb3ec5a9a8efd3670e9744c"} Feb 16 22:50:14 crc kubenswrapper[4865]: I0216 22:50:14.436456 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 22:50:14 crc kubenswrapper[4865]: I0216 22:50:14.436874 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 22:50:14 crc kubenswrapper[4865]: I0216 22:50:14.443760 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 22:50:17 crc kubenswrapper[4865]: I0216 22:50:17.728062 4865 kubelet.go:1914] "Deleted 
mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 22:50:17 crc kubenswrapper[4865]: I0216 22:50:17.762851 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8eb70ba-80dc-4449-b629-8f2d3462ac25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:50:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:50:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:50:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-cert-syncer kube-apiserver-cert-regeneration-controller kube-apiserver-insecure-readyz kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T22:50:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-cert-syncer kube-apiserver-cert-regeneration-controller kube-apiserver-insecure-readyz 
kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"image\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1d8603b42d1ce2d5b3eebb1b14fa34602848ffa2caf8fd5a947813d56442b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1d8603b42d1ce2d5b3eebb1b14fa34602848ffa2caf8fd5a947813d56442b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T22:50:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T22:50:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Pending\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": pods 
\"kube-apiserver-crc\" not found" Feb 16 22:50:17 crc kubenswrapper[4865]: I0216 22:50:17.840034 4865 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="d159382b-a9ee-4df4-818c-dc8c82ce3dd3" Feb 16 22:50:18 crc kubenswrapper[4865]: I0216 22:50:18.739545 4865 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c8eb70ba-80dc-4449-b629-8f2d3462ac25" Feb 16 22:50:18 crc kubenswrapper[4865]: I0216 22:50:18.739594 4865 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c8eb70ba-80dc-4449-b629-8f2d3462ac25" Feb 16 22:50:18 crc kubenswrapper[4865]: I0216 22:50:18.743060 4865 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="d159382b-a9ee-4df4-818c-dc8c82ce3dd3" Feb 16 22:50:18 crc kubenswrapper[4865]: I0216 22:50:18.746492 4865 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://92c6f05a29140f0906129b1e0f7ee4ccd3972186e4dbcbc28d17b74ee989f1a5" Feb 16 22:50:18 crc kubenswrapper[4865]: I0216 22:50:18.746516 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 22:50:19 crc kubenswrapper[4865]: I0216 22:50:19.756566 4865 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c8eb70ba-80dc-4449-b629-8f2d3462ac25" Feb 16 22:50:19 crc kubenswrapper[4865]: I0216 22:50:19.757026 4865 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c8eb70ba-80dc-4449-b629-8f2d3462ac25" Feb 16 22:50:19 crc kubenswrapper[4865]: I0216 22:50:19.760059 4865 
status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="d159382b-a9ee-4df4-818c-dc8c82ce3dd3" Feb 16 22:50:20 crc kubenswrapper[4865]: I0216 22:50:20.358321 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 22:50:21 crc kubenswrapper[4865]: I0216 22:50:21.931607 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 22:50:21 crc kubenswrapper[4865]: I0216 22:50:21.932240 4865 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 16 22:50:21 crc kubenswrapper[4865]: I0216 22:50:21.932354 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 16 22:50:23 crc kubenswrapper[4865]: E0216 22:50:23.396157 4865 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-pod42505af0_c68a_4a99_926a_25a0dee244de.slice/crio-1be9c253148b729aca4c97cf7e7056b2e15e463e4bdfa57769e09ed98beb7b45\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-pod42505af0_c68a_4a99_926a_25a0dee244de.slice\": RecentStats: unable to find data in memory cache]" Feb 16 22:50:27 crc kubenswrapper[4865]: I0216 22:50:27.328262 4865 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 16 22:50:27 crc kubenswrapper[4865]: I0216 22:50:27.533336 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 16 22:50:27 crc kubenswrapper[4865]: I0216 22:50:27.788579 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 16 22:50:29 crc kubenswrapper[4865]: I0216 22:50:29.281706 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 16 22:50:29 crc kubenswrapper[4865]: I0216 22:50:29.333175 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 16 22:50:29 crc kubenswrapper[4865]: I0216 22:50:29.517043 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 16 22:50:29 crc kubenswrapper[4865]: I0216 22:50:29.796778 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 16 22:50:29 crc kubenswrapper[4865]: I0216 22:50:29.849095 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 16 22:50:30 crc kubenswrapper[4865]: I0216 22:50:30.090047 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 16 22:50:30 crc kubenswrapper[4865]: I0216 22:50:30.209067 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 16 22:50:30 crc kubenswrapper[4865]: I0216 22:50:30.287019 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 16 22:50:30 crc kubenswrapper[4865]: I0216 22:50:30.287265 4865 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 16 22:50:30 crc kubenswrapper[4865]: I0216 22:50:30.290739 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 16 22:50:30 crc kubenswrapper[4865]: I0216 22:50:30.373774 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 16 22:50:30 crc kubenswrapper[4865]: I0216 22:50:30.476116 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 16 22:50:30 crc kubenswrapper[4865]: I0216 22:50:30.493854 4865 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 16 22:50:30 crc kubenswrapper[4865]: I0216 22:50:30.536368 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 16 22:50:30 crc kubenswrapper[4865]: I0216 22:50:30.556353 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 16 22:50:30 crc kubenswrapper[4865]: I0216 22:50:30.664209 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 16 22:50:30 crc kubenswrapper[4865]: I0216 22:50:30.756508 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 16 22:50:30 crc kubenswrapper[4865]: I0216 22:50:30.999567 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 16 22:50:31 crc kubenswrapper[4865]: I0216 22:50:31.027479 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 16 22:50:31 crc 
kubenswrapper[4865]: I0216 22:50:31.053701 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 16 22:50:31 crc kubenswrapper[4865]: I0216 22:50:31.097610 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 16 22:50:31 crc kubenswrapper[4865]: I0216 22:50:31.161528 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 16 22:50:31 crc kubenswrapper[4865]: I0216 22:50:31.227992 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 16 22:50:31 crc kubenswrapper[4865]: I0216 22:50:31.270264 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 16 22:50:31 crc kubenswrapper[4865]: I0216 22:50:31.373748 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 16 22:50:31 crc kubenswrapper[4865]: I0216 22:50:31.397578 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 16 22:50:31 crc kubenswrapper[4865]: I0216 22:50:31.532742 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 16 22:50:31 crc kubenswrapper[4865]: I0216 22:50:31.638801 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 16 22:50:31 crc kubenswrapper[4865]: I0216 22:50:31.692382 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 16 22:50:31 crc kubenswrapper[4865]: I0216 22:50:31.815541 4865 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 16 22:50:31 crc kubenswrapper[4865]: I0216 22:50:31.849999 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 16 22:50:31 crc kubenswrapper[4865]: I0216 22:50:31.932528 4865 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 16 22:50:31 crc kubenswrapper[4865]: I0216 22:50:31.932630 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 16 22:50:32 crc kubenswrapper[4865]: I0216 22:50:32.011479 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 16 22:50:32 crc kubenswrapper[4865]: I0216 22:50:32.049094 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 16 22:50:32 crc kubenswrapper[4865]: I0216 22:50:32.235499 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 16 22:50:32 crc kubenswrapper[4865]: I0216 22:50:32.298043 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 16 22:50:32 crc kubenswrapper[4865]: I0216 22:50:32.429849 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 16 22:50:32 crc kubenswrapper[4865]: I0216 22:50:32.431520 
4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 16 22:50:32 crc kubenswrapper[4865]: I0216 22:50:32.480383 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 16 22:50:32 crc kubenswrapper[4865]: I0216 22:50:32.495724 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 16 22:50:32 crc kubenswrapper[4865]: I0216 22:50:32.548421 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 16 22:50:32 crc kubenswrapper[4865]: I0216 22:50:32.717373 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 16 22:50:32 crc kubenswrapper[4865]: I0216 22:50:32.764995 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 16 22:50:32 crc kubenswrapper[4865]: I0216 22:50:32.765693 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 16 22:50:32 crc kubenswrapper[4865]: I0216 22:50:32.885145 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 16 22:50:33 crc kubenswrapper[4865]: I0216 22:50:33.022394 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 16 22:50:33 crc kubenswrapper[4865]: I0216 22:50:33.042077 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 16 22:50:33 crc kubenswrapper[4865]: I0216 22:50:33.114744 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 16 22:50:33 crc kubenswrapper[4865]: 
I0216 22:50:33.132339 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 16 22:50:33 crc kubenswrapper[4865]: I0216 22:50:33.160644 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 16 22:50:33 crc kubenswrapper[4865]: I0216 22:50:33.276194 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 16 22:50:33 crc kubenswrapper[4865]: I0216 22:50:33.372877 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 16 22:50:33 crc kubenswrapper[4865]: I0216 22:50:33.402974 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 16 22:50:33 crc kubenswrapper[4865]: I0216 22:50:33.411880 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 16 22:50:33 crc kubenswrapper[4865]: I0216 22:50:33.507236 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 16 22:50:33 crc kubenswrapper[4865]: E0216 22:50:33.545336 4865 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-pod42505af0_c68a_4a99_926a_25a0dee244de.slice/crio-1be9c253148b729aca4c97cf7e7056b2e15e463e4bdfa57769e09ed98beb7b45\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-pod42505af0_c68a_4a99_926a_25a0dee244de.slice\": RecentStats: unable to find data in memory cache]" Feb 16 22:50:33 crc kubenswrapper[4865]: I0216 22:50:33.545662 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 16 22:50:33 crc 
kubenswrapper[4865]: I0216 22:50:33.599865 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 16 22:50:33 crc kubenswrapper[4865]: I0216 22:50:33.639547 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 16 22:50:33 crc kubenswrapper[4865]: I0216 22:50:33.681699 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 16 22:50:33 crc kubenswrapper[4865]: I0216 22:50:33.712541 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 16 22:50:33 crc kubenswrapper[4865]: I0216 22:50:33.777269 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 16 22:50:34 crc kubenswrapper[4865]: I0216 22:50:34.016463 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 16 22:50:34 crc kubenswrapper[4865]: I0216 22:50:34.085681 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 16 22:50:34 crc kubenswrapper[4865]: I0216 22:50:34.096127 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 16 22:50:34 crc kubenswrapper[4865]: I0216 22:50:34.115732 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 16 22:50:34 crc kubenswrapper[4865]: I0216 22:50:34.392129 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 16 22:50:34 crc kubenswrapper[4865]: I0216 22:50:34.394922 4865 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 16 22:50:34 crc kubenswrapper[4865]: I0216 22:50:34.443024 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 16 22:50:34 crc kubenswrapper[4865]: I0216 22:50:34.514393 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 16 22:50:34 crc kubenswrapper[4865]: I0216 22:50:34.535799 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 16 22:50:34 crc kubenswrapper[4865]: I0216 22:50:34.558889 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 16 22:50:34 crc kubenswrapper[4865]: I0216 22:50:34.638212 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 16 22:50:34 crc kubenswrapper[4865]: I0216 22:50:34.662154 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 16 22:50:34 crc kubenswrapper[4865]: I0216 22:50:34.855766 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 16 22:50:34 crc kubenswrapper[4865]: I0216 22:50:34.855879 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 16 22:50:34 crc kubenswrapper[4865]: I0216 22:50:34.863947 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 16 22:50:34 crc kubenswrapper[4865]: I0216 22:50:34.881558 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 16 22:50:34 crc 
kubenswrapper[4865]: I0216 22:50:34.953154 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 16 22:50:34 crc kubenswrapper[4865]: I0216 22:50:34.978442 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 16 22:50:34 crc kubenswrapper[4865]: I0216 22:50:34.979008 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 16 22:50:34 crc kubenswrapper[4865]: I0216 22:50:34.984018 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 16 22:50:35 crc kubenswrapper[4865]: I0216 22:50:35.002621 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 16 22:50:35 crc kubenswrapper[4865]: I0216 22:50:35.046303 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 16 22:50:35 crc kubenswrapper[4865]: I0216 22:50:35.073468 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 16 22:50:35 crc kubenswrapper[4865]: I0216 22:50:35.109218 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 16 22:50:35 crc kubenswrapper[4865]: I0216 22:50:35.185336 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 16 22:50:35 crc kubenswrapper[4865]: I0216 22:50:35.232429 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 16 22:50:35 crc kubenswrapper[4865]: I0216 22:50:35.278777 4865 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 16 22:50:35 crc kubenswrapper[4865]: I0216 22:50:35.373439 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 16 22:50:35 crc kubenswrapper[4865]: I0216 22:50:35.557679 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 16 22:50:35 crc kubenswrapper[4865]: I0216 22:50:35.635130 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 16 22:50:35 crc kubenswrapper[4865]: I0216 22:50:35.730892 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 16 22:50:35 crc kubenswrapper[4865]: I0216 22:50:35.747156 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 16 22:50:35 crc kubenswrapper[4865]: I0216 22:50:35.752635 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 16 22:50:35 crc kubenswrapper[4865]: I0216 22:50:35.819855 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 16 22:50:35 crc kubenswrapper[4865]: I0216 22:50:35.877155 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 16 22:50:35 crc kubenswrapper[4865]: I0216 22:50:35.886550 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 16 22:50:36 crc kubenswrapper[4865]: I0216 22:50:35.999994 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 16 22:50:36 crc kubenswrapper[4865]: I0216 22:50:36.000379 4865 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-oauth-apiserver"/"etcd-client" Feb 16 22:50:36 crc kubenswrapper[4865]: I0216 22:50:36.062942 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 16 22:50:36 crc kubenswrapper[4865]: I0216 22:50:36.082627 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 16 22:50:36 crc kubenswrapper[4865]: I0216 22:50:36.105898 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 16 22:50:36 crc kubenswrapper[4865]: I0216 22:50:36.166248 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 16 22:50:36 crc kubenswrapper[4865]: I0216 22:50:36.221265 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 16 22:50:36 crc kubenswrapper[4865]: I0216 22:50:36.257572 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 16 22:50:36 crc kubenswrapper[4865]: I0216 22:50:36.260421 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 16 22:50:36 crc kubenswrapper[4865]: I0216 22:50:36.265130 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 16 22:50:36 crc kubenswrapper[4865]: I0216 22:50:36.298448 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 16 22:50:36 crc kubenswrapper[4865]: I0216 22:50:36.359521 4865 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 16 22:50:36 crc kubenswrapper[4865]: I0216 22:50:36.362109 4865 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=39.362084412 podStartE2EDuration="39.362084412s" podCreationTimestamp="2026-02-16 22:49:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:50:17.837140486 +0000 UTC m=+258.160847437" watchObservedRunningTime="2026-02-16 22:50:36.362084412 +0000 UTC m=+276.685791383" Feb 16 22:50:36 crc kubenswrapper[4865]: I0216 22:50:36.366389 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-558db77b4-675st"] Feb 16 22:50:36 crc kubenswrapper[4865]: I0216 22:50:36.366481 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 16 22:50:36 crc kubenswrapper[4865]: I0216 22:50:36.368957 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 16 22:50:36 crc kubenswrapper[4865]: I0216 22:50:36.373050 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 22:50:36 crc kubenswrapper[4865]: I0216 22:50:36.393536 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=19.393506639 podStartE2EDuration="19.393506639s" podCreationTimestamp="2026-02-16 22:50:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:50:36.392225453 +0000 UTC m=+276.715932444" watchObservedRunningTime="2026-02-16 22:50:36.393506639 +0000 UTC m=+276.717213610" Feb 16 22:50:36 crc kubenswrapper[4865]: I0216 22:50:36.396105 4865 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 16 22:50:36 crc kubenswrapper[4865]: I0216 22:50:36.422849 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25c0299c-b4a2-4c82-881f-808b610fb325" path="/var/lib/kubelet/pods/25c0299c-b4a2-4c82-881f-808b610fb325/volumes" Feb 16 22:50:36 crc kubenswrapper[4865]: I0216 22:50:36.442320 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 16 22:50:36 crc kubenswrapper[4865]: I0216 22:50:36.462766 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 16 22:50:36 crc kubenswrapper[4865]: I0216 22:50:36.471490 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 16 22:50:36 crc kubenswrapper[4865]: I0216 22:50:36.497058 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 16 22:50:36 crc kubenswrapper[4865]: I0216 22:50:36.536223 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 16 22:50:36 crc kubenswrapper[4865]: I0216 22:50:36.572905 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 16 22:50:36 crc kubenswrapper[4865]: I0216 22:50:36.584853 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 16 22:50:36 crc kubenswrapper[4865]: I0216 22:50:36.595616 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 16 22:50:36 crc kubenswrapper[4865]: I0216 22:50:36.647439 4865 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 16 22:50:36 crc kubenswrapper[4865]: I0216 22:50:36.711406 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 16 22:50:36 crc kubenswrapper[4865]: I0216 22:50:36.804940 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 16 22:50:36 crc kubenswrapper[4865]: I0216 22:50:36.947760 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 16 22:50:36 crc kubenswrapper[4865]: I0216 22:50:36.978541 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 16 22:50:37 crc kubenswrapper[4865]: I0216 22:50:37.046464 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 16 22:50:37 crc kubenswrapper[4865]: I0216 22:50:37.224557 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 16 22:50:37 crc kubenswrapper[4865]: I0216 22:50:37.337369 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 16 22:50:37 crc kubenswrapper[4865]: I0216 22:50:37.377469 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 16 22:50:37 crc kubenswrapper[4865]: I0216 22:50:37.437216 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 16 22:50:37 crc kubenswrapper[4865]: I0216 22:50:37.472123 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 16 22:50:37 
crc kubenswrapper[4865]: I0216 22:50:37.548958 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 16 22:50:37 crc kubenswrapper[4865]: I0216 22:50:37.569716 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 16 22:50:37 crc kubenswrapper[4865]: I0216 22:50:37.670818 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 16 22:50:37 crc kubenswrapper[4865]: I0216 22:50:37.685970 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 16 22:50:37 crc kubenswrapper[4865]: I0216 22:50:37.762972 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 16 22:50:37 crc kubenswrapper[4865]: I0216 22:50:37.790784 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 16 22:50:37 crc kubenswrapper[4865]: I0216 22:50:37.845702 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 16 22:50:37 crc kubenswrapper[4865]: I0216 22:50:37.873585 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 16 22:50:37 crc kubenswrapper[4865]: I0216 22:50:37.888818 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 16 22:50:37 crc kubenswrapper[4865]: I0216 22:50:37.910445 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 16 22:50:37 crc kubenswrapper[4865]: I0216 22:50:37.933773 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 16 22:50:37 crc 
kubenswrapper[4865]: I0216 22:50:37.941431 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 16 22:50:38 crc kubenswrapper[4865]: I0216 22:50:38.034405 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 16 22:50:38 crc kubenswrapper[4865]: I0216 22:50:38.048936 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 16 22:50:38 crc kubenswrapper[4865]: I0216 22:50:38.081217 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 16 22:50:38 crc kubenswrapper[4865]: I0216 22:50:38.087733 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 16 22:50:38 crc kubenswrapper[4865]: I0216 22:50:38.274011 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 16 22:50:38 crc kubenswrapper[4865]: I0216 22:50:38.306139 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 16 22:50:38 crc kubenswrapper[4865]: I0216 22:50:38.412566 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 16 22:50:38 crc kubenswrapper[4865]: I0216 22:50:38.492468 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 16 22:50:38 crc kubenswrapper[4865]: I0216 22:50:38.494250 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 16 22:50:38 crc kubenswrapper[4865]: I0216 22:50:38.517921 4865 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 16 22:50:38 crc kubenswrapper[4865]: I0216 22:50:38.578338 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 16 22:50:38 crc kubenswrapper[4865]: I0216 22:50:38.730943 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 16 22:50:38 crc kubenswrapper[4865]: I0216 22:50:38.731357 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 16 22:50:38 crc kubenswrapper[4865]: I0216 22:50:38.756404 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 16 22:50:38 crc kubenswrapper[4865]: I0216 22:50:38.777730 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 16 22:50:38 crc kubenswrapper[4865]: I0216 22:50:38.840266 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 16 22:50:39 crc kubenswrapper[4865]: I0216 22:50:39.011986 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 16 22:50:39 crc kubenswrapper[4865]: I0216 22:50:39.153749 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 16 22:50:39 crc kubenswrapper[4865]: I0216 22:50:39.154911 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 16 22:50:39 crc kubenswrapper[4865]: I0216 22:50:39.271217 4865 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 16 22:50:39 crc kubenswrapper[4865]: I0216 22:50:39.319967 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 16 22:50:39 crc kubenswrapper[4865]: I0216 22:50:39.518200 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 16 22:50:39 crc kubenswrapper[4865]: I0216 22:50:39.601450 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 16 22:50:39 crc kubenswrapper[4865]: I0216 22:50:39.724690 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 16 22:50:39 crc kubenswrapper[4865]: I0216 22:50:39.769961 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 16 22:50:39 crc kubenswrapper[4865]: I0216 22:50:39.785516 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 16 22:50:39 crc kubenswrapper[4865]: I0216 22:50:39.882141 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 16 22:50:39 crc kubenswrapper[4865]: I0216 22:50:39.902000 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 16 22:50:39 crc kubenswrapper[4865]: I0216 22:50:39.941937 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 16 22:50:39 crc kubenswrapper[4865]: I0216 22:50:39.949657 4865 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 16 22:50:39 crc kubenswrapper[4865]: I0216 22:50:39.962824 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 16 22:50:39 crc kubenswrapper[4865]: I0216 22:50:39.966137 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 16 22:50:39 crc kubenswrapper[4865]: I0216 22:50:39.967218 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 16 22:50:40 crc kubenswrapper[4865]: I0216 22:50:40.061539 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 16 22:50:40 crc kubenswrapper[4865]: I0216 22:50:40.091182 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 16 22:50:40 crc kubenswrapper[4865]: I0216 22:50:40.214051 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 16 22:50:40 crc kubenswrapper[4865]: I0216 22:50:40.271850 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 16 22:50:40 crc kubenswrapper[4865]: I0216 22:50:40.275397 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 16 22:50:40 crc kubenswrapper[4865]: I0216 22:50:40.312210 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 16 22:50:40 crc kubenswrapper[4865]: I0216 22:50:40.349585 4865 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 16 22:50:40 crc kubenswrapper[4865]: 
I0216 22:50:40.350038 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://15083fa72a2c5f5b1746ca067e0083785a065e18ff281c809cbbea9434938a74" gracePeriod=5 Feb 16 22:50:40 crc kubenswrapper[4865]: I0216 22:50:40.436657 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 16 22:50:40 crc kubenswrapper[4865]: I0216 22:50:40.457900 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 16 22:50:40 crc kubenswrapper[4865]: I0216 22:50:40.472310 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 16 22:50:40 crc kubenswrapper[4865]: I0216 22:50:40.484111 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 16 22:50:40 crc kubenswrapper[4865]: I0216 22:50:40.531212 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 16 22:50:40 crc kubenswrapper[4865]: I0216 22:50:40.534493 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 16 22:50:40 crc kubenswrapper[4865]: I0216 22:50:40.600025 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 16 22:50:40 crc kubenswrapper[4865]: I0216 22:50:40.743593 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 16 22:50:41 crc kubenswrapper[4865]: I0216 22:50:41.011134 4865 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 16 22:50:41 crc kubenswrapper[4865]: I0216 22:50:41.108077 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 16 22:50:41 crc kubenswrapper[4865]: I0216 22:50:41.165612 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 16 22:50:41 crc kubenswrapper[4865]: I0216 22:50:41.235444 4865 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 16 22:50:41 crc kubenswrapper[4865]: I0216 22:50:41.328124 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 16 22:50:41 crc kubenswrapper[4865]: I0216 22:50:41.361432 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 16 22:50:41 crc kubenswrapper[4865]: I0216 22:50:41.555909 4865 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 16 22:50:41 crc kubenswrapper[4865]: I0216 22:50:41.618809 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 16 22:50:41 crc kubenswrapper[4865]: I0216 22:50:41.708963 4865 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 16 22:50:41 crc kubenswrapper[4865]: I0216 22:50:41.908750 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 16 22:50:41 crc kubenswrapper[4865]: I0216 22:50:41.931212 4865 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get 
\"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 16 22:50:41 crc kubenswrapper[4865]: I0216 22:50:41.931266 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 16 22:50:41 crc kubenswrapper[4865]: I0216 22:50:41.931342 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 22:50:41 crc kubenswrapper[4865]: I0216 22:50:41.932326 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"820f60afd9df034363381730b27a77f78012f9de3fb3ec5a9a8efd3670e9744c"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Feb 16 22:50:41 crc kubenswrapper[4865]: I0216 22:50:41.932460 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://820f60afd9df034363381730b27a77f78012f9de3fb3ec5a9a8efd3670e9744c" gracePeriod=30 Feb 16 22:50:41 crc kubenswrapper[4865]: I0216 22:50:41.956206 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 16 22:50:41 crc kubenswrapper[4865]: I0216 22:50:41.985990 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 16 22:50:42 crc kubenswrapper[4865]: 
I0216 22:50:42.083217 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 16 22:50:42 crc kubenswrapper[4865]: I0216 22:50:42.316795 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 16 22:50:42 crc kubenswrapper[4865]: I0216 22:50:42.363108 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 16 22:50:42 crc kubenswrapper[4865]: I0216 22:50:42.415124 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 16 22:50:42 crc kubenswrapper[4865]: I0216 22:50:42.441363 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 16 22:50:42 crc kubenswrapper[4865]: I0216 22:50:42.598669 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 16 22:50:42 crc kubenswrapper[4865]: I0216 22:50:42.794431 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 16 22:50:42 crc kubenswrapper[4865]: I0216 22:50:42.844542 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 16 22:50:42 crc kubenswrapper[4865]: I0216 22:50:42.897472 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 16 22:50:42 crc kubenswrapper[4865]: I0216 22:50:42.901142 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 16 22:50:42 crc kubenswrapper[4865]: I0216 22:50:42.930557 4865 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-multus"/"multus-daemon-config" Feb 16 22:50:42 crc kubenswrapper[4865]: I0216 22:50:42.958760 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 16 22:50:42 crc kubenswrapper[4865]: I0216 22:50:42.973911 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 16 22:50:42 crc kubenswrapper[4865]: I0216 22:50:42.992080 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 16 22:50:43 crc kubenswrapper[4865]: I0216 22:50:43.090449 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 16 22:50:43 crc kubenswrapper[4865]: I0216 22:50:43.236047 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 16 22:50:43 crc kubenswrapper[4865]: I0216 22:50:43.382498 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 16 22:50:43 crc kubenswrapper[4865]: I0216 22:50:43.468355 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 16 22:50:43 crc kubenswrapper[4865]: I0216 22:50:43.495449 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 16 22:50:43 crc kubenswrapper[4865]: E0216 22:50:43.701347 4865 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-pod42505af0_c68a_4a99_926a_25a0dee244de.slice\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-pod42505af0_c68a_4a99_926a_25a0dee244de.slice/crio-1be9c253148b729aca4c97cf7e7056b2e15e463e4bdfa57769e09ed98beb7b45\": RecentStats: unable to find data in memory cache]" Feb 16 22:50:43 crc kubenswrapper[4865]: I0216 22:50:43.713608 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 16 22:50:43 crc kubenswrapper[4865]: I0216 22:50:43.942529 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 16 22:50:44 crc kubenswrapper[4865]: I0216 22:50:44.012027 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 16 22:50:44 crc kubenswrapper[4865]: I0216 22:50:44.032884 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 16 22:50:44 crc kubenswrapper[4865]: I0216 22:50:44.050085 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 16 22:50:44 crc kubenswrapper[4865]: I0216 22:50:44.923669 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-68974c876c-xwnx2"] Feb 16 22:50:44 crc kubenswrapper[4865]: E0216 22:50:44.924059 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 16 22:50:44 crc kubenswrapper[4865]: I0216 22:50:44.924084 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 16 22:50:44 crc kubenswrapper[4865]: E0216 22:50:44.924111 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42505af0-c68a-4a99-926a-25a0dee244de" containerName="installer" Feb 16 22:50:44 crc kubenswrapper[4865]: I0216 22:50:44.924124 4865 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="42505af0-c68a-4a99-926a-25a0dee244de" containerName="installer" Feb 16 22:50:44 crc kubenswrapper[4865]: E0216 22:50:44.924145 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25c0299c-b4a2-4c82-881f-808b610fb325" containerName="oauth-openshift" Feb 16 22:50:44 crc kubenswrapper[4865]: I0216 22:50:44.924160 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="25c0299c-b4a2-4c82-881f-808b610fb325" containerName="oauth-openshift" Feb 16 22:50:44 crc kubenswrapper[4865]: I0216 22:50:44.924374 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="42505af0-c68a-4a99-926a-25a0dee244de" containerName="installer" Feb 16 22:50:44 crc kubenswrapper[4865]: I0216 22:50:44.924391 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="25c0299c-b4a2-4c82-881f-808b610fb325" containerName="oauth-openshift" Feb 16 22:50:44 crc kubenswrapper[4865]: I0216 22:50:44.924409 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 16 22:50:44 crc kubenswrapper[4865]: I0216 22:50:44.925063 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-68974c876c-xwnx2" Feb 16 22:50:44 crc kubenswrapper[4865]: I0216 22:50:44.929371 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 16 22:50:44 crc kubenswrapper[4865]: I0216 22:50:44.929386 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 16 22:50:44 crc kubenswrapper[4865]: I0216 22:50:44.930319 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 16 22:50:44 crc kubenswrapper[4865]: I0216 22:50:44.930323 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 16 22:50:44 crc kubenswrapper[4865]: I0216 22:50:44.933814 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 16 22:50:44 crc kubenswrapper[4865]: I0216 22:50:44.934189 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 16 22:50:44 crc kubenswrapper[4865]: I0216 22:50:44.934502 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 16 22:50:44 crc kubenswrapper[4865]: I0216 22:50:44.934719 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 16 22:50:44 crc kubenswrapper[4865]: I0216 22:50:44.935126 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 16 22:50:44 crc kubenswrapper[4865]: I0216 22:50:44.935883 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 16 22:50:44 
crc kubenswrapper[4865]: I0216 22:50:44.936429 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ab366b4d-d341-4048-8415-b71212a4e451-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-68974c876c-xwnx2\" (UID: \"ab366b4d-d341-4048-8415-b71212a4e451\") " pod="openshift-authentication/oauth-openshift-68974c876c-xwnx2" Feb 16 22:50:44 crc kubenswrapper[4865]: I0216 22:50:44.936482 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab366b4d-d341-4048-8415-b71212a4e451-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-68974c876c-xwnx2\" (UID: \"ab366b4d-d341-4048-8415-b71212a4e451\") " pod="openshift-authentication/oauth-openshift-68974c876c-xwnx2" Feb 16 22:50:44 crc kubenswrapper[4865]: I0216 22:50:44.936516 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ab366b4d-d341-4048-8415-b71212a4e451-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-68974c876c-xwnx2\" (UID: \"ab366b4d-d341-4048-8415-b71212a4e451\") " pod="openshift-authentication/oauth-openshift-68974c876c-xwnx2" Feb 16 22:50:44 crc kubenswrapper[4865]: I0216 22:50:44.936541 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ab366b4d-d341-4048-8415-b71212a4e451-v4-0-config-system-serving-cert\") pod \"oauth-openshift-68974c876c-xwnx2\" (UID: \"ab366b4d-d341-4048-8415-b71212a4e451\") " pod="openshift-authentication/oauth-openshift-68974c876c-xwnx2" Feb 16 22:50:44 crc kubenswrapper[4865]: I0216 22:50:44.936576 4865 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ab366b4d-d341-4048-8415-b71212a4e451-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-68974c876c-xwnx2\" (UID: \"ab366b4d-d341-4048-8415-b71212a4e451\") " pod="openshift-authentication/oauth-openshift-68974c876c-xwnx2" Feb 16 22:50:44 crc kubenswrapper[4865]: I0216 22:50:44.936738 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ab366b4d-d341-4048-8415-b71212a4e451-v4-0-config-system-session\") pod \"oauth-openshift-68974c876c-xwnx2\" (UID: \"ab366b4d-d341-4048-8415-b71212a4e451\") " pod="openshift-authentication/oauth-openshift-68974c876c-xwnx2" Feb 16 22:50:44 crc kubenswrapper[4865]: I0216 22:50:44.936875 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 16 22:50:44 crc kubenswrapper[4865]: I0216 22:50:44.936884 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ab366b4d-d341-4048-8415-b71212a4e451-v4-0-config-system-cliconfig\") pod \"oauth-openshift-68974c876c-xwnx2\" (UID: \"ab366b4d-d341-4048-8415-b71212a4e451\") " pod="openshift-authentication/oauth-openshift-68974c876c-xwnx2" Feb 16 22:50:44 crc kubenswrapper[4865]: I0216 22:50:44.936933 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ab366b4d-d341-4048-8415-b71212a4e451-audit-dir\") pod \"oauth-openshift-68974c876c-xwnx2\" (UID: \"ab366b4d-d341-4048-8415-b71212a4e451\") " pod="openshift-authentication/oauth-openshift-68974c876c-xwnx2" Feb 16 22:50:44 crc kubenswrapper[4865]: I0216 22:50:44.936982 4865 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ab366b4d-d341-4048-8415-b71212a4e451-v4-0-config-system-router-certs\") pod \"oauth-openshift-68974c876c-xwnx2\" (UID: \"ab366b4d-d341-4048-8415-b71212a4e451\") " pod="openshift-authentication/oauth-openshift-68974c876c-xwnx2" Feb 16 22:50:44 crc kubenswrapper[4865]: I0216 22:50:44.937021 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ab366b4d-d341-4048-8415-b71212a4e451-audit-policies\") pod \"oauth-openshift-68974c876c-xwnx2\" (UID: \"ab366b4d-d341-4048-8415-b71212a4e451\") " pod="openshift-authentication/oauth-openshift-68974c876c-xwnx2" Feb 16 22:50:44 crc kubenswrapper[4865]: I0216 22:50:44.937073 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ab366b4d-d341-4048-8415-b71212a4e451-v4-0-config-system-service-ca\") pod \"oauth-openshift-68974c876c-xwnx2\" (UID: \"ab366b4d-d341-4048-8415-b71212a4e451\") " pod="openshift-authentication/oauth-openshift-68974c876c-xwnx2" Feb 16 22:50:44 crc kubenswrapper[4865]: I0216 22:50:44.937133 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ab366b4d-d341-4048-8415-b71212a4e451-v4-0-config-user-template-error\") pod \"oauth-openshift-68974c876c-xwnx2\" (UID: \"ab366b4d-d341-4048-8415-b71212a4e451\") " pod="openshift-authentication/oauth-openshift-68974c876c-xwnx2" Feb 16 22:50:44 crc kubenswrapper[4865]: I0216 22:50:44.937187 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5czx\" (UniqueName: 
\"kubernetes.io/projected/ab366b4d-d341-4048-8415-b71212a4e451-kube-api-access-b5czx\") pod \"oauth-openshift-68974c876c-xwnx2\" (UID: \"ab366b4d-d341-4048-8415-b71212a4e451\") " pod="openshift-authentication/oauth-openshift-68974c876c-xwnx2" Feb 16 22:50:44 crc kubenswrapper[4865]: I0216 22:50:44.937234 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ab366b4d-d341-4048-8415-b71212a4e451-v4-0-config-user-template-login\") pod \"oauth-openshift-68974c876c-xwnx2\" (UID: \"ab366b4d-d341-4048-8415-b71212a4e451\") " pod="openshift-authentication/oauth-openshift-68974c876c-xwnx2" Feb 16 22:50:44 crc kubenswrapper[4865]: I0216 22:50:44.937234 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 16 22:50:44 crc kubenswrapper[4865]: I0216 22:50:44.948186 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-68974c876c-xwnx2"] Feb 16 22:50:44 crc kubenswrapper[4865]: I0216 22:50:44.949143 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 16 22:50:44 crc kubenswrapper[4865]: I0216 22:50:44.952900 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 16 22:50:44 crc kubenswrapper[4865]: I0216 22:50:44.965372 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 16 22:50:44 crc kubenswrapper[4865]: I0216 22:50:44.990415 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 16 22:50:45 crc kubenswrapper[4865]: I0216 22:50:45.044642 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ab366b4d-d341-4048-8415-b71212a4e451-v4-0-config-user-template-error\") pod \"oauth-openshift-68974c876c-xwnx2\" (UID: \"ab366b4d-d341-4048-8415-b71212a4e451\") " pod="openshift-authentication/oauth-openshift-68974c876c-xwnx2" Feb 16 22:50:45 crc kubenswrapper[4865]: I0216 22:50:45.044809 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ab366b4d-d341-4048-8415-b71212a4e451-v4-0-config-user-template-login\") pod \"oauth-openshift-68974c876c-xwnx2\" (UID: \"ab366b4d-d341-4048-8415-b71212a4e451\") " pod="openshift-authentication/oauth-openshift-68974c876c-xwnx2" Feb 16 22:50:45 crc kubenswrapper[4865]: I0216 22:50:45.044878 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5czx\" (UniqueName: \"kubernetes.io/projected/ab366b4d-d341-4048-8415-b71212a4e451-kube-api-access-b5czx\") pod \"oauth-openshift-68974c876c-xwnx2\" (UID: \"ab366b4d-d341-4048-8415-b71212a4e451\") " pod="openshift-authentication/oauth-openshift-68974c876c-xwnx2" Feb 16 22:50:45 crc kubenswrapper[4865]: I0216 22:50:45.045003 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ab366b4d-d341-4048-8415-b71212a4e451-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-68974c876c-xwnx2\" (UID: \"ab366b4d-d341-4048-8415-b71212a4e451\") " pod="openshift-authentication/oauth-openshift-68974c876c-xwnx2" Feb 16 22:50:45 crc kubenswrapper[4865]: I0216 22:50:45.045123 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab366b4d-d341-4048-8415-b71212a4e451-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-68974c876c-xwnx2\" (UID: 
\"ab366b4d-d341-4048-8415-b71212a4e451\") " pod="openshift-authentication/oauth-openshift-68974c876c-xwnx2" Feb 16 22:50:45 crc kubenswrapper[4865]: I0216 22:50:45.045213 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ab366b4d-d341-4048-8415-b71212a4e451-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-68974c876c-xwnx2\" (UID: \"ab366b4d-d341-4048-8415-b71212a4e451\") " pod="openshift-authentication/oauth-openshift-68974c876c-xwnx2" Feb 16 22:50:45 crc kubenswrapper[4865]: I0216 22:50:45.045328 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ab366b4d-d341-4048-8415-b71212a4e451-v4-0-config-system-serving-cert\") pod \"oauth-openshift-68974c876c-xwnx2\" (UID: \"ab366b4d-d341-4048-8415-b71212a4e451\") " pod="openshift-authentication/oauth-openshift-68974c876c-xwnx2" Feb 16 22:50:45 crc kubenswrapper[4865]: I0216 22:50:45.045540 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ab366b4d-d341-4048-8415-b71212a4e451-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-68974c876c-xwnx2\" (UID: \"ab366b4d-d341-4048-8415-b71212a4e451\") " pod="openshift-authentication/oauth-openshift-68974c876c-xwnx2" Feb 16 22:50:45 crc kubenswrapper[4865]: I0216 22:50:45.045673 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ab366b4d-d341-4048-8415-b71212a4e451-v4-0-config-system-session\") pod \"oauth-openshift-68974c876c-xwnx2\" (UID: \"ab366b4d-d341-4048-8415-b71212a4e451\") " pod="openshift-authentication/oauth-openshift-68974c876c-xwnx2" Feb 16 22:50:45 crc kubenswrapper[4865]: I0216 22:50:45.045765 4865 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ab366b4d-d341-4048-8415-b71212a4e451-v4-0-config-system-cliconfig\") pod \"oauth-openshift-68974c876c-xwnx2\" (UID: \"ab366b4d-d341-4048-8415-b71212a4e451\") " pod="openshift-authentication/oauth-openshift-68974c876c-xwnx2" Feb 16 22:50:45 crc kubenswrapper[4865]: I0216 22:50:45.045827 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ab366b4d-d341-4048-8415-b71212a4e451-audit-dir\") pod \"oauth-openshift-68974c876c-xwnx2\" (UID: \"ab366b4d-d341-4048-8415-b71212a4e451\") " pod="openshift-authentication/oauth-openshift-68974c876c-xwnx2" Feb 16 22:50:45 crc kubenswrapper[4865]: I0216 22:50:45.045899 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ab366b4d-d341-4048-8415-b71212a4e451-v4-0-config-system-router-certs\") pod \"oauth-openshift-68974c876c-xwnx2\" (UID: \"ab366b4d-d341-4048-8415-b71212a4e451\") " pod="openshift-authentication/oauth-openshift-68974c876c-xwnx2" Feb 16 22:50:45 crc kubenswrapper[4865]: I0216 22:50:45.045966 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ab366b4d-d341-4048-8415-b71212a4e451-audit-policies\") pod \"oauth-openshift-68974c876c-xwnx2\" (UID: \"ab366b4d-d341-4048-8415-b71212a4e451\") " pod="openshift-authentication/oauth-openshift-68974c876c-xwnx2" Feb 16 22:50:45 crc kubenswrapper[4865]: I0216 22:50:45.046006 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ab366b4d-d341-4048-8415-b71212a4e451-v4-0-config-system-service-ca\") pod \"oauth-openshift-68974c876c-xwnx2\" (UID: \"ab366b4d-d341-4048-8415-b71212a4e451\") " 
pod="openshift-authentication/oauth-openshift-68974c876c-xwnx2" Feb 16 22:50:45 crc kubenswrapper[4865]: I0216 22:50:45.047503 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ab366b4d-d341-4048-8415-b71212a4e451-audit-dir\") pod \"oauth-openshift-68974c876c-xwnx2\" (UID: \"ab366b4d-d341-4048-8415-b71212a4e451\") " pod="openshift-authentication/oauth-openshift-68974c876c-xwnx2" Feb 16 22:50:45 crc kubenswrapper[4865]: I0216 22:50:45.051095 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ab366b4d-d341-4048-8415-b71212a4e451-v4-0-config-system-cliconfig\") pod \"oauth-openshift-68974c876c-xwnx2\" (UID: \"ab366b4d-d341-4048-8415-b71212a4e451\") " pod="openshift-authentication/oauth-openshift-68974c876c-xwnx2" Feb 16 22:50:45 crc kubenswrapper[4865]: I0216 22:50:45.052937 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ab366b4d-d341-4048-8415-b71212a4e451-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-68974c876c-xwnx2\" (UID: \"ab366b4d-d341-4048-8415-b71212a4e451\") " pod="openshift-authentication/oauth-openshift-68974c876c-xwnx2" Feb 16 22:50:45 crc kubenswrapper[4865]: I0216 22:50:45.053179 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ab366b4d-d341-4048-8415-b71212a4e451-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-68974c876c-xwnx2\" (UID: \"ab366b4d-d341-4048-8415-b71212a4e451\") " pod="openshift-authentication/oauth-openshift-68974c876c-xwnx2" Feb 16 22:50:45 crc kubenswrapper[4865]: I0216 22:50:45.054750 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/ab366b4d-d341-4048-8415-b71212a4e451-v4-0-config-user-template-error\") pod \"oauth-openshift-68974c876c-xwnx2\" (UID: \"ab366b4d-d341-4048-8415-b71212a4e451\") " pod="openshift-authentication/oauth-openshift-68974c876c-xwnx2" Feb 16 22:50:45 crc kubenswrapper[4865]: I0216 22:50:45.054803 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab366b4d-d341-4048-8415-b71212a4e451-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-68974c876c-xwnx2\" (UID: \"ab366b4d-d341-4048-8415-b71212a4e451\") " pod="openshift-authentication/oauth-openshift-68974c876c-xwnx2" Feb 16 22:50:45 crc kubenswrapper[4865]: I0216 22:50:45.055077 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ab366b4d-d341-4048-8415-b71212a4e451-v4-0-config-system-service-ca\") pod \"oauth-openshift-68974c876c-xwnx2\" (UID: \"ab366b4d-d341-4048-8415-b71212a4e451\") " pod="openshift-authentication/oauth-openshift-68974c876c-xwnx2" Feb 16 22:50:45 crc kubenswrapper[4865]: I0216 22:50:45.055163 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ab366b4d-d341-4048-8415-b71212a4e451-v4-0-config-system-serving-cert\") pod \"oauth-openshift-68974c876c-xwnx2\" (UID: \"ab366b4d-d341-4048-8415-b71212a4e451\") " pod="openshift-authentication/oauth-openshift-68974c876c-xwnx2" Feb 16 22:50:45 crc kubenswrapper[4865]: I0216 22:50:45.055688 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ab366b4d-d341-4048-8415-b71212a4e451-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-68974c876c-xwnx2\" (UID: \"ab366b4d-d341-4048-8415-b71212a4e451\") " 
pod="openshift-authentication/oauth-openshift-68974c876c-xwnx2" Feb 16 22:50:45 crc kubenswrapper[4865]: I0216 22:50:45.056304 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ab366b4d-d341-4048-8415-b71212a4e451-v4-0-config-system-session\") pod \"oauth-openshift-68974c876c-xwnx2\" (UID: \"ab366b4d-d341-4048-8415-b71212a4e451\") " pod="openshift-authentication/oauth-openshift-68974c876c-xwnx2" Feb 16 22:50:45 crc kubenswrapper[4865]: I0216 22:50:45.067844 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ab366b4d-d341-4048-8415-b71212a4e451-audit-policies\") pod \"oauth-openshift-68974c876c-xwnx2\" (UID: \"ab366b4d-d341-4048-8415-b71212a4e451\") " pod="openshift-authentication/oauth-openshift-68974c876c-xwnx2" Feb 16 22:50:45 crc kubenswrapper[4865]: I0216 22:50:45.068607 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ab366b4d-d341-4048-8415-b71212a4e451-v4-0-config-system-router-certs\") pod \"oauth-openshift-68974c876c-xwnx2\" (UID: \"ab366b4d-d341-4048-8415-b71212a4e451\") " pod="openshift-authentication/oauth-openshift-68974c876c-xwnx2" Feb 16 22:50:45 crc kubenswrapper[4865]: I0216 22:50:45.068790 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ab366b4d-d341-4048-8415-b71212a4e451-v4-0-config-user-template-login\") pod \"oauth-openshift-68974c876c-xwnx2\" (UID: \"ab366b4d-d341-4048-8415-b71212a4e451\") " pod="openshift-authentication/oauth-openshift-68974c876c-xwnx2" Feb 16 22:50:45 crc kubenswrapper[4865]: I0216 22:50:45.072408 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5czx\" (UniqueName: 
\"kubernetes.io/projected/ab366b4d-d341-4048-8415-b71212a4e451-kube-api-access-b5czx\") pod \"oauth-openshift-68974c876c-xwnx2\" (UID: \"ab366b4d-d341-4048-8415-b71212a4e451\") " pod="openshift-authentication/oauth-openshift-68974c876c-xwnx2" Feb 16 22:50:45 crc kubenswrapper[4865]: I0216 22:50:45.275195 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-68974c876c-xwnx2" Feb 16 22:50:45 crc kubenswrapper[4865]: I0216 22:50:45.455423 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 16 22:50:45 crc kubenswrapper[4865]: I0216 22:50:45.557576 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 16 22:50:45 crc kubenswrapper[4865]: I0216 22:50:45.590605 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-68974c876c-xwnx2"] Feb 16 22:50:45 crc kubenswrapper[4865]: I0216 22:50:45.632487 4865 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 16 22:50:45 crc kubenswrapper[4865]: I0216 22:50:45.829437 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 16 22:50:45 crc kubenswrapper[4865]: I0216 22:50:45.938338 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 16 22:50:45 crc kubenswrapper[4865]: I0216 22:50:45.938557 4865 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="15083fa72a2c5f5b1746ca067e0083785a065e18ff281c809cbbea9434938a74" exitCode=137 Feb 16 22:50:45 crc kubenswrapper[4865]: I0216 22:50:45.942597 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication/oauth-openshift-68974c876c-xwnx2" event={"ID":"ab366b4d-d341-4048-8415-b71212a4e451","Type":"ContainerStarted","Data":"bffe1695b9b68c7c0c1fa36a9dab3cb2c78a0a13e019ac585058f80e2681c9f8"} Feb 16 22:50:45 crc kubenswrapper[4865]: I0216 22:50:45.943110 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-68974c876c-xwnx2" Feb 16 22:50:45 crc kubenswrapper[4865]: I0216 22:50:45.943132 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-68974c876c-xwnx2" event={"ID":"ab366b4d-d341-4048-8415-b71212a4e451","Type":"ContainerStarted","Data":"e5a18e94b3fcce80697dd6f9cff79c902cc5942c004636bed30a8f0b05d6f649"} Feb 16 22:50:45 crc kubenswrapper[4865]: I0216 22:50:45.946636 4865 patch_prober.go:28] interesting pod/oauth-openshift-68974c876c-xwnx2 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.64:6443/healthz\": dial tcp 10.217.0.64:6443: connect: connection refused" start-of-body= Feb 16 22:50:45 crc kubenswrapper[4865]: I0216 22:50:45.946736 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-68974c876c-xwnx2" podUID="ab366b4d-d341-4048-8415-b71212a4e451" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.64:6443/healthz\": dial tcp 10.217.0.64:6443: connect: connection refused" Feb 16 22:50:45 crc kubenswrapper[4865]: I0216 22:50:45.961745 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 16 22:50:45 crc kubenswrapper[4865]: I0216 22:50:45.961928 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 22:50:45 crc kubenswrapper[4865]: I0216 22:50:45.982377 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-68974c876c-xwnx2" podStartSLOduration=63.982356395 podStartE2EDuration="1m3.982356395s" podCreationTimestamp="2026-02-16 22:49:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:50:45.977787626 +0000 UTC m=+286.301494587" watchObservedRunningTime="2026-02-16 22:50:45.982356395 +0000 UTC m=+286.306063356" Feb 16 22:50:46 crc kubenswrapper[4865]: I0216 22:50:46.173360 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 16 22:50:46 crc kubenswrapper[4865]: I0216 22:50:46.173464 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 16 22:50:46 crc kubenswrapper[4865]: I0216 22:50:46.173517 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 16 22:50:46 crc kubenswrapper[4865]: I0216 22:50:46.173589 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 16 22:50:46 crc kubenswrapper[4865]: I0216 22:50:46.173675 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 16 22:50:46 crc kubenswrapper[4865]: I0216 22:50:46.174197 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 22:50:46 crc kubenswrapper[4865]: I0216 22:50:46.174329 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 22:50:46 crc kubenswrapper[4865]: I0216 22:50:46.174399 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 22:50:46 crc kubenswrapper[4865]: I0216 22:50:46.174449 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 22:50:46 crc kubenswrapper[4865]: I0216 22:50:46.190617 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 22:50:46 crc kubenswrapper[4865]: I0216 22:50:46.275020 4865 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 16 22:50:46 crc kubenswrapper[4865]: I0216 22:50:46.275102 4865 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 16 22:50:46 crc kubenswrapper[4865]: I0216 22:50:46.275123 4865 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 16 22:50:46 crc kubenswrapper[4865]: I0216 22:50:46.275145 4865 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 16 22:50:46 crc kubenswrapper[4865]: I0216 22:50:46.275166 4865 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 16 22:50:46 crc kubenswrapper[4865]: I0216 22:50:46.427704 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 16 22:50:46 crc kubenswrapper[4865]: I0216 22:50:46.428634 4865 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Feb 16 22:50:46 crc kubenswrapper[4865]: I0216 22:50:46.446442 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 16 22:50:46 crc kubenswrapper[4865]: I0216 22:50:46.446513 4865 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="dc986b52-9412-46dc-b558-c5cb2a4a8d77" Feb 16 22:50:46 crc kubenswrapper[4865]: I0216 22:50:46.453958 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 16 22:50:46 crc kubenswrapper[4865]: I0216 22:50:46.454027 4865 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="dc986b52-9412-46dc-b558-c5cb2a4a8d77" Feb 16 22:50:46 crc kubenswrapper[4865]: I0216 22:50:46.468084 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 16 22:50:46 crc kubenswrapper[4865]: I0216 22:50:46.889233 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 16 22:50:46 crc kubenswrapper[4865]: I0216 22:50:46.953633 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 16 22:50:46 crc kubenswrapper[4865]: I0216 22:50:46.954016 4865 scope.go:117] "RemoveContainer" containerID="15083fa72a2c5f5b1746ca067e0083785a065e18ff281c809cbbea9434938a74" Feb 16 22:50:46 crc kubenswrapper[4865]: I0216 22:50:46.954089 4865 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 22:50:46 crc kubenswrapper[4865]: I0216 22:50:46.963537 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-68974c876c-xwnx2" Feb 16 22:50:47 crc kubenswrapper[4865]: I0216 22:50:47.056672 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 16 22:50:47 crc kubenswrapper[4865]: I0216 22:50:47.351092 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 16 22:50:47 crc kubenswrapper[4865]: I0216 22:50:47.981080 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 16 22:50:53 crc kubenswrapper[4865]: E0216 22:50:53.829938 4865 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-pod42505af0_c68a_4a99_926a_25a0dee244de.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-pod42505af0_c68a_4a99_926a_25a0dee244de.slice/crio-1be9c253148b729aca4c97cf7e7056b2e15e463e4bdfa57769e09ed98beb7b45\": RecentStats: unable to find data in memory cache]" Feb 16 22:50:59 crc kubenswrapper[4865]: I0216 22:50:59.061143 4865 generic.go:334] "Generic (PLEG): container finished" podID="99f280f3-e7be-4a87-b8a9-b097ab14d671" containerID="f8d9cdc07d925d2d67107ff20b1c5026902e3bb3ee16dc9f98d8aa97473829cc" exitCode=0 Feb 16 22:50:59 crc kubenswrapper[4865]: I0216 22:50:59.061327 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xrmqm" event={"ID":"99f280f3-e7be-4a87-b8a9-b097ab14d671","Type":"ContainerDied","Data":"f8d9cdc07d925d2d67107ff20b1c5026902e3bb3ee16dc9f98d8aa97473829cc"} Feb 16 22:50:59 
crc kubenswrapper[4865]: I0216 22:50:59.062446 4865 scope.go:117] "RemoveContainer" containerID="f8d9cdc07d925d2d67107ff20b1c5026902e3bb3ee16dc9f98d8aa97473829cc" Feb 16 22:51:00 crc kubenswrapper[4865]: I0216 22:51:00.074797 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xrmqm" event={"ID":"99f280f3-e7be-4a87-b8a9-b097ab14d671","Type":"ContainerStarted","Data":"99a36e9d9235ff75cd814d2a575670bf5325eba0a9a37611309ea376ce745c6e"} Feb 16 22:51:00 crc kubenswrapper[4865]: I0216 22:51:00.075669 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-xrmqm" Feb 16 22:51:00 crc kubenswrapper[4865]: I0216 22:51:00.079980 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-xrmqm" Feb 16 22:51:00 crc kubenswrapper[4865]: I0216 22:51:00.199390 4865 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Feb 16 22:51:12 crc kubenswrapper[4865]: I0216 22:51:12.166266 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Feb 16 22:51:12 crc kubenswrapper[4865]: I0216 22:51:12.168334 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 16 22:51:12 crc kubenswrapper[4865]: I0216 22:51:12.168387 4865 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="820f60afd9df034363381730b27a77f78012f9de3fb3ec5a9a8efd3670e9744c" exitCode=137 Feb 16 22:51:12 crc kubenswrapper[4865]: I0216 22:51:12.168435 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"820f60afd9df034363381730b27a77f78012f9de3fb3ec5a9a8efd3670e9744c"} Feb 16 22:51:12 crc kubenswrapper[4865]: I0216 22:51:12.168494 4865 scope.go:117] "RemoveContainer" containerID="ef4fbb630ee354131544dade7633424757a1a1b885fec2e271da25984dbec3ac" Feb 16 22:51:13 crc kubenswrapper[4865]: I0216 22:51:13.182990 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Feb 16 22:51:13 crc kubenswrapper[4865]: I0216 22:51:13.185248 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"abd243c0eca657bbc2880982141e628a8daa2cf72391ce5433411e844ab5a26d"} Feb 16 22:51:20 crc kubenswrapper[4865]: I0216 22:51:20.358416 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 22:51:21 crc kubenswrapper[4865]: I0216 22:51:21.931191 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 22:51:21 crc kubenswrapper[4865]: I0216 22:51:21.937776 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 22:51:22 crc kubenswrapper[4865]: I0216 22:51:22.257923 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 22:51:45 crc kubenswrapper[4865]: I0216 22:51:45.665063 4865 patch_prober.go:28] interesting pod/machine-config-daemon-7sl6f container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 22:51:45 crc kubenswrapper[4865]: I0216 22:51:45.665900 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 22:51:45 crc kubenswrapper[4865]: I0216 22:51:45.792872 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d2l24"] Feb 16 22:51:45 crc kubenswrapper[4865]: I0216 22:51:45.793269 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-d2l24" podUID="5196bfb6-4d27-4d41-8310-8efb2b8997bd" containerName="registry-server" containerID="cri-o://8756846c21c208ebd1b151e59120d3aa46b3d44f60e55ad3c24218c3544e0415" gracePeriod=30 Feb 16 22:51:45 crc kubenswrapper[4865]: I0216 22:51:45.817016 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s2z5v"] Feb 16 22:51:45 crc kubenswrapper[4865]: I0216 22:51:45.817547 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-s2z5v" podUID="f6cfa25f-5974-4b2e-9df0-b0e98112b561" containerName="registry-server" containerID="cri-o://19494448568ecc5e6314fe13f158c1bbd21c90760cd8e03724a29669de0e14d0" gracePeriod=30 Feb 16 22:51:45 crc kubenswrapper[4865]: I0216 22:51:45.833858 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xrmqm"] Feb 16 22:51:45 crc kubenswrapper[4865]: I0216 22:51:45.833937 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-pfrgn"] Feb 16 22:51:45 crc kubenswrapper[4865]: I0216 22:51:45.834269 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-xrmqm" podUID="99f280f3-e7be-4a87-b8a9-b097ab14d671" containerName="marketplace-operator" containerID="cri-o://99a36e9d9235ff75cd814d2a575670bf5325eba0a9a37611309ea376ce745c6e" gracePeriod=30 Feb 16 22:51:45 crc kubenswrapper[4865]: I0216 22:51:45.839380 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pggsl"] Feb 16 22:51:45 crc kubenswrapper[4865]: I0216 22:51:45.840723 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-pggsl" Feb 16 22:51:45 crc kubenswrapper[4865]: I0216 22:51:45.843105 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fsmg7"] Feb 16 22:51:45 crc kubenswrapper[4865]: I0216 22:51:45.843549 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fsmg7" podUID="bea6b458-5aaa-4764-9f82-24ceff943498" containerName="registry-server" containerID="cri-o://7c735ebe3ec4c759f6b6e7b5802ade6e0366cfde510e030fe018db6def4b8e50" gracePeriod=30 Feb 16 22:51:45 crc kubenswrapper[4865]: I0216 22:51:45.864778 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pggsl"] Feb 16 22:51:45 crc kubenswrapper[4865]: I0216 22:51:45.981579 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/91cc827b-b0d7-49d3-8c52-99670081f857-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-pggsl\" (UID: \"91cc827b-b0d7-49d3-8c52-99670081f857\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-pggsl" Feb 16 22:51:45 crc kubenswrapper[4865]: I0216 22:51:45.981621 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rh62l\" (UniqueName: \"kubernetes.io/projected/91cc827b-b0d7-49d3-8c52-99670081f857-kube-api-access-rh62l\") pod \"marketplace-operator-79b997595-pggsl\" (UID: \"91cc827b-b0d7-49d3-8c52-99670081f857\") " pod="openshift-marketplace/marketplace-operator-79b997595-pggsl" Feb 16 22:51:45 crc kubenswrapper[4865]: I0216 22:51:45.981656 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/91cc827b-b0d7-49d3-8c52-99670081f857-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-pggsl\" (UID: \"91cc827b-b0d7-49d3-8c52-99670081f857\") " pod="openshift-marketplace/marketplace-operator-79b997595-pggsl" Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.083508 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/91cc827b-b0d7-49d3-8c52-99670081f857-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-pggsl\" (UID: \"91cc827b-b0d7-49d3-8c52-99670081f857\") " pod="openshift-marketplace/marketplace-operator-79b997595-pggsl" Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.083572 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rh62l\" (UniqueName: \"kubernetes.io/projected/91cc827b-b0d7-49d3-8c52-99670081f857-kube-api-access-rh62l\") pod \"marketplace-operator-79b997595-pggsl\" (UID: \"91cc827b-b0d7-49d3-8c52-99670081f857\") " pod="openshift-marketplace/marketplace-operator-79b997595-pggsl" Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.083611 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/91cc827b-b0d7-49d3-8c52-99670081f857-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-pggsl\" (UID: \"91cc827b-b0d7-49d3-8c52-99670081f857\") " pod="openshift-marketplace/marketplace-operator-79b997595-pggsl" Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.085916 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/91cc827b-b0d7-49d3-8c52-99670081f857-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-pggsl\" (UID: \"91cc827b-b0d7-49d3-8c52-99670081f857\") " pod="openshift-marketplace/marketplace-operator-79b997595-pggsl" Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.091291 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/91cc827b-b0d7-49d3-8c52-99670081f857-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-pggsl\" (UID: \"91cc827b-b0d7-49d3-8c52-99670081f857\") " pod="openshift-marketplace/marketplace-operator-79b997595-pggsl" Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.109052 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rh62l\" (UniqueName: \"kubernetes.io/projected/91cc827b-b0d7-49d3-8c52-99670081f857-kube-api-access-rh62l\") pod \"marketplace-operator-79b997595-pggsl\" (UID: \"91cc827b-b0d7-49d3-8c52-99670081f857\") " pod="openshift-marketplace/marketplace-operator-79b997595-pggsl" Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.136803 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-pggsl" Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.227819 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d2l24" Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.294817 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fsmg7" Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.301684 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s2z5v" Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.325871 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-xrmqm" Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.389071 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5196bfb6-4d27-4d41-8310-8efb2b8997bd-catalog-content\") pod \"5196bfb6-4d27-4d41-8310-8efb2b8997bd\" (UID: \"5196bfb6-4d27-4d41-8310-8efb2b8997bd\") " Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.389129 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgmst\" (UniqueName: \"kubernetes.io/projected/5196bfb6-4d27-4d41-8310-8efb2b8997bd-kube-api-access-hgmst\") pod \"5196bfb6-4d27-4d41-8310-8efb2b8997bd\" (UID: \"5196bfb6-4d27-4d41-8310-8efb2b8997bd\") " Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.389335 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5196bfb6-4d27-4d41-8310-8efb2b8997bd-utilities\") pod \"5196bfb6-4d27-4d41-8310-8efb2b8997bd\" (UID: \"5196bfb6-4d27-4d41-8310-8efb2b8997bd\") " Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.390250 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5196bfb6-4d27-4d41-8310-8efb2b8997bd-utilities" (OuterVolumeSpecName: "utilities") 
pod "5196bfb6-4d27-4d41-8310-8efb2b8997bd" (UID: "5196bfb6-4d27-4d41-8310-8efb2b8997bd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.394729 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5196bfb6-4d27-4d41-8310-8efb2b8997bd-kube-api-access-hgmst" (OuterVolumeSpecName: "kube-api-access-hgmst") pod "5196bfb6-4d27-4d41-8310-8efb2b8997bd" (UID: "5196bfb6-4d27-4d41-8310-8efb2b8997bd"). InnerVolumeSpecName "kube-api-access-hgmst". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.410315 4865 generic.go:334] "Generic (PLEG): container finished" podID="5196bfb6-4d27-4d41-8310-8efb2b8997bd" containerID="8756846c21c208ebd1b151e59120d3aa46b3d44f60e55ad3c24218c3544e0415" exitCode=0 Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.410431 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d2l24" event={"ID":"5196bfb6-4d27-4d41-8310-8efb2b8997bd","Type":"ContainerDied","Data":"8756846c21c208ebd1b151e59120d3aa46b3d44f60e55ad3c24218c3544e0415"} Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.410484 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d2l24" event={"ID":"5196bfb6-4d27-4d41-8310-8efb2b8997bd","Type":"ContainerDied","Data":"62c84098b2180eeec3445df4208d78bb46844717a91aef4add99b0a58960783f"} Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.410516 4865 scope.go:117] "RemoveContainer" containerID="8756846c21c208ebd1b151e59120d3aa46b3d44f60e55ad3c24218c3544e0415" Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.410695 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d2l24" Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.413211 4865 generic.go:334] "Generic (PLEG): container finished" podID="bea6b458-5aaa-4764-9f82-24ceff943498" containerID="7c735ebe3ec4c759f6b6e7b5802ade6e0366cfde510e030fe018db6def4b8e50" exitCode=0 Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.413276 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fsmg7" event={"ID":"bea6b458-5aaa-4764-9f82-24ceff943498","Type":"ContainerDied","Data":"7c735ebe3ec4c759f6b6e7b5802ade6e0366cfde510e030fe018db6def4b8e50"} Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.413321 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fsmg7" event={"ID":"bea6b458-5aaa-4764-9f82-24ceff943498","Type":"ContainerDied","Data":"8ffa4dd2e0fe7314907ae9f549f56e7fc0d78c2f7cf22b6d3204d3f2a8b31d3c"} Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.413441 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fsmg7" Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.417866 4865 generic.go:334] "Generic (PLEG): container finished" podID="f6cfa25f-5974-4b2e-9df0-b0e98112b561" containerID="19494448568ecc5e6314fe13f158c1bbd21c90760cd8e03724a29669de0e14d0" exitCode=0 Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.417970 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-s2z5v" Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.420671 4865 generic.go:334] "Generic (PLEG): container finished" podID="99f280f3-e7be-4a87-b8a9-b097ab14d671" containerID="99a36e9d9235ff75cd814d2a575670bf5325eba0a9a37611309ea376ce745c6e" exitCode=0 Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.420936 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-pfrgn" podUID="b98a89b4-7f44-411e-a2e3-b260ad781e89" containerName="registry-server" containerID="cri-o://d8550138f7b577571cd6c3b67faf8fd48f5b2d3237ca3e15e62a9138a1b9e35b" gracePeriod=30 Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.421396 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-xrmqm" Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.441032 4865 scope.go:117] "RemoveContainer" containerID="36b31593f2e5699aeb68dc1b789ba115189f210507a564013d4d5c97d2b2c033" Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.443869 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s2z5v" event={"ID":"f6cfa25f-5974-4b2e-9df0-b0e98112b561","Type":"ContainerDied","Data":"19494448568ecc5e6314fe13f158c1bbd21c90760cd8e03724a29669de0e14d0"} Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.444937 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s2z5v" event={"ID":"f6cfa25f-5974-4b2e-9df0-b0e98112b561","Type":"ContainerDied","Data":"11cbe6accc37a7e82a7effa8817560222487bfec3e6354cdafa897929c771b36"} Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.444957 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xrmqm" 
event={"ID":"99f280f3-e7be-4a87-b8a9-b097ab14d671","Type":"ContainerDied","Data":"99a36e9d9235ff75cd814d2a575670bf5325eba0a9a37611309ea376ce745c6e"} Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.444976 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xrmqm" event={"ID":"99f280f3-e7be-4a87-b8a9-b097ab14d671","Type":"ContainerDied","Data":"0cc89cad6edde4863d59459d260f48530f4f7fcccecfdb1c340b4cc1b0215fb9"} Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.455393 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5196bfb6-4d27-4d41-8310-8efb2b8997bd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5196bfb6-4d27-4d41-8310-8efb2b8997bd" (UID: "5196bfb6-4d27-4d41-8310-8efb2b8997bd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.461075 4865 scope.go:117] "RemoveContainer" containerID="7d6c51a0389d4713234b067d92de038ac0dbb4029b838d024cc9f0ff0c78f9fe" Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.491546 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/99f280f3-e7be-4a87-b8a9-b097ab14d671-marketplace-trusted-ca\") pod \"99f280f3-e7be-4a87-b8a9-b097ab14d671\" (UID: \"99f280f3-e7be-4a87-b8a9-b097ab14d671\") " Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.491728 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zv6gg\" (UniqueName: \"kubernetes.io/projected/99f280f3-e7be-4a87-b8a9-b097ab14d671-kube-api-access-zv6gg\") pod \"99f280f3-e7be-4a87-b8a9-b097ab14d671\" (UID: \"99f280f3-e7be-4a87-b8a9-b097ab14d671\") " Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.491849 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/99f280f3-e7be-4a87-b8a9-b097ab14d671-marketplace-operator-metrics\") pod \"99f280f3-e7be-4a87-b8a9-b097ab14d671\" (UID: \"99f280f3-e7be-4a87-b8a9-b097ab14d671\") " Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.491936 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6cfa25f-5974-4b2e-9df0-b0e98112b561-utilities\") pod \"f6cfa25f-5974-4b2e-9df0-b0e98112b561\" (UID: \"f6cfa25f-5974-4b2e-9df0-b0e98112b561\") " Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.492661 4865 scope.go:117] "RemoveContainer" containerID="8756846c21c208ebd1b151e59120d3aa46b3d44f60e55ad3c24218c3544e0415" Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.492923 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6cfa25f-5974-4b2e-9df0-b0e98112b561-catalog-content\") pod \"f6cfa25f-5974-4b2e-9df0-b0e98112b561\" (UID: \"f6cfa25f-5974-4b2e-9df0-b0e98112b561\") " Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.492970 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bea6b458-5aaa-4764-9f82-24ceff943498-utilities\") pod \"bea6b458-5aaa-4764-9f82-24ceff943498\" (UID: \"bea6b458-5aaa-4764-9f82-24ceff943498\") " Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.492998 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69spx\" (UniqueName: \"kubernetes.io/projected/bea6b458-5aaa-4764-9f82-24ceff943498-kube-api-access-69spx\") pod \"bea6b458-5aaa-4764-9f82-24ceff943498\" (UID: \"bea6b458-5aaa-4764-9f82-24ceff943498\") " Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.493439 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7kbr\" 
(UniqueName: \"kubernetes.io/projected/f6cfa25f-5974-4b2e-9df0-b0e98112b561-kube-api-access-n7kbr\") pod \"f6cfa25f-5974-4b2e-9df0-b0e98112b561\" (UID: \"f6cfa25f-5974-4b2e-9df0-b0e98112b561\") " Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.493504 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bea6b458-5aaa-4764-9f82-24ceff943498-catalog-content\") pod \"bea6b458-5aaa-4764-9f82-24ceff943498\" (UID: \"bea6b458-5aaa-4764-9f82-24ceff943498\") " Feb 16 22:51:46 crc kubenswrapper[4865]: E0216 22:51:46.493539 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8756846c21c208ebd1b151e59120d3aa46b3d44f60e55ad3c24218c3544e0415\": container with ID starting with 8756846c21c208ebd1b151e59120d3aa46b3d44f60e55ad3c24218c3544e0415 not found: ID does not exist" containerID="8756846c21c208ebd1b151e59120d3aa46b3d44f60e55ad3c24218c3544e0415" Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.493568 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6cfa25f-5974-4b2e-9df0-b0e98112b561-utilities" (OuterVolumeSpecName: "utilities") pod "f6cfa25f-5974-4b2e-9df0-b0e98112b561" (UID: "f6cfa25f-5974-4b2e-9df0-b0e98112b561"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.493608 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8756846c21c208ebd1b151e59120d3aa46b3d44f60e55ad3c24218c3544e0415"} err="failed to get container status \"8756846c21c208ebd1b151e59120d3aa46b3d44f60e55ad3c24218c3544e0415\": rpc error: code = NotFound desc = could not find container \"8756846c21c208ebd1b151e59120d3aa46b3d44f60e55ad3c24218c3544e0415\": container with ID starting with 8756846c21c208ebd1b151e59120d3aa46b3d44f60e55ad3c24218c3544e0415 not found: ID does not exist" Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.493652 4865 scope.go:117] "RemoveContainer" containerID="36b31593f2e5699aeb68dc1b789ba115189f210507a564013d4d5c97d2b2c033" Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.493822 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5196bfb6-4d27-4d41-8310-8efb2b8997bd-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.493844 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6cfa25f-5974-4b2e-9df0-b0e98112b561-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.493855 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hgmst\" (UniqueName: \"kubernetes.io/projected/5196bfb6-4d27-4d41-8310-8efb2b8997bd-kube-api-access-hgmst\") on node \"crc\" DevicePath \"\"" Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.493867 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5196bfb6-4d27-4d41-8310-8efb2b8997bd-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.493933 4865 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99f280f3-e7be-4a87-b8a9-b097ab14d671-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "99f280f3-e7be-4a87-b8a9-b097ab14d671" (UID: "99f280f3-e7be-4a87-b8a9-b097ab14d671"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.494077 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bea6b458-5aaa-4764-9f82-24ceff943498-utilities" (OuterVolumeSpecName: "utilities") pod "bea6b458-5aaa-4764-9f82-24ceff943498" (UID: "bea6b458-5aaa-4764-9f82-24ceff943498"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:51:46 crc kubenswrapper[4865]: E0216 22:51:46.494239 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36b31593f2e5699aeb68dc1b789ba115189f210507a564013d4d5c97d2b2c033\": container with ID starting with 36b31593f2e5699aeb68dc1b789ba115189f210507a564013d4d5c97d2b2c033 not found: ID does not exist" containerID="36b31593f2e5699aeb68dc1b789ba115189f210507a564013d4d5c97d2b2c033" Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.494308 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36b31593f2e5699aeb68dc1b789ba115189f210507a564013d4d5c97d2b2c033"} err="failed to get container status \"36b31593f2e5699aeb68dc1b789ba115189f210507a564013d4d5c97d2b2c033\": rpc error: code = NotFound desc = could not find container \"36b31593f2e5699aeb68dc1b789ba115189f210507a564013d4d5c97d2b2c033\": container with ID starting with 36b31593f2e5699aeb68dc1b789ba115189f210507a564013d4d5c97d2b2c033 not found: ID does not exist" Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.494346 4865 scope.go:117] "RemoveContainer" 
containerID="7d6c51a0389d4713234b067d92de038ac0dbb4029b838d024cc9f0ff0c78f9fe" Feb 16 22:51:46 crc kubenswrapper[4865]: E0216 22:51:46.494739 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d6c51a0389d4713234b067d92de038ac0dbb4029b838d024cc9f0ff0c78f9fe\": container with ID starting with 7d6c51a0389d4713234b067d92de038ac0dbb4029b838d024cc9f0ff0c78f9fe not found: ID does not exist" containerID="7d6c51a0389d4713234b067d92de038ac0dbb4029b838d024cc9f0ff0c78f9fe" Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.494799 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d6c51a0389d4713234b067d92de038ac0dbb4029b838d024cc9f0ff0c78f9fe"} err="failed to get container status \"7d6c51a0389d4713234b067d92de038ac0dbb4029b838d024cc9f0ff0c78f9fe\": rpc error: code = NotFound desc = could not find container \"7d6c51a0389d4713234b067d92de038ac0dbb4029b838d024cc9f0ff0c78f9fe\": container with ID starting with 7d6c51a0389d4713234b067d92de038ac0dbb4029b838d024cc9f0ff0c78f9fe not found: ID does not exist" Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.494838 4865 scope.go:117] "RemoveContainer" containerID="7c735ebe3ec4c759f6b6e7b5802ade6e0366cfde510e030fe018db6def4b8e50" Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.495949 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99f280f3-e7be-4a87-b8a9-b097ab14d671-kube-api-access-zv6gg" (OuterVolumeSpecName: "kube-api-access-zv6gg") pod "99f280f3-e7be-4a87-b8a9-b097ab14d671" (UID: "99f280f3-e7be-4a87-b8a9-b097ab14d671"). InnerVolumeSpecName "kube-api-access-zv6gg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.496298 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bea6b458-5aaa-4764-9f82-24ceff943498-kube-api-access-69spx" (OuterVolumeSpecName: "kube-api-access-69spx") pod "bea6b458-5aaa-4764-9f82-24ceff943498" (UID: "bea6b458-5aaa-4764-9f82-24ceff943498"). InnerVolumeSpecName "kube-api-access-69spx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.496415 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6cfa25f-5974-4b2e-9df0-b0e98112b561-kube-api-access-n7kbr" (OuterVolumeSpecName: "kube-api-access-n7kbr") pod "f6cfa25f-5974-4b2e-9df0-b0e98112b561" (UID: "f6cfa25f-5974-4b2e-9df0-b0e98112b561"). InnerVolumeSpecName "kube-api-access-n7kbr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.501170 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99f280f3-e7be-4a87-b8a9-b097ab14d671-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "99f280f3-e7be-4a87-b8a9-b097ab14d671" (UID: "99f280f3-e7be-4a87-b8a9-b097ab14d671"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.520429 4865 scope.go:117] "RemoveContainer" containerID="048ecf97a3cc95b724e40732b6bc1a9b1f53de615f741ae26a67e4c5c357cdf4" Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.539530 4865 scope.go:117] "RemoveContainer" containerID="90ff4f5fe9f021244df3ac85e4af42aa3cec99b55c22a082cc25c6b34055a4a8" Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.556647 4865 scope.go:117] "RemoveContainer" containerID="7c735ebe3ec4c759f6b6e7b5802ade6e0366cfde510e030fe018db6def4b8e50" Feb 16 22:51:46 crc kubenswrapper[4865]: E0216 22:51:46.557366 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c735ebe3ec4c759f6b6e7b5802ade6e0366cfde510e030fe018db6def4b8e50\": container with ID starting with 7c735ebe3ec4c759f6b6e7b5802ade6e0366cfde510e030fe018db6def4b8e50 not found: ID does not exist" containerID="7c735ebe3ec4c759f6b6e7b5802ade6e0366cfde510e030fe018db6def4b8e50" Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.557451 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c735ebe3ec4c759f6b6e7b5802ade6e0366cfde510e030fe018db6def4b8e50"} err="failed to get container status \"7c735ebe3ec4c759f6b6e7b5802ade6e0366cfde510e030fe018db6def4b8e50\": rpc error: code = NotFound desc = could not find container \"7c735ebe3ec4c759f6b6e7b5802ade6e0366cfde510e030fe018db6def4b8e50\": container with ID starting with 7c735ebe3ec4c759f6b6e7b5802ade6e0366cfde510e030fe018db6def4b8e50 not found: ID does not exist" Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.557491 4865 scope.go:117] "RemoveContainer" containerID="048ecf97a3cc95b724e40732b6bc1a9b1f53de615f741ae26a67e4c5c357cdf4" Feb 16 22:51:46 crc kubenswrapper[4865]: E0216 22:51:46.558033 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could 
not find container \"048ecf97a3cc95b724e40732b6bc1a9b1f53de615f741ae26a67e4c5c357cdf4\": container with ID starting with 048ecf97a3cc95b724e40732b6bc1a9b1f53de615f741ae26a67e4c5c357cdf4 not found: ID does not exist" containerID="048ecf97a3cc95b724e40732b6bc1a9b1f53de615f741ae26a67e4c5c357cdf4" Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.558089 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"048ecf97a3cc95b724e40732b6bc1a9b1f53de615f741ae26a67e4c5c357cdf4"} err="failed to get container status \"048ecf97a3cc95b724e40732b6bc1a9b1f53de615f741ae26a67e4c5c357cdf4\": rpc error: code = NotFound desc = could not find container \"048ecf97a3cc95b724e40732b6bc1a9b1f53de615f741ae26a67e4c5c357cdf4\": container with ID starting with 048ecf97a3cc95b724e40732b6bc1a9b1f53de615f741ae26a67e4c5c357cdf4 not found: ID does not exist" Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.558125 4865 scope.go:117] "RemoveContainer" containerID="90ff4f5fe9f021244df3ac85e4af42aa3cec99b55c22a082cc25c6b34055a4a8" Feb 16 22:51:46 crc kubenswrapper[4865]: E0216 22:51:46.558485 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90ff4f5fe9f021244df3ac85e4af42aa3cec99b55c22a082cc25c6b34055a4a8\": container with ID starting with 90ff4f5fe9f021244df3ac85e4af42aa3cec99b55c22a082cc25c6b34055a4a8 not found: ID does not exist" containerID="90ff4f5fe9f021244df3ac85e4af42aa3cec99b55c22a082cc25c6b34055a4a8" Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.558515 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90ff4f5fe9f021244df3ac85e4af42aa3cec99b55c22a082cc25c6b34055a4a8"} err="failed to get container status \"90ff4f5fe9f021244df3ac85e4af42aa3cec99b55c22a082cc25c6b34055a4a8\": rpc error: code = NotFound desc = could not find container \"90ff4f5fe9f021244df3ac85e4af42aa3cec99b55c22a082cc25c6b34055a4a8\": 
container with ID starting with 90ff4f5fe9f021244df3ac85e4af42aa3cec99b55c22a082cc25c6b34055a4a8 not found: ID does not exist" Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.558530 4865 scope.go:117] "RemoveContainer" containerID="19494448568ecc5e6314fe13f158c1bbd21c90760cd8e03724a29669de0e14d0" Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.565060 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6cfa25f-5974-4b2e-9df0-b0e98112b561-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f6cfa25f-5974-4b2e-9df0-b0e98112b561" (UID: "f6cfa25f-5974-4b2e-9df0-b0e98112b561"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.578196 4865 scope.go:117] "RemoveContainer" containerID="a62ef48dbfdbdd86b7d8747fee92cc34d2c9dbd8d8952a5d089a123a60cfe5e4" Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.595042 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6cfa25f-5974-4b2e-9df0-b0e98112b561-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.595058 4865 scope.go:117] "RemoveContainer" containerID="51748eacd830388a0a57d7be4bf8a8ee5524705d11a62a25178bfcd7ae9f7d60" Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.595083 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bea6b458-5aaa-4764-9f82-24ceff943498-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.595213 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69spx\" (UniqueName: \"kubernetes.io/projected/bea6b458-5aaa-4764-9f82-24ceff943498-kube-api-access-69spx\") on node \"crc\" DevicePath \"\"" Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.595231 4865 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7kbr\" (UniqueName: \"kubernetes.io/projected/f6cfa25f-5974-4b2e-9df0-b0e98112b561-kube-api-access-n7kbr\") on node \"crc\" DevicePath \"\"" Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.595245 4865 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/99f280f3-e7be-4a87-b8a9-b097ab14d671-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.595260 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zv6gg\" (UniqueName: \"kubernetes.io/projected/99f280f3-e7be-4a87-b8a9-b097ab14d671-kube-api-access-zv6gg\") on node \"crc\" DevicePath \"\"" Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.595273 4865 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/99f280f3-e7be-4a87-b8a9-b097ab14d671-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.613234 4865 scope.go:117] "RemoveContainer" containerID="19494448568ecc5e6314fe13f158c1bbd21c90760cd8e03724a29669de0e14d0" Feb 16 22:51:46 crc kubenswrapper[4865]: E0216 22:51:46.613903 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19494448568ecc5e6314fe13f158c1bbd21c90760cd8e03724a29669de0e14d0\": container with ID starting with 19494448568ecc5e6314fe13f158c1bbd21c90760cd8e03724a29669de0e14d0 not found: ID does not exist" containerID="19494448568ecc5e6314fe13f158c1bbd21c90760cd8e03724a29669de0e14d0" Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.613936 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19494448568ecc5e6314fe13f158c1bbd21c90760cd8e03724a29669de0e14d0"} err="failed to get container status 
\"19494448568ecc5e6314fe13f158c1bbd21c90760cd8e03724a29669de0e14d0\": rpc error: code = NotFound desc = could not find container \"19494448568ecc5e6314fe13f158c1bbd21c90760cd8e03724a29669de0e14d0\": container with ID starting with 19494448568ecc5e6314fe13f158c1bbd21c90760cd8e03724a29669de0e14d0 not found: ID does not exist" Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.613962 4865 scope.go:117] "RemoveContainer" containerID="a62ef48dbfdbdd86b7d8747fee92cc34d2c9dbd8d8952a5d089a123a60cfe5e4" Feb 16 22:51:46 crc kubenswrapper[4865]: E0216 22:51:46.614421 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a62ef48dbfdbdd86b7d8747fee92cc34d2c9dbd8d8952a5d089a123a60cfe5e4\": container with ID starting with a62ef48dbfdbdd86b7d8747fee92cc34d2c9dbd8d8952a5d089a123a60cfe5e4 not found: ID does not exist" containerID="a62ef48dbfdbdd86b7d8747fee92cc34d2c9dbd8d8952a5d089a123a60cfe5e4" Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.614447 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a62ef48dbfdbdd86b7d8747fee92cc34d2c9dbd8d8952a5d089a123a60cfe5e4"} err="failed to get container status \"a62ef48dbfdbdd86b7d8747fee92cc34d2c9dbd8d8952a5d089a123a60cfe5e4\": rpc error: code = NotFound desc = could not find container \"a62ef48dbfdbdd86b7d8747fee92cc34d2c9dbd8d8952a5d089a123a60cfe5e4\": container with ID starting with a62ef48dbfdbdd86b7d8747fee92cc34d2c9dbd8d8952a5d089a123a60cfe5e4 not found: ID does not exist" Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.614461 4865 scope.go:117] "RemoveContainer" containerID="51748eacd830388a0a57d7be4bf8a8ee5524705d11a62a25178bfcd7ae9f7d60" Feb 16 22:51:46 crc kubenswrapper[4865]: E0216 22:51:46.614688 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"51748eacd830388a0a57d7be4bf8a8ee5524705d11a62a25178bfcd7ae9f7d60\": container with ID starting with 51748eacd830388a0a57d7be4bf8a8ee5524705d11a62a25178bfcd7ae9f7d60 not found: ID does not exist" containerID="51748eacd830388a0a57d7be4bf8a8ee5524705d11a62a25178bfcd7ae9f7d60" Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.614714 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51748eacd830388a0a57d7be4bf8a8ee5524705d11a62a25178bfcd7ae9f7d60"} err="failed to get container status \"51748eacd830388a0a57d7be4bf8a8ee5524705d11a62a25178bfcd7ae9f7d60\": rpc error: code = NotFound desc = could not find container \"51748eacd830388a0a57d7be4bf8a8ee5524705d11a62a25178bfcd7ae9f7d60\": container with ID starting with 51748eacd830388a0a57d7be4bf8a8ee5524705d11a62a25178bfcd7ae9f7d60 not found: ID does not exist" Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.614729 4865 scope.go:117] "RemoveContainer" containerID="99a36e9d9235ff75cd814d2a575670bf5325eba0a9a37611309ea376ce745c6e" Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.634216 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bea6b458-5aaa-4764-9f82-24ceff943498-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bea6b458-5aaa-4764-9f82-24ceff943498" (UID: "bea6b458-5aaa-4764-9f82-24ceff943498"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.635230 4865 scope.go:117] "RemoveContainer" containerID="f8d9cdc07d925d2d67107ff20b1c5026902e3bb3ee16dc9f98d8aa97473829cc" Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.651603 4865 scope.go:117] "RemoveContainer" containerID="99a36e9d9235ff75cd814d2a575670bf5325eba0a9a37611309ea376ce745c6e" Feb 16 22:51:46 crc kubenswrapper[4865]: E0216 22:51:46.652141 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99a36e9d9235ff75cd814d2a575670bf5325eba0a9a37611309ea376ce745c6e\": container with ID starting with 99a36e9d9235ff75cd814d2a575670bf5325eba0a9a37611309ea376ce745c6e not found: ID does not exist" containerID="99a36e9d9235ff75cd814d2a575670bf5325eba0a9a37611309ea376ce745c6e" Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.652164 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99a36e9d9235ff75cd814d2a575670bf5325eba0a9a37611309ea376ce745c6e"} err="failed to get container status \"99a36e9d9235ff75cd814d2a575670bf5325eba0a9a37611309ea376ce745c6e\": rpc error: code = NotFound desc = could not find container \"99a36e9d9235ff75cd814d2a575670bf5325eba0a9a37611309ea376ce745c6e\": container with ID starting with 99a36e9d9235ff75cd814d2a575670bf5325eba0a9a37611309ea376ce745c6e not found: ID does not exist" Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.652182 4865 scope.go:117] "RemoveContainer" containerID="f8d9cdc07d925d2d67107ff20b1c5026902e3bb3ee16dc9f98d8aa97473829cc" Feb 16 22:51:46 crc kubenswrapper[4865]: E0216 22:51:46.652626 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8d9cdc07d925d2d67107ff20b1c5026902e3bb3ee16dc9f98d8aa97473829cc\": container with ID starting with 
f8d9cdc07d925d2d67107ff20b1c5026902e3bb3ee16dc9f98d8aa97473829cc not found: ID does not exist" containerID="f8d9cdc07d925d2d67107ff20b1c5026902e3bb3ee16dc9f98d8aa97473829cc" Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.652651 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8d9cdc07d925d2d67107ff20b1c5026902e3bb3ee16dc9f98d8aa97473829cc"} err="failed to get container status \"f8d9cdc07d925d2d67107ff20b1c5026902e3bb3ee16dc9f98d8aa97473829cc\": rpc error: code = NotFound desc = could not find container \"f8d9cdc07d925d2d67107ff20b1c5026902e3bb3ee16dc9f98d8aa97473829cc\": container with ID starting with f8d9cdc07d925d2d67107ff20b1c5026902e3bb3ee16dc9f98d8aa97473829cc not found: ID does not exist" Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.691540 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pggsl"] Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.700862 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bea6b458-5aaa-4764-9f82-24ceff943498-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 22:51:46 crc kubenswrapper[4865]: W0216 22:51:46.707457 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod91cc827b_b0d7_49d3_8c52_99670081f857.slice/crio-68f51b1232e28fda37fa85c354eb9e5eeac13cbd889b790eeba15a26441c3b30 WatchSource:0}: Error finding container 68f51b1232e28fda37fa85c354eb9e5eeac13cbd889b790eeba15a26441c3b30: Status 404 returned error can't find the container with id 68f51b1232e28fda37fa85c354eb9e5eeac13cbd889b790eeba15a26441c3b30 Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.757978 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d2l24"] Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.765599 
4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-d2l24"] Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.768590 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fsmg7"] Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.772088 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fsmg7"] Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.813964 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s2z5v"] Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.817289 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-s2z5v"] Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.828953 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xrmqm"] Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.832221 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xrmqm"] Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.842164 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pfrgn" Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.910322 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b98a89b4-7f44-411e-a2e3-b260ad781e89-catalog-content\") pod \"b98a89b4-7f44-411e-a2e3-b260ad781e89\" (UID: \"b98a89b4-7f44-411e-a2e3-b260ad781e89\") " Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.910402 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jm6f8\" (UniqueName: \"kubernetes.io/projected/b98a89b4-7f44-411e-a2e3-b260ad781e89-kube-api-access-jm6f8\") pod \"b98a89b4-7f44-411e-a2e3-b260ad781e89\" (UID: \"b98a89b4-7f44-411e-a2e3-b260ad781e89\") " Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.910459 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b98a89b4-7f44-411e-a2e3-b260ad781e89-utilities\") pod \"b98a89b4-7f44-411e-a2e3-b260ad781e89\" (UID: \"b98a89b4-7f44-411e-a2e3-b260ad781e89\") " Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.911600 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b98a89b4-7f44-411e-a2e3-b260ad781e89-utilities" (OuterVolumeSpecName: "utilities") pod "b98a89b4-7f44-411e-a2e3-b260ad781e89" (UID: "b98a89b4-7f44-411e-a2e3-b260ad781e89"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.916928 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b98a89b4-7f44-411e-a2e3-b260ad781e89-kube-api-access-jm6f8" (OuterVolumeSpecName: "kube-api-access-jm6f8") pod "b98a89b4-7f44-411e-a2e3-b260ad781e89" (UID: "b98a89b4-7f44-411e-a2e3-b260ad781e89"). InnerVolumeSpecName "kube-api-access-jm6f8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:51:46 crc kubenswrapper[4865]: I0216 22:51:46.937575 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b98a89b4-7f44-411e-a2e3-b260ad781e89-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b98a89b4-7f44-411e-a2e3-b260ad781e89" (UID: "b98a89b4-7f44-411e-a2e3-b260ad781e89"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:51:47 crc kubenswrapper[4865]: I0216 22:51:47.012096 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b98a89b4-7f44-411e-a2e3-b260ad781e89-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 22:51:47 crc kubenswrapper[4865]: I0216 22:51:47.012155 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jm6f8\" (UniqueName: \"kubernetes.io/projected/b98a89b4-7f44-411e-a2e3-b260ad781e89-kube-api-access-jm6f8\") on node \"crc\" DevicePath \"\"" Feb 16 22:51:47 crc kubenswrapper[4865]: I0216 22:51:47.012170 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b98a89b4-7f44-411e-a2e3-b260ad781e89-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 22:51:47 crc kubenswrapper[4865]: I0216 22:51:47.431685 4865 generic.go:334] "Generic (PLEG): container finished" podID="b98a89b4-7f44-411e-a2e3-b260ad781e89" containerID="d8550138f7b577571cd6c3b67faf8fd48f5b2d3237ca3e15e62a9138a1b9e35b" exitCode=0 Feb 16 22:51:47 crc kubenswrapper[4865]: I0216 22:51:47.431771 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pfrgn" Feb 16 22:51:47 crc kubenswrapper[4865]: I0216 22:51:47.431766 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pfrgn" event={"ID":"b98a89b4-7f44-411e-a2e3-b260ad781e89","Type":"ContainerDied","Data":"d8550138f7b577571cd6c3b67faf8fd48f5b2d3237ca3e15e62a9138a1b9e35b"} Feb 16 22:51:47 crc kubenswrapper[4865]: I0216 22:51:47.431862 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pfrgn" event={"ID":"b98a89b4-7f44-411e-a2e3-b260ad781e89","Type":"ContainerDied","Data":"d887f595c067d0c767517e77303624f5a02e70e51ea053acfe6d4cbc0f2c1260"} Feb 16 22:51:47 crc kubenswrapper[4865]: I0216 22:51:47.431896 4865 scope.go:117] "RemoveContainer" containerID="d8550138f7b577571cd6c3b67faf8fd48f5b2d3237ca3e15e62a9138a1b9e35b" Feb 16 22:51:47 crc kubenswrapper[4865]: I0216 22:51:47.435713 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-pggsl" event={"ID":"91cc827b-b0d7-49d3-8c52-99670081f857","Type":"ContainerStarted","Data":"781108cc82cd18f000278752b0c9babc99f85cbd0281bf75407a8358edc75ce9"} Feb 16 22:51:47 crc kubenswrapper[4865]: I0216 22:51:47.435761 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-pggsl" event={"ID":"91cc827b-b0d7-49d3-8c52-99670081f857","Type":"ContainerStarted","Data":"68f51b1232e28fda37fa85c354eb9e5eeac13cbd889b790eeba15a26441c3b30"} Feb 16 22:51:47 crc kubenswrapper[4865]: I0216 22:51:47.436064 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-pggsl" Feb 16 22:51:47 crc kubenswrapper[4865]: I0216 22:51:47.442815 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-pggsl" Feb 16 22:51:47 crc 
kubenswrapper[4865]: I0216 22:51:47.451433 4865 scope.go:117] "RemoveContainer" containerID="050fb68d1614155083d42913c36cd8f7fc78bc22db9fb3165a8a801083fef7f9" Feb 16 22:51:47 crc kubenswrapper[4865]: I0216 22:51:47.466325 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-pggsl" podStartSLOduration=2.466283829 podStartE2EDuration="2.466283829s" podCreationTimestamp="2026-02-16 22:51:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:51:47.465026104 +0000 UTC m=+347.788733065" watchObservedRunningTime="2026-02-16 22:51:47.466283829 +0000 UTC m=+347.789990790" Feb 16 22:51:47 crc kubenswrapper[4865]: I0216 22:51:47.504473 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pfrgn"] Feb 16 22:51:47 crc kubenswrapper[4865]: I0216 22:51:47.504973 4865 scope.go:117] "RemoveContainer" containerID="933920988f26799d4c4ee12578b2d67f36a4d4106d11b12716d858e4adb3251a" Feb 16 22:51:47 crc kubenswrapper[4865]: I0216 22:51:47.515594 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-pfrgn"] Feb 16 22:51:47 crc kubenswrapper[4865]: I0216 22:51:47.528811 4865 scope.go:117] "RemoveContainer" containerID="d8550138f7b577571cd6c3b67faf8fd48f5b2d3237ca3e15e62a9138a1b9e35b" Feb 16 22:51:47 crc kubenswrapper[4865]: E0216 22:51:47.531692 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8550138f7b577571cd6c3b67faf8fd48f5b2d3237ca3e15e62a9138a1b9e35b\": container with ID starting with d8550138f7b577571cd6c3b67faf8fd48f5b2d3237ca3e15e62a9138a1b9e35b not found: ID does not exist" containerID="d8550138f7b577571cd6c3b67faf8fd48f5b2d3237ca3e15e62a9138a1b9e35b" Feb 16 22:51:47 crc kubenswrapper[4865]: I0216 22:51:47.531785 4865 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8550138f7b577571cd6c3b67faf8fd48f5b2d3237ca3e15e62a9138a1b9e35b"} err="failed to get container status \"d8550138f7b577571cd6c3b67faf8fd48f5b2d3237ca3e15e62a9138a1b9e35b\": rpc error: code = NotFound desc = could not find container \"d8550138f7b577571cd6c3b67faf8fd48f5b2d3237ca3e15e62a9138a1b9e35b\": container with ID starting with d8550138f7b577571cd6c3b67faf8fd48f5b2d3237ca3e15e62a9138a1b9e35b not found: ID does not exist" Feb 16 22:51:47 crc kubenswrapper[4865]: I0216 22:51:47.531846 4865 scope.go:117] "RemoveContainer" containerID="050fb68d1614155083d42913c36cd8f7fc78bc22db9fb3165a8a801083fef7f9" Feb 16 22:51:47 crc kubenswrapper[4865]: E0216 22:51:47.532431 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"050fb68d1614155083d42913c36cd8f7fc78bc22db9fb3165a8a801083fef7f9\": container with ID starting with 050fb68d1614155083d42913c36cd8f7fc78bc22db9fb3165a8a801083fef7f9 not found: ID does not exist" containerID="050fb68d1614155083d42913c36cd8f7fc78bc22db9fb3165a8a801083fef7f9" Feb 16 22:51:47 crc kubenswrapper[4865]: I0216 22:51:47.532459 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"050fb68d1614155083d42913c36cd8f7fc78bc22db9fb3165a8a801083fef7f9"} err="failed to get container status \"050fb68d1614155083d42913c36cd8f7fc78bc22db9fb3165a8a801083fef7f9\": rpc error: code = NotFound desc = could not find container \"050fb68d1614155083d42913c36cd8f7fc78bc22db9fb3165a8a801083fef7f9\": container with ID starting with 050fb68d1614155083d42913c36cd8f7fc78bc22db9fb3165a8a801083fef7f9 not found: ID does not exist" Feb 16 22:51:47 crc kubenswrapper[4865]: I0216 22:51:47.532477 4865 scope.go:117] "RemoveContainer" containerID="933920988f26799d4c4ee12578b2d67f36a4d4106d11b12716d858e4adb3251a" Feb 16 22:51:47 crc kubenswrapper[4865]: E0216 
22:51:47.532746 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"933920988f26799d4c4ee12578b2d67f36a4d4106d11b12716d858e4adb3251a\": container with ID starting with 933920988f26799d4c4ee12578b2d67f36a4d4106d11b12716d858e4adb3251a not found: ID does not exist" containerID="933920988f26799d4c4ee12578b2d67f36a4d4106d11b12716d858e4adb3251a" Feb 16 22:51:47 crc kubenswrapper[4865]: I0216 22:51:47.532771 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"933920988f26799d4c4ee12578b2d67f36a4d4106d11b12716d858e4adb3251a"} err="failed to get container status \"933920988f26799d4c4ee12578b2d67f36a4d4106d11b12716d858e4adb3251a\": rpc error: code = NotFound desc = could not find container \"933920988f26799d4c4ee12578b2d67f36a4d4106d11b12716d858e4adb3251a\": container with ID starting with 933920988f26799d4c4ee12578b2d67f36a4d4106d11b12716d858e4adb3251a not found: ID does not exist" Feb 16 22:51:48 crc kubenswrapper[4865]: I0216 22:51:48.013139 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-44d2j"] Feb 16 22:51:48 crc kubenswrapper[4865]: E0216 22:51:48.013477 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5196bfb6-4d27-4d41-8310-8efb2b8997bd" containerName="extract-utilities" Feb 16 22:51:48 crc kubenswrapper[4865]: I0216 22:51:48.013527 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="5196bfb6-4d27-4d41-8310-8efb2b8997bd" containerName="extract-utilities" Feb 16 22:51:48 crc kubenswrapper[4865]: E0216 22:51:48.013546 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b98a89b4-7f44-411e-a2e3-b260ad781e89" containerName="extract-content" Feb 16 22:51:48 crc kubenswrapper[4865]: I0216 22:51:48.013554 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="b98a89b4-7f44-411e-a2e3-b260ad781e89" containerName="extract-content" Feb 16 22:51:48 crc 
kubenswrapper[4865]: E0216 22:51:48.013568 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bea6b458-5aaa-4764-9f82-24ceff943498" containerName="extract-utilities" Feb 16 22:51:48 crc kubenswrapper[4865]: I0216 22:51:48.013577 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="bea6b458-5aaa-4764-9f82-24ceff943498" containerName="extract-utilities" Feb 16 22:51:48 crc kubenswrapper[4865]: E0216 22:51:48.013591 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5196bfb6-4d27-4d41-8310-8efb2b8997bd" containerName="extract-content" Feb 16 22:51:48 crc kubenswrapper[4865]: I0216 22:51:48.013598 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="5196bfb6-4d27-4d41-8310-8efb2b8997bd" containerName="extract-content" Feb 16 22:51:48 crc kubenswrapper[4865]: E0216 22:51:48.013607 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99f280f3-e7be-4a87-b8a9-b097ab14d671" containerName="marketplace-operator" Feb 16 22:51:48 crc kubenswrapper[4865]: I0216 22:51:48.013615 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="99f280f3-e7be-4a87-b8a9-b097ab14d671" containerName="marketplace-operator" Feb 16 22:51:48 crc kubenswrapper[4865]: E0216 22:51:48.013629 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bea6b458-5aaa-4764-9f82-24ceff943498" containerName="extract-content" Feb 16 22:51:48 crc kubenswrapper[4865]: I0216 22:51:48.013637 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="bea6b458-5aaa-4764-9f82-24ceff943498" containerName="extract-content" Feb 16 22:51:48 crc kubenswrapper[4865]: E0216 22:51:48.013650 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99f280f3-e7be-4a87-b8a9-b097ab14d671" containerName="marketplace-operator" Feb 16 22:51:48 crc kubenswrapper[4865]: I0216 22:51:48.013658 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="99f280f3-e7be-4a87-b8a9-b097ab14d671" containerName="marketplace-operator" Feb 16 
22:51:48 crc kubenswrapper[4865]: E0216 22:51:48.013666 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bea6b458-5aaa-4764-9f82-24ceff943498" containerName="registry-server" Feb 16 22:51:48 crc kubenswrapper[4865]: I0216 22:51:48.013675 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="bea6b458-5aaa-4764-9f82-24ceff943498" containerName="registry-server" Feb 16 22:51:48 crc kubenswrapper[4865]: E0216 22:51:48.013690 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5196bfb6-4d27-4d41-8310-8efb2b8997bd" containerName="registry-server" Feb 16 22:51:48 crc kubenswrapper[4865]: I0216 22:51:48.013698 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="5196bfb6-4d27-4d41-8310-8efb2b8997bd" containerName="registry-server" Feb 16 22:51:48 crc kubenswrapper[4865]: E0216 22:51:48.013714 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6cfa25f-5974-4b2e-9df0-b0e98112b561" containerName="extract-content" Feb 16 22:51:48 crc kubenswrapper[4865]: I0216 22:51:48.013723 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6cfa25f-5974-4b2e-9df0-b0e98112b561" containerName="extract-content" Feb 16 22:51:48 crc kubenswrapper[4865]: E0216 22:51:48.013737 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b98a89b4-7f44-411e-a2e3-b260ad781e89" containerName="registry-server" Feb 16 22:51:48 crc kubenswrapper[4865]: I0216 22:51:48.013746 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="b98a89b4-7f44-411e-a2e3-b260ad781e89" containerName="registry-server" Feb 16 22:51:48 crc kubenswrapper[4865]: E0216 22:51:48.013754 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b98a89b4-7f44-411e-a2e3-b260ad781e89" containerName="extract-utilities" Feb 16 22:51:48 crc kubenswrapper[4865]: I0216 22:51:48.013761 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="b98a89b4-7f44-411e-a2e3-b260ad781e89" containerName="extract-utilities" Feb 16 22:51:48 
crc kubenswrapper[4865]: E0216 22:51:48.013770 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6cfa25f-5974-4b2e-9df0-b0e98112b561" containerName="extract-utilities" Feb 16 22:51:48 crc kubenswrapper[4865]: I0216 22:51:48.013778 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6cfa25f-5974-4b2e-9df0-b0e98112b561" containerName="extract-utilities" Feb 16 22:51:48 crc kubenswrapper[4865]: E0216 22:51:48.013790 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6cfa25f-5974-4b2e-9df0-b0e98112b561" containerName="registry-server" Feb 16 22:51:48 crc kubenswrapper[4865]: I0216 22:51:48.013798 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6cfa25f-5974-4b2e-9df0-b0e98112b561" containerName="registry-server" Feb 16 22:51:48 crc kubenswrapper[4865]: I0216 22:51:48.013930 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="b98a89b4-7f44-411e-a2e3-b260ad781e89" containerName="registry-server" Feb 16 22:51:48 crc kubenswrapper[4865]: I0216 22:51:48.013986 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6cfa25f-5974-4b2e-9df0-b0e98112b561" containerName="registry-server" Feb 16 22:51:48 crc kubenswrapper[4865]: I0216 22:51:48.014002 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="99f280f3-e7be-4a87-b8a9-b097ab14d671" containerName="marketplace-operator" Feb 16 22:51:48 crc kubenswrapper[4865]: I0216 22:51:48.014014 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="5196bfb6-4d27-4d41-8310-8efb2b8997bd" containerName="registry-server" Feb 16 22:51:48 crc kubenswrapper[4865]: I0216 22:51:48.014025 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="bea6b458-5aaa-4764-9f82-24ceff943498" containerName="registry-server" Feb 16 22:51:48 crc kubenswrapper[4865]: I0216 22:51:48.014343 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="99f280f3-e7be-4a87-b8a9-b097ab14d671" 
containerName="marketplace-operator" Feb 16 22:51:48 crc kubenswrapper[4865]: I0216 22:51:48.015165 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-44d2j" Feb 16 22:51:48 crc kubenswrapper[4865]: I0216 22:51:48.018629 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 16 22:51:48 crc kubenswrapper[4865]: I0216 22:51:48.025672 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-44d2j"] Feb 16 22:51:48 crc kubenswrapper[4865]: I0216 22:51:48.126475 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1e60e96-48c4-4e5e-9561-1bc4ca0aa959-catalog-content\") pod \"redhat-marketplace-44d2j\" (UID: \"f1e60e96-48c4-4e5e-9561-1bc4ca0aa959\") " pod="openshift-marketplace/redhat-marketplace-44d2j" Feb 16 22:51:48 crc kubenswrapper[4865]: I0216 22:51:48.126546 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1e60e96-48c4-4e5e-9561-1bc4ca0aa959-utilities\") pod \"redhat-marketplace-44d2j\" (UID: \"f1e60e96-48c4-4e5e-9561-1bc4ca0aa959\") " pod="openshift-marketplace/redhat-marketplace-44d2j" Feb 16 22:51:48 crc kubenswrapper[4865]: I0216 22:51:48.126604 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kb2hz\" (UniqueName: \"kubernetes.io/projected/f1e60e96-48c4-4e5e-9561-1bc4ca0aa959-kube-api-access-kb2hz\") pod \"redhat-marketplace-44d2j\" (UID: \"f1e60e96-48c4-4e5e-9561-1bc4ca0aa959\") " pod="openshift-marketplace/redhat-marketplace-44d2j" Feb 16 22:51:48 crc kubenswrapper[4865]: I0216 22:51:48.210501 4865 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/certified-operators-7pqqn"] Feb 16 22:51:48 crc kubenswrapper[4865]: I0216 22:51:48.211950 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7pqqn" Feb 16 22:51:48 crc kubenswrapper[4865]: I0216 22:51:48.214233 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 16 22:51:48 crc kubenswrapper[4865]: I0216 22:51:48.228130 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1e60e96-48c4-4e5e-9561-1bc4ca0aa959-catalog-content\") pod \"redhat-marketplace-44d2j\" (UID: \"f1e60e96-48c4-4e5e-9561-1bc4ca0aa959\") " pod="openshift-marketplace/redhat-marketplace-44d2j" Feb 16 22:51:48 crc kubenswrapper[4865]: I0216 22:51:48.228245 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1e60e96-48c4-4e5e-9561-1bc4ca0aa959-utilities\") pod \"redhat-marketplace-44d2j\" (UID: \"f1e60e96-48c4-4e5e-9561-1bc4ca0aa959\") " pod="openshift-marketplace/redhat-marketplace-44d2j" Feb 16 22:51:48 crc kubenswrapper[4865]: I0216 22:51:48.228362 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kb2hz\" (UniqueName: \"kubernetes.io/projected/f1e60e96-48c4-4e5e-9561-1bc4ca0aa959-kube-api-access-kb2hz\") pod \"redhat-marketplace-44d2j\" (UID: \"f1e60e96-48c4-4e5e-9561-1bc4ca0aa959\") " pod="openshift-marketplace/redhat-marketplace-44d2j" Feb 16 22:51:48 crc kubenswrapper[4865]: I0216 22:51:48.228421 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41e68cd1-b151-4f99-b70f-43aced8e8b6d-utilities\") pod \"certified-operators-7pqqn\" (UID: \"41e68cd1-b151-4f99-b70f-43aced8e8b6d\") " 
pod="openshift-marketplace/certified-operators-7pqqn" Feb 16 22:51:48 crc kubenswrapper[4865]: I0216 22:51:48.228464 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41e68cd1-b151-4f99-b70f-43aced8e8b6d-catalog-content\") pod \"certified-operators-7pqqn\" (UID: \"41e68cd1-b151-4f99-b70f-43aced8e8b6d\") " pod="openshift-marketplace/certified-operators-7pqqn" Feb 16 22:51:48 crc kubenswrapper[4865]: I0216 22:51:48.228506 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vf22k\" (UniqueName: \"kubernetes.io/projected/41e68cd1-b151-4f99-b70f-43aced8e8b6d-kube-api-access-vf22k\") pod \"certified-operators-7pqqn\" (UID: \"41e68cd1-b151-4f99-b70f-43aced8e8b6d\") " pod="openshift-marketplace/certified-operators-7pqqn" Feb 16 22:51:48 crc kubenswrapper[4865]: I0216 22:51:48.228593 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7pqqn"] Feb 16 22:51:48 crc kubenswrapper[4865]: I0216 22:51:48.228926 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1e60e96-48c4-4e5e-9561-1bc4ca0aa959-utilities\") pod \"redhat-marketplace-44d2j\" (UID: \"f1e60e96-48c4-4e5e-9561-1bc4ca0aa959\") " pod="openshift-marketplace/redhat-marketplace-44d2j" Feb 16 22:51:48 crc kubenswrapper[4865]: I0216 22:51:48.229066 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1e60e96-48c4-4e5e-9561-1bc4ca0aa959-catalog-content\") pod \"redhat-marketplace-44d2j\" (UID: \"f1e60e96-48c4-4e5e-9561-1bc4ca0aa959\") " pod="openshift-marketplace/redhat-marketplace-44d2j" Feb 16 22:51:48 crc kubenswrapper[4865]: I0216 22:51:48.251333 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kb2hz\" 
(UniqueName: \"kubernetes.io/projected/f1e60e96-48c4-4e5e-9561-1bc4ca0aa959-kube-api-access-kb2hz\") pod \"redhat-marketplace-44d2j\" (UID: \"f1e60e96-48c4-4e5e-9561-1bc4ca0aa959\") " pod="openshift-marketplace/redhat-marketplace-44d2j" Feb 16 22:51:48 crc kubenswrapper[4865]: I0216 22:51:48.330179 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41e68cd1-b151-4f99-b70f-43aced8e8b6d-catalog-content\") pod \"certified-operators-7pqqn\" (UID: \"41e68cd1-b151-4f99-b70f-43aced8e8b6d\") " pod="openshift-marketplace/certified-operators-7pqqn" Feb 16 22:51:48 crc kubenswrapper[4865]: I0216 22:51:48.330317 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vf22k\" (UniqueName: \"kubernetes.io/projected/41e68cd1-b151-4f99-b70f-43aced8e8b6d-kube-api-access-vf22k\") pod \"certified-operators-7pqqn\" (UID: \"41e68cd1-b151-4f99-b70f-43aced8e8b6d\") " pod="openshift-marketplace/certified-operators-7pqqn" Feb 16 22:51:48 crc kubenswrapper[4865]: I0216 22:51:48.330462 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41e68cd1-b151-4f99-b70f-43aced8e8b6d-utilities\") pod \"certified-operators-7pqqn\" (UID: \"41e68cd1-b151-4f99-b70f-43aced8e8b6d\") " pod="openshift-marketplace/certified-operators-7pqqn" Feb 16 22:51:48 crc kubenswrapper[4865]: I0216 22:51:48.331237 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41e68cd1-b151-4f99-b70f-43aced8e8b6d-utilities\") pod \"certified-operators-7pqqn\" (UID: \"41e68cd1-b151-4f99-b70f-43aced8e8b6d\") " pod="openshift-marketplace/certified-operators-7pqqn" Feb 16 22:51:48 crc kubenswrapper[4865]: I0216 22:51:48.331798 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/41e68cd1-b151-4f99-b70f-43aced8e8b6d-catalog-content\") pod \"certified-operators-7pqqn\" (UID: \"41e68cd1-b151-4f99-b70f-43aced8e8b6d\") " pod="openshift-marketplace/certified-operators-7pqqn" Feb 16 22:51:48 crc kubenswrapper[4865]: I0216 22:51:48.333082 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-44d2j" Feb 16 22:51:48 crc kubenswrapper[4865]: I0216 22:51:48.348356 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vf22k\" (UniqueName: \"kubernetes.io/projected/41e68cd1-b151-4f99-b70f-43aced8e8b6d-kube-api-access-vf22k\") pod \"certified-operators-7pqqn\" (UID: \"41e68cd1-b151-4f99-b70f-43aced8e8b6d\") " pod="openshift-marketplace/certified-operators-7pqqn" Feb 16 22:51:48 crc kubenswrapper[4865]: I0216 22:51:48.424097 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5196bfb6-4d27-4d41-8310-8efb2b8997bd" path="/var/lib/kubelet/pods/5196bfb6-4d27-4d41-8310-8efb2b8997bd/volumes" Feb 16 22:51:48 crc kubenswrapper[4865]: I0216 22:51:48.425120 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99f280f3-e7be-4a87-b8a9-b097ab14d671" path="/var/lib/kubelet/pods/99f280f3-e7be-4a87-b8a9-b097ab14d671/volumes" Feb 16 22:51:48 crc kubenswrapper[4865]: I0216 22:51:48.425684 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b98a89b4-7f44-411e-a2e3-b260ad781e89" path="/var/lib/kubelet/pods/b98a89b4-7f44-411e-a2e3-b260ad781e89/volumes" Feb 16 22:51:48 crc kubenswrapper[4865]: I0216 22:51:48.426956 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bea6b458-5aaa-4764-9f82-24ceff943498" path="/var/lib/kubelet/pods/bea6b458-5aaa-4764-9f82-24ceff943498/volumes" Feb 16 22:51:48 crc kubenswrapper[4865]: I0216 22:51:48.427732 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6cfa25f-5974-4b2e-9df0-b0e98112b561" 
path="/var/lib/kubelet/pods/f6cfa25f-5974-4b2e-9df0-b0e98112b561/volumes" Feb 16 22:51:48 crc kubenswrapper[4865]: I0216 22:51:48.530863 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7pqqn" Feb 16 22:51:48 crc kubenswrapper[4865]: I0216 22:51:48.546329 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-44d2j"] Feb 16 22:51:48 crc kubenswrapper[4865]: W0216 22:51:48.568107 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf1e60e96_48c4_4e5e_9561_1bc4ca0aa959.slice/crio-52052eff7fc99144f5f493080b0d4a8c8f1636715f0d76e71669d2bc5b9d1375 WatchSource:0}: Error finding container 52052eff7fc99144f5f493080b0d4a8c8f1636715f0d76e71669d2bc5b9d1375: Status 404 returned error can't find the container with id 52052eff7fc99144f5f493080b0d4a8c8f1636715f0d76e71669d2bc5b9d1375 Feb 16 22:51:48 crc kubenswrapper[4865]: I0216 22:51:48.946461 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7pqqn"] Feb 16 22:51:48 crc kubenswrapper[4865]: W0216 22:51:48.951004 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41e68cd1_b151_4f99_b70f_43aced8e8b6d.slice/crio-9ebdb0e801ee99e4757e3d5f5cd057a246471eec0be82a5c7d6fdc576ee1c02d WatchSource:0}: Error finding container 9ebdb0e801ee99e4757e3d5f5cd057a246471eec0be82a5c7d6fdc576ee1c02d: Status 404 returned error can't find the container with id 9ebdb0e801ee99e4757e3d5f5cd057a246471eec0be82a5c7d6fdc576ee1c02d Feb 16 22:51:49 crc kubenswrapper[4865]: I0216 22:51:49.453736 4865 generic.go:334] "Generic (PLEG): container finished" podID="f1e60e96-48c4-4e5e-9561-1bc4ca0aa959" containerID="b33233b16f6f16b55afc8396c67ebec94f50344071588cd4959b5d27b33c0217" exitCode=0 Feb 16 22:51:49 crc kubenswrapper[4865]: I0216 
22:51:49.453829 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-44d2j" event={"ID":"f1e60e96-48c4-4e5e-9561-1bc4ca0aa959","Type":"ContainerDied","Data":"b33233b16f6f16b55afc8396c67ebec94f50344071588cd4959b5d27b33c0217"} Feb 16 22:51:49 crc kubenswrapper[4865]: I0216 22:51:49.453871 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-44d2j" event={"ID":"f1e60e96-48c4-4e5e-9561-1bc4ca0aa959","Type":"ContainerStarted","Data":"52052eff7fc99144f5f493080b0d4a8c8f1636715f0d76e71669d2bc5b9d1375"} Feb 16 22:51:49 crc kubenswrapper[4865]: I0216 22:51:49.460233 4865 generic.go:334] "Generic (PLEG): container finished" podID="41e68cd1-b151-4f99-b70f-43aced8e8b6d" containerID="15ae52dd823a7493e6ce8f21aa3afd7ade7c124978c13acd3244f65ad4b3382e" exitCode=0 Feb 16 22:51:49 crc kubenswrapper[4865]: I0216 22:51:49.460263 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7pqqn" event={"ID":"41e68cd1-b151-4f99-b70f-43aced8e8b6d","Type":"ContainerDied","Data":"15ae52dd823a7493e6ce8f21aa3afd7ade7c124978c13acd3244f65ad4b3382e"} Feb 16 22:51:49 crc kubenswrapper[4865]: I0216 22:51:49.460340 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7pqqn" event={"ID":"41e68cd1-b151-4f99-b70f-43aced8e8b6d","Type":"ContainerStarted","Data":"9ebdb0e801ee99e4757e3d5f5cd057a246471eec0be82a5c7d6fdc576ee1c02d"} Feb 16 22:51:50 crc kubenswrapper[4865]: I0216 22:51:50.411510 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pvbx9"] Feb 16 22:51:50 crc kubenswrapper[4865]: I0216 22:51:50.414840 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pvbx9" Feb 16 22:51:50 crc kubenswrapper[4865]: I0216 22:51:50.422581 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 16 22:51:50 crc kubenswrapper[4865]: I0216 22:51:50.427652 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pvbx9"] Feb 16 22:51:50 crc kubenswrapper[4865]: I0216 22:51:50.458865 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5d5e784-ef08-461d-87c1-f7c1fbe0dcce-utilities\") pod \"community-operators-pvbx9\" (UID: \"d5d5e784-ef08-461d-87c1-f7c1fbe0dcce\") " pod="openshift-marketplace/community-operators-pvbx9" Feb 16 22:51:50 crc kubenswrapper[4865]: I0216 22:51:50.459543 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5d5e784-ef08-461d-87c1-f7c1fbe0dcce-catalog-content\") pod \"community-operators-pvbx9\" (UID: \"d5d5e784-ef08-461d-87c1-f7c1fbe0dcce\") " pod="openshift-marketplace/community-operators-pvbx9" Feb 16 22:51:50 crc kubenswrapper[4865]: I0216 22:51:50.459688 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w78gp\" (UniqueName: \"kubernetes.io/projected/d5d5e784-ef08-461d-87c1-f7c1fbe0dcce-kube-api-access-w78gp\") pod \"community-operators-pvbx9\" (UID: \"d5d5e784-ef08-461d-87c1-f7c1fbe0dcce\") " pod="openshift-marketplace/community-operators-pvbx9" Feb 16 22:51:50 crc kubenswrapper[4865]: I0216 22:51:50.467786 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-44d2j" 
event={"ID":"f1e60e96-48c4-4e5e-9561-1bc4ca0aa959","Type":"ContainerStarted","Data":"763db1ebd96c01e8bd81be7de79eb99c5c5fdf195c4f4013e36e8d238ed7d48c"} Feb 16 22:51:50 crc kubenswrapper[4865]: I0216 22:51:50.469633 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7pqqn" event={"ID":"41e68cd1-b151-4f99-b70f-43aced8e8b6d","Type":"ContainerStarted","Data":"666cf2e80adc4b5478e490922179b54c7b0abb03b6750e4e1032fce8e4753f76"} Feb 16 22:51:50 crc kubenswrapper[4865]: I0216 22:51:50.561258 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w78gp\" (UniqueName: \"kubernetes.io/projected/d5d5e784-ef08-461d-87c1-f7c1fbe0dcce-kube-api-access-w78gp\") pod \"community-operators-pvbx9\" (UID: \"d5d5e784-ef08-461d-87c1-f7c1fbe0dcce\") " pod="openshift-marketplace/community-operators-pvbx9" Feb 16 22:51:50 crc kubenswrapper[4865]: I0216 22:51:50.561475 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5d5e784-ef08-461d-87c1-f7c1fbe0dcce-utilities\") pod \"community-operators-pvbx9\" (UID: \"d5d5e784-ef08-461d-87c1-f7c1fbe0dcce\") " pod="openshift-marketplace/community-operators-pvbx9" Feb 16 22:51:50 crc kubenswrapper[4865]: I0216 22:51:50.561557 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5d5e784-ef08-461d-87c1-f7c1fbe0dcce-catalog-content\") pod \"community-operators-pvbx9\" (UID: \"d5d5e784-ef08-461d-87c1-f7c1fbe0dcce\") " pod="openshift-marketplace/community-operators-pvbx9" Feb 16 22:51:50 crc kubenswrapper[4865]: I0216 22:51:50.564781 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5d5e784-ef08-461d-87c1-f7c1fbe0dcce-utilities\") pod \"community-operators-pvbx9\" (UID: \"d5d5e784-ef08-461d-87c1-f7c1fbe0dcce\") " 
pod="openshift-marketplace/community-operators-pvbx9" Feb 16 22:51:50 crc kubenswrapper[4865]: I0216 22:51:50.564917 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5d5e784-ef08-461d-87c1-f7c1fbe0dcce-catalog-content\") pod \"community-operators-pvbx9\" (UID: \"d5d5e784-ef08-461d-87c1-f7c1fbe0dcce\") " pod="openshift-marketplace/community-operators-pvbx9" Feb 16 22:51:50 crc kubenswrapper[4865]: I0216 22:51:50.602086 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w78gp\" (UniqueName: \"kubernetes.io/projected/d5d5e784-ef08-461d-87c1-f7c1fbe0dcce-kube-api-access-w78gp\") pod \"community-operators-pvbx9\" (UID: \"d5d5e784-ef08-461d-87c1-f7c1fbe0dcce\") " pod="openshift-marketplace/community-operators-pvbx9" Feb 16 22:51:50 crc kubenswrapper[4865]: I0216 22:51:50.629571 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-q6rgp"] Feb 16 22:51:50 crc kubenswrapper[4865]: I0216 22:51:50.630859 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q6rgp" Feb 16 22:51:50 crc kubenswrapper[4865]: I0216 22:51:50.633372 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 16 22:51:50 crc kubenswrapper[4865]: I0216 22:51:50.652446 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q6rgp"] Feb 16 22:51:50 crc kubenswrapper[4865]: I0216 22:51:50.662371 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ec6baab-71df-4145-92a7-98fa5a885810-utilities\") pod \"redhat-operators-q6rgp\" (UID: \"7ec6baab-71df-4145-92a7-98fa5a885810\") " pod="openshift-marketplace/redhat-operators-q6rgp" Feb 16 22:51:50 crc kubenswrapper[4865]: I0216 22:51:50.662417 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmchc\" (UniqueName: \"kubernetes.io/projected/7ec6baab-71df-4145-92a7-98fa5a885810-kube-api-access-qmchc\") pod \"redhat-operators-q6rgp\" (UID: \"7ec6baab-71df-4145-92a7-98fa5a885810\") " pod="openshift-marketplace/redhat-operators-q6rgp" Feb 16 22:51:50 crc kubenswrapper[4865]: I0216 22:51:50.662483 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ec6baab-71df-4145-92a7-98fa5a885810-catalog-content\") pod \"redhat-operators-q6rgp\" (UID: \"7ec6baab-71df-4145-92a7-98fa5a885810\") " pod="openshift-marketplace/redhat-operators-q6rgp" Feb 16 22:51:50 crc kubenswrapper[4865]: I0216 22:51:50.744071 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pvbx9" Feb 16 22:51:50 crc kubenswrapper[4865]: I0216 22:51:50.763969 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ec6baab-71df-4145-92a7-98fa5a885810-catalog-content\") pod \"redhat-operators-q6rgp\" (UID: \"7ec6baab-71df-4145-92a7-98fa5a885810\") " pod="openshift-marketplace/redhat-operators-q6rgp" Feb 16 22:51:50 crc kubenswrapper[4865]: I0216 22:51:50.765105 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ec6baab-71df-4145-92a7-98fa5a885810-utilities\") pod \"redhat-operators-q6rgp\" (UID: \"7ec6baab-71df-4145-92a7-98fa5a885810\") " pod="openshift-marketplace/redhat-operators-q6rgp" Feb 16 22:51:50 crc kubenswrapper[4865]: I0216 22:51:50.765157 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmchc\" (UniqueName: \"kubernetes.io/projected/7ec6baab-71df-4145-92a7-98fa5a885810-kube-api-access-qmchc\") pod \"redhat-operators-q6rgp\" (UID: \"7ec6baab-71df-4145-92a7-98fa5a885810\") " pod="openshift-marketplace/redhat-operators-q6rgp" Feb 16 22:51:50 crc kubenswrapper[4865]: I0216 22:51:50.768889 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ec6baab-71df-4145-92a7-98fa5a885810-utilities\") pod \"redhat-operators-q6rgp\" (UID: \"7ec6baab-71df-4145-92a7-98fa5a885810\") " pod="openshift-marketplace/redhat-operators-q6rgp" Feb 16 22:51:50 crc kubenswrapper[4865]: I0216 22:51:50.769559 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ec6baab-71df-4145-92a7-98fa5a885810-catalog-content\") pod \"redhat-operators-q6rgp\" (UID: \"7ec6baab-71df-4145-92a7-98fa5a885810\") " 
pod="openshift-marketplace/redhat-operators-q6rgp" Feb 16 22:51:50 crc kubenswrapper[4865]: I0216 22:51:50.785718 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmchc\" (UniqueName: \"kubernetes.io/projected/7ec6baab-71df-4145-92a7-98fa5a885810-kube-api-access-qmchc\") pod \"redhat-operators-q6rgp\" (UID: \"7ec6baab-71df-4145-92a7-98fa5a885810\") " pod="openshift-marketplace/redhat-operators-q6rgp" Feb 16 22:51:50 crc kubenswrapper[4865]: I0216 22:51:50.962411 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pvbx9"] Feb 16 22:51:50 crc kubenswrapper[4865]: W0216 22:51:50.967589 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd5d5e784_ef08_461d_87c1_f7c1fbe0dcce.slice/crio-206a8eb92efdf876419791749dbb0de0afe46c1dd03580c95c0b6ed3cb336e76 WatchSource:0}: Error finding container 206a8eb92efdf876419791749dbb0de0afe46c1dd03580c95c0b6ed3cb336e76: Status 404 returned error can't find the container with id 206a8eb92efdf876419791749dbb0de0afe46c1dd03580c95c0b6ed3cb336e76 Feb 16 22:51:51 crc kubenswrapper[4865]: I0216 22:51:51.082587 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q6rgp" Feb 16 22:51:51 crc kubenswrapper[4865]: I0216 22:51:51.282013 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q6rgp"] Feb 16 22:51:51 crc kubenswrapper[4865]: I0216 22:51:51.486547 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pvbx9" event={"ID":"d5d5e784-ef08-461d-87c1-f7c1fbe0dcce","Type":"ContainerDied","Data":"88e6ca976b1441cef58bcea6e1815a3d8d73a6e2b11392608085b6cb55252116"} Feb 16 22:51:51 crc kubenswrapper[4865]: I0216 22:51:51.487057 4865 generic.go:334] "Generic (PLEG): container finished" podID="d5d5e784-ef08-461d-87c1-f7c1fbe0dcce" containerID="88e6ca976b1441cef58bcea6e1815a3d8d73a6e2b11392608085b6cb55252116" exitCode=0 Feb 16 22:51:51 crc kubenswrapper[4865]: I0216 22:51:51.487128 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pvbx9" event={"ID":"d5d5e784-ef08-461d-87c1-f7c1fbe0dcce","Type":"ContainerStarted","Data":"206a8eb92efdf876419791749dbb0de0afe46c1dd03580c95c0b6ed3cb336e76"} Feb 16 22:51:51 crc kubenswrapper[4865]: I0216 22:51:51.488756 4865 generic.go:334] "Generic (PLEG): container finished" podID="7ec6baab-71df-4145-92a7-98fa5a885810" containerID="2714bb35015f3eb02f9953e90cf2a26ea6eeae274f1672c77eb8397ad07e8880" exitCode=0 Feb 16 22:51:51 crc kubenswrapper[4865]: I0216 22:51:51.488841 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q6rgp" event={"ID":"7ec6baab-71df-4145-92a7-98fa5a885810","Type":"ContainerDied","Data":"2714bb35015f3eb02f9953e90cf2a26ea6eeae274f1672c77eb8397ad07e8880"} Feb 16 22:51:51 crc kubenswrapper[4865]: I0216 22:51:51.488871 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q6rgp" 
event={"ID":"7ec6baab-71df-4145-92a7-98fa5a885810","Type":"ContainerStarted","Data":"f086320400843eef7ba9c2651abd6b3d9c4b9ec7f62bc08e3109a97fec048f18"} Feb 16 22:51:51 crc kubenswrapper[4865]: I0216 22:51:51.492753 4865 generic.go:334] "Generic (PLEG): container finished" podID="f1e60e96-48c4-4e5e-9561-1bc4ca0aa959" containerID="763db1ebd96c01e8bd81be7de79eb99c5c5fdf195c4f4013e36e8d238ed7d48c" exitCode=0 Feb 16 22:51:51 crc kubenswrapper[4865]: I0216 22:51:51.492837 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-44d2j" event={"ID":"f1e60e96-48c4-4e5e-9561-1bc4ca0aa959","Type":"ContainerDied","Data":"763db1ebd96c01e8bd81be7de79eb99c5c5fdf195c4f4013e36e8d238ed7d48c"} Feb 16 22:51:51 crc kubenswrapper[4865]: I0216 22:51:51.496422 4865 generic.go:334] "Generic (PLEG): container finished" podID="41e68cd1-b151-4f99-b70f-43aced8e8b6d" containerID="666cf2e80adc4b5478e490922179b54c7b0abb03b6750e4e1032fce8e4753f76" exitCode=0 Feb 16 22:51:51 crc kubenswrapper[4865]: I0216 22:51:51.496475 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7pqqn" event={"ID":"41e68cd1-b151-4f99-b70f-43aced8e8b6d","Type":"ContainerDied","Data":"666cf2e80adc4b5478e490922179b54c7b0abb03b6750e4e1032fce8e4753f76"} Feb 16 22:51:52 crc kubenswrapper[4865]: I0216 22:51:52.503351 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7pqqn" event={"ID":"41e68cd1-b151-4f99-b70f-43aced8e8b6d","Type":"ContainerStarted","Data":"bd4be5f909500520113bf217d8ba38b53cf914530c04c633d131dde9bb80af5b"} Feb 16 22:51:52 crc kubenswrapper[4865]: I0216 22:51:52.506414 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pvbx9" event={"ID":"d5d5e784-ef08-461d-87c1-f7c1fbe0dcce","Type":"ContainerStarted","Data":"ff18d9b8d3f31e40419b960c2a6cccf6c853c604e6e40e38165de48874474697"} Feb 16 22:51:52 crc kubenswrapper[4865]: I0216 
22:51:52.509982 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-44d2j" event={"ID":"f1e60e96-48c4-4e5e-9561-1bc4ca0aa959","Type":"ContainerStarted","Data":"276d35509752fa1c6ae4469f1cd3c6889b8a4c19d502bbb601fd7dd39c3f4024"} Feb 16 22:51:52 crc kubenswrapper[4865]: I0216 22:51:52.534761 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7pqqn" podStartSLOduration=2.047928343 podStartE2EDuration="4.534730961s" podCreationTimestamp="2026-02-16 22:51:48 +0000 UTC" firstStartedPulling="2026-02-16 22:51:49.46210599 +0000 UTC m=+349.785812951" lastFinishedPulling="2026-02-16 22:51:51.948908608 +0000 UTC m=+352.272615569" observedRunningTime="2026-02-16 22:51:52.531114389 +0000 UTC m=+352.854821360" watchObservedRunningTime="2026-02-16 22:51:52.534730961 +0000 UTC m=+352.858437922" Feb 16 22:51:52 crc kubenswrapper[4865]: I0216 22:51:52.559137 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-44d2j" podStartSLOduration=3.076448119 podStartE2EDuration="5.55911402s" podCreationTimestamp="2026-02-16 22:51:47 +0000 UTC" firstStartedPulling="2026-02-16 22:51:49.45573787 +0000 UTC m=+349.779444831" lastFinishedPulling="2026-02-16 22:51:51.938403771 +0000 UTC m=+352.262110732" observedRunningTime="2026-02-16 22:51:52.555905549 +0000 UTC m=+352.879612520" watchObservedRunningTime="2026-02-16 22:51:52.55911402 +0000 UTC m=+352.882820981" Feb 16 22:51:53 crc kubenswrapper[4865]: I0216 22:51:53.519682 4865 generic.go:334] "Generic (PLEG): container finished" podID="7ec6baab-71df-4145-92a7-98fa5a885810" containerID="5ea69fba177c59d861b6cb2b3a7a5cbb13d007a9d7eed3a4ab19a727fed1c23d" exitCode=0 Feb 16 22:51:53 crc kubenswrapper[4865]: I0216 22:51:53.519772 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q6rgp" 
event={"ID":"7ec6baab-71df-4145-92a7-98fa5a885810","Type":"ContainerDied","Data":"5ea69fba177c59d861b6cb2b3a7a5cbb13d007a9d7eed3a4ab19a727fed1c23d"} Feb 16 22:51:53 crc kubenswrapper[4865]: I0216 22:51:53.523191 4865 generic.go:334] "Generic (PLEG): container finished" podID="d5d5e784-ef08-461d-87c1-f7c1fbe0dcce" containerID="ff18d9b8d3f31e40419b960c2a6cccf6c853c604e6e40e38165de48874474697" exitCode=0 Feb 16 22:51:53 crc kubenswrapper[4865]: I0216 22:51:53.523379 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pvbx9" event={"ID":"d5d5e784-ef08-461d-87c1-f7c1fbe0dcce","Type":"ContainerDied","Data":"ff18d9b8d3f31e40419b960c2a6cccf6c853c604e6e40e38165de48874474697"} Feb 16 22:51:54 crc kubenswrapper[4865]: I0216 22:51:54.533459 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pvbx9" event={"ID":"d5d5e784-ef08-461d-87c1-f7c1fbe0dcce","Type":"ContainerStarted","Data":"25aba3a24256045196e05f8b941fcd6fd784d8cd3d45676ffb11fb7a05b10347"} Feb 16 22:51:54 crc kubenswrapper[4865]: I0216 22:51:54.536806 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q6rgp" event={"ID":"7ec6baab-71df-4145-92a7-98fa5a885810","Type":"ContainerStarted","Data":"9623d8898e986738615438ce3da2b73474c3b3efd108bf543ae51c1fa2f367eb"} Feb 16 22:51:54 crc kubenswrapper[4865]: I0216 22:51:54.557475 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pvbx9" podStartSLOduration=2.125300949 podStartE2EDuration="4.557450922s" podCreationTimestamp="2026-02-16 22:51:50 +0000 UTC" firstStartedPulling="2026-02-16 22:51:51.487902689 +0000 UTC m=+351.811609650" lastFinishedPulling="2026-02-16 22:51:53.920052662 +0000 UTC m=+354.243759623" observedRunningTime="2026-02-16 22:51:54.55631962 +0000 UTC m=+354.880026601" watchObservedRunningTime="2026-02-16 22:51:54.557450922 +0000 UTC 
m=+354.881157893" Feb 16 22:51:54 crc kubenswrapper[4865]: I0216 22:51:54.575792 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-q6rgp" podStartSLOduration=2.172280285 podStartE2EDuration="4.575761819s" podCreationTimestamp="2026-02-16 22:51:50 +0000 UTC" firstStartedPulling="2026-02-16 22:51:51.491809519 +0000 UTC m=+351.815516490" lastFinishedPulling="2026-02-16 22:51:53.895291053 +0000 UTC m=+354.218998024" observedRunningTime="2026-02-16 22:51:54.575622115 +0000 UTC m=+354.899329096" watchObservedRunningTime="2026-02-16 22:51:54.575761819 +0000 UTC m=+354.899468780" Feb 16 22:51:58 crc kubenswrapper[4865]: I0216 22:51:58.333846 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-44d2j" Feb 16 22:51:58 crc kubenswrapper[4865]: I0216 22:51:58.334702 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-44d2j" Feb 16 22:51:58 crc kubenswrapper[4865]: I0216 22:51:58.384599 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-44d2j" Feb 16 22:51:58 crc kubenswrapper[4865]: I0216 22:51:58.532344 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7pqqn" Feb 16 22:51:58 crc kubenswrapper[4865]: I0216 22:51:58.532423 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7pqqn" Feb 16 22:51:58 crc kubenswrapper[4865]: I0216 22:51:58.595543 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7pqqn" Feb 16 22:51:58 crc kubenswrapper[4865]: I0216 22:51:58.627863 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-44d2j" Feb 16 22:51:58 crc 
kubenswrapper[4865]: I0216 22:51:58.657362 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7pqqn" Feb 16 22:52:00 crc kubenswrapper[4865]: I0216 22:52:00.744969 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pvbx9" Feb 16 22:52:00 crc kubenswrapper[4865]: I0216 22:52:00.745327 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pvbx9" Feb 16 22:52:00 crc kubenswrapper[4865]: I0216 22:52:00.799271 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pvbx9" Feb 16 22:52:01 crc kubenswrapper[4865]: I0216 22:52:01.083727 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-q6rgp" Feb 16 22:52:01 crc kubenswrapper[4865]: I0216 22:52:01.083850 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-q6rgp" Feb 16 22:52:01 crc kubenswrapper[4865]: I0216 22:52:01.129684 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-q6rgp" Feb 16 22:52:01 crc kubenswrapper[4865]: I0216 22:52:01.627825 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pvbx9" Feb 16 22:52:01 crc kubenswrapper[4865]: I0216 22:52:01.629461 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-q6rgp" Feb 16 22:52:15 crc kubenswrapper[4865]: I0216 22:52:15.663964 4865 patch_prober.go:28] interesting pod/machine-config-daemon-7sl6f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Feb 16 22:52:15 crc kubenswrapper[4865]: I0216 22:52:15.664741 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 22:52:45 crc kubenswrapper[4865]: I0216 22:52:45.664802 4865 patch_prober.go:28] interesting pod/machine-config-daemon-7sl6f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 22:52:45 crc kubenswrapper[4865]: I0216 22:52:45.665602 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 22:52:45 crc kubenswrapper[4865]: I0216 22:52:45.665682 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" Feb 16 22:52:45 crc kubenswrapper[4865]: I0216 22:52:45.666730 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e7de83ac43da38cfccce6848cb70ac59cc1a5534f39d3ff21ed4e0cd830ffbe2"} pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 22:52:45 crc kubenswrapper[4865]: I0216 22:52:45.666853 4865 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" containerName="machine-config-daemon" containerID="cri-o://e7de83ac43da38cfccce6848cb70ac59cc1a5534f39d3ff21ed4e0cd830ffbe2" gracePeriod=600 Feb 16 22:52:45 crc kubenswrapper[4865]: I0216 22:52:45.945648 4865 generic.go:334] "Generic (PLEG): container finished" podID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" containerID="e7de83ac43da38cfccce6848cb70ac59cc1a5534f39d3ff21ed4e0cd830ffbe2" exitCode=0 Feb 16 22:52:45 crc kubenswrapper[4865]: I0216 22:52:45.945711 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" event={"ID":"af5ee041-5763-4a28-9d12-7ba21bbb9dbc","Type":"ContainerDied","Data":"e7de83ac43da38cfccce6848cb70ac59cc1a5534f39d3ff21ed4e0cd830ffbe2"} Feb 16 22:52:45 crc kubenswrapper[4865]: I0216 22:52:45.945762 4865 scope.go:117] "RemoveContainer" containerID="1c0be88c4425b343893860e323d0e9c1a661e3cd0cadad6714da67ebadf57bf7" Feb 16 22:52:46 crc kubenswrapper[4865]: I0216 22:52:46.957001 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" event={"ID":"af5ee041-5763-4a28-9d12-7ba21bbb9dbc","Type":"ContainerStarted","Data":"00c16da253b9e54ad36c0a4b2e600517a82f42ec45115535033160e46448d032"} Feb 16 22:54:45 crc kubenswrapper[4865]: I0216 22:54:45.674159 4865 patch_prober.go:28] interesting pod/machine-config-daemon-7sl6f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 22:54:45 crc kubenswrapper[4865]: I0216 22:54:45.675193 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 22:55:15 crc kubenswrapper[4865]: I0216 22:55:15.665164 4865 patch_prober.go:28] interesting pod/machine-config-daemon-7sl6f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 22:55:15 crc kubenswrapper[4865]: I0216 22:55:15.666249 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 22:55:45 crc kubenswrapper[4865]: I0216 22:55:45.664403 4865 patch_prober.go:28] interesting pod/machine-config-daemon-7sl6f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 22:55:45 crc kubenswrapper[4865]: I0216 22:55:45.665809 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 22:55:45 crc kubenswrapper[4865]: I0216 22:55:45.665921 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" Feb 16 22:55:45 crc kubenswrapper[4865]: I0216 22:55:45.667007 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"00c16da253b9e54ad36c0a4b2e600517a82f42ec45115535033160e46448d032"} pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 22:55:45 crc kubenswrapper[4865]: I0216 22:55:45.667117 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" containerName="machine-config-daemon" containerID="cri-o://00c16da253b9e54ad36c0a4b2e600517a82f42ec45115535033160e46448d032" gracePeriod=600 Feb 16 22:55:46 crc kubenswrapper[4865]: I0216 22:55:46.311571 4865 generic.go:334] "Generic (PLEG): container finished" podID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" containerID="00c16da253b9e54ad36c0a4b2e600517a82f42ec45115535033160e46448d032" exitCode=0 Feb 16 22:55:46 crc kubenswrapper[4865]: I0216 22:55:46.311701 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" event={"ID":"af5ee041-5763-4a28-9d12-7ba21bbb9dbc","Type":"ContainerDied","Data":"00c16da253b9e54ad36c0a4b2e600517a82f42ec45115535033160e46448d032"} Feb 16 22:55:46 crc kubenswrapper[4865]: I0216 22:55:46.312146 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" event={"ID":"af5ee041-5763-4a28-9d12-7ba21bbb9dbc","Type":"ContainerStarted","Data":"d2cb7613b13b28970e25e6a68bc3fc59b2c15f74fd56553d326fa4f7962e6c46"} Feb 16 22:55:46 crc kubenswrapper[4865]: I0216 22:55:46.312190 4865 scope.go:117] "RemoveContainer" containerID="e7de83ac43da38cfccce6848cb70ac59cc1a5534f39d3ff21ed4e0cd830ffbe2" Feb 16 22:56:33 crc kubenswrapper[4865]: I0216 22:56:33.900859 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-vzmtx"] Feb 16 22:56:33 crc kubenswrapper[4865]: I0216 
22:56:33.903112 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-vzmtx" Feb 16 22:56:33 crc kubenswrapper[4865]: I0216 22:56:33.922394 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-vzmtx"] Feb 16 22:56:34 crc kubenswrapper[4865]: I0216 22:56:34.055224 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d16a200-e468-410e-9911-42aadc4a8786-trusted-ca\") pod \"image-registry-66df7c8f76-vzmtx\" (UID: \"7d16a200-e468-410e-9911-42aadc4a8786\") " pod="openshift-image-registry/image-registry-66df7c8f76-vzmtx" Feb 16 22:56:34 crc kubenswrapper[4865]: I0216 22:56:34.055357 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7d16a200-e468-410e-9911-42aadc4a8786-registry-certificates\") pod \"image-registry-66df7c8f76-vzmtx\" (UID: \"7d16a200-e468-410e-9911-42aadc4a8786\") " pod="openshift-image-registry/image-registry-66df7c8f76-vzmtx" Feb 16 22:56:34 crc kubenswrapper[4865]: I0216 22:56:34.055428 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9k2tt\" (UniqueName: \"kubernetes.io/projected/7d16a200-e468-410e-9911-42aadc4a8786-kube-api-access-9k2tt\") pod \"image-registry-66df7c8f76-vzmtx\" (UID: \"7d16a200-e468-410e-9911-42aadc4a8786\") " pod="openshift-image-registry/image-registry-66df7c8f76-vzmtx" Feb 16 22:56:34 crc kubenswrapper[4865]: I0216 22:56:34.055487 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7d16a200-e468-410e-9911-42aadc4a8786-installation-pull-secrets\") pod \"image-registry-66df7c8f76-vzmtx\" (UID: 
\"7d16a200-e468-410e-9911-42aadc4a8786\") " pod="openshift-image-registry/image-registry-66df7c8f76-vzmtx" Feb 16 22:56:34 crc kubenswrapper[4865]: I0216 22:56:34.055619 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7d16a200-e468-410e-9911-42aadc4a8786-bound-sa-token\") pod \"image-registry-66df7c8f76-vzmtx\" (UID: \"7d16a200-e468-410e-9911-42aadc4a8786\") " pod="openshift-image-registry/image-registry-66df7c8f76-vzmtx" Feb 16 22:56:34 crc kubenswrapper[4865]: I0216 22:56:34.055713 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-vzmtx\" (UID: \"7d16a200-e468-410e-9911-42aadc4a8786\") " pod="openshift-image-registry/image-registry-66df7c8f76-vzmtx" Feb 16 22:56:34 crc kubenswrapper[4865]: I0216 22:56:34.055746 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7d16a200-e468-410e-9911-42aadc4a8786-ca-trust-extracted\") pod \"image-registry-66df7c8f76-vzmtx\" (UID: \"7d16a200-e468-410e-9911-42aadc4a8786\") " pod="openshift-image-registry/image-registry-66df7c8f76-vzmtx" Feb 16 22:56:34 crc kubenswrapper[4865]: I0216 22:56:34.055774 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7d16a200-e468-410e-9911-42aadc4a8786-registry-tls\") pod \"image-registry-66df7c8f76-vzmtx\" (UID: \"7d16a200-e468-410e-9911-42aadc4a8786\") " pod="openshift-image-registry/image-registry-66df7c8f76-vzmtx" Feb 16 22:56:34 crc kubenswrapper[4865]: I0216 22:56:34.084631 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-vzmtx\" (UID: \"7d16a200-e468-410e-9911-42aadc4a8786\") " pod="openshift-image-registry/image-registry-66df7c8f76-vzmtx" Feb 16 22:56:34 crc kubenswrapper[4865]: I0216 22:56:34.156915 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7d16a200-e468-410e-9911-42aadc4a8786-registry-certificates\") pod \"image-registry-66df7c8f76-vzmtx\" (UID: \"7d16a200-e468-410e-9911-42aadc4a8786\") " pod="openshift-image-registry/image-registry-66df7c8f76-vzmtx" Feb 16 22:56:34 crc kubenswrapper[4865]: I0216 22:56:34.156986 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9k2tt\" (UniqueName: \"kubernetes.io/projected/7d16a200-e468-410e-9911-42aadc4a8786-kube-api-access-9k2tt\") pod \"image-registry-66df7c8f76-vzmtx\" (UID: \"7d16a200-e468-410e-9911-42aadc4a8786\") " pod="openshift-image-registry/image-registry-66df7c8f76-vzmtx" Feb 16 22:56:34 crc kubenswrapper[4865]: I0216 22:56:34.157039 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7d16a200-e468-410e-9911-42aadc4a8786-installation-pull-secrets\") pod \"image-registry-66df7c8f76-vzmtx\" (UID: \"7d16a200-e468-410e-9911-42aadc4a8786\") " pod="openshift-image-registry/image-registry-66df7c8f76-vzmtx" Feb 16 22:56:34 crc kubenswrapper[4865]: I0216 22:56:34.157105 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7d16a200-e468-410e-9911-42aadc4a8786-bound-sa-token\") pod \"image-registry-66df7c8f76-vzmtx\" (UID: \"7d16a200-e468-410e-9911-42aadc4a8786\") " pod="openshift-image-registry/image-registry-66df7c8f76-vzmtx" Feb 
16 22:56:34 crc kubenswrapper[4865]: I0216 22:56:34.157175 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7d16a200-e468-410e-9911-42aadc4a8786-ca-trust-extracted\") pod \"image-registry-66df7c8f76-vzmtx\" (UID: \"7d16a200-e468-410e-9911-42aadc4a8786\") " pod="openshift-image-registry/image-registry-66df7c8f76-vzmtx" Feb 16 22:56:34 crc kubenswrapper[4865]: I0216 22:56:34.157210 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7d16a200-e468-410e-9911-42aadc4a8786-registry-tls\") pod \"image-registry-66df7c8f76-vzmtx\" (UID: \"7d16a200-e468-410e-9911-42aadc4a8786\") " pod="openshift-image-registry/image-registry-66df7c8f76-vzmtx" Feb 16 22:56:34 crc kubenswrapper[4865]: I0216 22:56:34.157252 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d16a200-e468-410e-9911-42aadc4a8786-trusted-ca\") pod \"image-registry-66df7c8f76-vzmtx\" (UID: \"7d16a200-e468-410e-9911-42aadc4a8786\") " pod="openshift-image-registry/image-registry-66df7c8f76-vzmtx" Feb 16 22:56:34 crc kubenswrapper[4865]: I0216 22:56:34.157971 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7d16a200-e468-410e-9911-42aadc4a8786-ca-trust-extracted\") pod \"image-registry-66df7c8f76-vzmtx\" (UID: \"7d16a200-e468-410e-9911-42aadc4a8786\") " pod="openshift-image-registry/image-registry-66df7c8f76-vzmtx" Feb 16 22:56:34 crc kubenswrapper[4865]: I0216 22:56:34.158670 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7d16a200-e468-410e-9911-42aadc4a8786-registry-certificates\") pod \"image-registry-66df7c8f76-vzmtx\" (UID: \"7d16a200-e468-410e-9911-42aadc4a8786\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-vzmtx" Feb 16 22:56:34 crc kubenswrapper[4865]: I0216 22:56:34.159484 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d16a200-e468-410e-9911-42aadc4a8786-trusted-ca\") pod \"image-registry-66df7c8f76-vzmtx\" (UID: \"7d16a200-e468-410e-9911-42aadc4a8786\") " pod="openshift-image-registry/image-registry-66df7c8f76-vzmtx" Feb 16 22:56:34 crc kubenswrapper[4865]: I0216 22:56:34.165999 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7d16a200-e468-410e-9911-42aadc4a8786-registry-tls\") pod \"image-registry-66df7c8f76-vzmtx\" (UID: \"7d16a200-e468-410e-9911-42aadc4a8786\") " pod="openshift-image-registry/image-registry-66df7c8f76-vzmtx" Feb 16 22:56:34 crc kubenswrapper[4865]: I0216 22:56:34.166444 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7d16a200-e468-410e-9911-42aadc4a8786-installation-pull-secrets\") pod \"image-registry-66df7c8f76-vzmtx\" (UID: \"7d16a200-e468-410e-9911-42aadc4a8786\") " pod="openshift-image-registry/image-registry-66df7c8f76-vzmtx" Feb 16 22:56:34 crc kubenswrapper[4865]: I0216 22:56:34.182212 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9k2tt\" (UniqueName: \"kubernetes.io/projected/7d16a200-e468-410e-9911-42aadc4a8786-kube-api-access-9k2tt\") pod \"image-registry-66df7c8f76-vzmtx\" (UID: \"7d16a200-e468-410e-9911-42aadc4a8786\") " pod="openshift-image-registry/image-registry-66df7c8f76-vzmtx" Feb 16 22:56:34 crc kubenswrapper[4865]: I0216 22:56:34.188739 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7d16a200-e468-410e-9911-42aadc4a8786-bound-sa-token\") pod \"image-registry-66df7c8f76-vzmtx\" (UID: 
\"7d16a200-e468-410e-9911-42aadc4a8786\") " pod="openshift-image-registry/image-registry-66df7c8f76-vzmtx" Feb 16 22:56:34 crc kubenswrapper[4865]: I0216 22:56:34.225396 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-vzmtx" Feb 16 22:56:34 crc kubenswrapper[4865]: I0216 22:56:34.460786 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-vzmtx"] Feb 16 22:56:34 crc kubenswrapper[4865]: I0216 22:56:34.695387 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-vzmtx" event={"ID":"7d16a200-e468-410e-9911-42aadc4a8786","Type":"ContainerStarted","Data":"721bcad6a1c44a184e157cbf6cd66bd6ac4532bb8bca6e7a40a2ef806b21bfae"} Feb 16 22:56:34 crc kubenswrapper[4865]: I0216 22:56:34.697472 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-vzmtx" event={"ID":"7d16a200-e468-410e-9911-42aadc4a8786","Type":"ContainerStarted","Data":"dce9e253ac55eb61cebeb8a3e891806b4d6d1405a74f8b23739537086fd52278"} Feb 16 22:56:34 crc kubenswrapper[4865]: I0216 22:56:34.699454 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-vzmtx" Feb 16 22:56:34 crc kubenswrapper[4865]: I0216 22:56:34.731173 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-vzmtx" podStartSLOduration=1.731128051 podStartE2EDuration="1.731128051s" podCreationTimestamp="2026-02-16 22:56:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:56:34.723869192 +0000 UTC m=+635.047576163" watchObservedRunningTime="2026-02-16 22:56:34.731128051 +0000 UTC m=+635.054835062" Feb 16 22:56:40 crc kubenswrapper[4865]: I0216 
22:56:40.809648 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-8wctr"] Feb 16 22:56:40 crc kubenswrapper[4865]: I0216 22:56:40.811632 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-8wctr" Feb 16 22:56:40 crc kubenswrapper[4865]: I0216 22:56:40.814814 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-nwktn"] Feb 16 22:56:40 crc kubenswrapper[4865]: I0216 22:56:40.816015 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-nwktn" Feb 16 22:56:40 crc kubenswrapper[4865]: I0216 22:56:40.817540 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 16 22:56:40 crc kubenswrapper[4865]: I0216 22:56:40.817796 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 16 22:56:40 crc kubenswrapper[4865]: I0216 22:56:40.818231 4865 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-d4dpl" Feb 16 22:56:40 crc kubenswrapper[4865]: I0216 22:56:40.818827 4865 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-qpz67" Feb 16 22:56:40 crc kubenswrapper[4865]: I0216 22:56:40.833650 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-nwktn"] Feb 16 22:56:40 crc kubenswrapper[4865]: I0216 22:56:40.842796 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-8wctr"] Feb 16 22:56:40 crc kubenswrapper[4865]: I0216 22:56:40.847071 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-pzb4b"] Feb 16 22:56:40 crc kubenswrapper[4865]: I0216 22:56:40.847923 4865 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-pzb4b" Feb 16 22:56:40 crc kubenswrapper[4865]: I0216 22:56:40.850529 4865 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-75p5m" Feb 16 22:56:40 crc kubenswrapper[4865]: I0216 22:56:40.869611 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-pzb4b"] Feb 16 22:56:40 crc kubenswrapper[4865]: I0216 22:56:40.978357 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-478k4\" (UniqueName: \"kubernetes.io/projected/bbc85b0c-aae5-4657-8c81-fed6b49e5d5d-kube-api-access-478k4\") pod \"cert-manager-webhook-687f57d79b-pzb4b\" (UID: \"bbc85b0c-aae5-4657-8c81-fed6b49e5d5d\") " pod="cert-manager/cert-manager-webhook-687f57d79b-pzb4b" Feb 16 22:56:40 crc kubenswrapper[4865]: I0216 22:56:40.978429 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnbf5\" (UniqueName: \"kubernetes.io/projected/799f3815-d78f-449e-b798-63000e62d953-kube-api-access-mnbf5\") pod \"cert-manager-cainjector-cf98fcc89-nwktn\" (UID: \"799f3815-d78f-449e-b798-63000e62d953\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-nwktn" Feb 16 22:56:40 crc kubenswrapper[4865]: I0216 22:56:40.978478 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tf7c6\" (UniqueName: \"kubernetes.io/projected/bde79990-dee1-4694-bf0c-f569702b84c6-kube-api-access-tf7c6\") pod \"cert-manager-858654f9db-8wctr\" (UID: \"bde79990-dee1-4694-bf0c-f569702b84c6\") " pod="cert-manager/cert-manager-858654f9db-8wctr" Feb 16 22:56:41 crc kubenswrapper[4865]: I0216 22:56:41.079969 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-478k4\" (UniqueName: 
\"kubernetes.io/projected/bbc85b0c-aae5-4657-8c81-fed6b49e5d5d-kube-api-access-478k4\") pod \"cert-manager-webhook-687f57d79b-pzb4b\" (UID: \"bbc85b0c-aae5-4657-8c81-fed6b49e5d5d\") " pod="cert-manager/cert-manager-webhook-687f57d79b-pzb4b" Feb 16 22:56:41 crc kubenswrapper[4865]: I0216 22:56:41.080155 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnbf5\" (UniqueName: \"kubernetes.io/projected/799f3815-d78f-449e-b798-63000e62d953-kube-api-access-mnbf5\") pod \"cert-manager-cainjector-cf98fcc89-nwktn\" (UID: \"799f3815-d78f-449e-b798-63000e62d953\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-nwktn" Feb 16 22:56:41 crc kubenswrapper[4865]: I0216 22:56:41.080372 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tf7c6\" (UniqueName: \"kubernetes.io/projected/bde79990-dee1-4694-bf0c-f569702b84c6-kube-api-access-tf7c6\") pod \"cert-manager-858654f9db-8wctr\" (UID: \"bde79990-dee1-4694-bf0c-f569702b84c6\") " pod="cert-manager/cert-manager-858654f9db-8wctr" Feb 16 22:56:41 crc kubenswrapper[4865]: I0216 22:56:41.109786 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnbf5\" (UniqueName: \"kubernetes.io/projected/799f3815-d78f-449e-b798-63000e62d953-kube-api-access-mnbf5\") pod \"cert-manager-cainjector-cf98fcc89-nwktn\" (UID: \"799f3815-d78f-449e-b798-63000e62d953\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-nwktn" Feb 16 22:56:41 crc kubenswrapper[4865]: I0216 22:56:41.111116 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tf7c6\" (UniqueName: \"kubernetes.io/projected/bde79990-dee1-4694-bf0c-f569702b84c6-kube-api-access-tf7c6\") pod \"cert-manager-858654f9db-8wctr\" (UID: \"bde79990-dee1-4694-bf0c-f569702b84c6\") " pod="cert-manager/cert-manager-858654f9db-8wctr" Feb 16 22:56:41 crc kubenswrapper[4865]: I0216 22:56:41.111297 4865 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-478k4\" (UniqueName: \"kubernetes.io/projected/bbc85b0c-aae5-4657-8c81-fed6b49e5d5d-kube-api-access-478k4\") pod \"cert-manager-webhook-687f57d79b-pzb4b\" (UID: \"bbc85b0c-aae5-4657-8c81-fed6b49e5d5d\") " pod="cert-manager/cert-manager-webhook-687f57d79b-pzb4b" Feb 16 22:56:41 crc kubenswrapper[4865]: I0216 22:56:41.161520 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-8wctr" Feb 16 22:56:41 crc kubenswrapper[4865]: I0216 22:56:41.172617 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-nwktn" Feb 16 22:56:41 crc kubenswrapper[4865]: I0216 22:56:41.179768 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-pzb4b" Feb 16 22:56:41 crc kubenswrapper[4865]: I0216 22:56:41.405402 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-8wctr"] Feb 16 22:56:41 crc kubenswrapper[4865]: I0216 22:56:41.418762 4865 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 22:56:41 crc kubenswrapper[4865]: I0216 22:56:41.447033 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-nwktn"] Feb 16 22:56:41 crc kubenswrapper[4865]: W0216 22:56:41.451237 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod799f3815_d78f_449e_b798_63000e62d953.slice/crio-c04a4fa0be0d0c3cb73994d13b62bd7147ba2cf036eb4b7b90ecfdb2bd5f807c WatchSource:0}: Error finding container c04a4fa0be0d0c3cb73994d13b62bd7147ba2cf036eb4b7b90ecfdb2bd5f807c: Status 404 returned error can't find the container with id c04a4fa0be0d0c3cb73994d13b62bd7147ba2cf036eb4b7b90ecfdb2bd5f807c Feb 16 22:56:41 crc kubenswrapper[4865]: I0216 
22:56:41.478831 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-pzb4b"] Feb 16 22:56:41 crc kubenswrapper[4865]: W0216 22:56:41.486974 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbbc85b0c_aae5_4657_8c81_fed6b49e5d5d.slice/crio-eafa4604b81b740d971f8685736c0ba091a94b7370005be5744ba0ba38e390e8 WatchSource:0}: Error finding container eafa4604b81b740d971f8685736c0ba091a94b7370005be5744ba0ba38e390e8: Status 404 returned error can't find the container with id eafa4604b81b740d971f8685736c0ba091a94b7370005be5744ba0ba38e390e8 Feb 16 22:56:41 crc kubenswrapper[4865]: I0216 22:56:41.765970 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-nwktn" event={"ID":"799f3815-d78f-449e-b798-63000e62d953","Type":"ContainerStarted","Data":"c04a4fa0be0d0c3cb73994d13b62bd7147ba2cf036eb4b7b90ecfdb2bd5f807c"} Feb 16 22:56:41 crc kubenswrapper[4865]: I0216 22:56:41.767918 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-8wctr" event={"ID":"bde79990-dee1-4694-bf0c-f569702b84c6","Type":"ContainerStarted","Data":"dbf362f1579bddf1b7d05614d7a9764299ba94db9cc4cce8842d07cb366ae5d6"} Feb 16 22:56:41 crc kubenswrapper[4865]: I0216 22:56:41.769144 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-pzb4b" event={"ID":"bbc85b0c-aae5-4657-8c81-fed6b49e5d5d","Type":"ContainerStarted","Data":"eafa4604b81b740d971f8685736c0ba091a94b7370005be5744ba0ba38e390e8"} Feb 16 22:56:45 crc kubenswrapper[4865]: I0216 22:56:45.801402 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-pzb4b" event={"ID":"bbc85b0c-aae5-4657-8c81-fed6b49e5d5d","Type":"ContainerStarted","Data":"b9e8b905a7ed090c753e8a5df61cabaeda2927c1e5e1d20d647b354f0e60420c"} Feb 16 22:56:45 crc 
kubenswrapper[4865]: I0216 22:56:45.802802 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-pzb4b" Feb 16 22:56:45 crc kubenswrapper[4865]: I0216 22:56:45.810096 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-nwktn" event={"ID":"799f3815-d78f-449e-b798-63000e62d953","Type":"ContainerStarted","Data":"39b646781201d1998060b3e3f64d360361ced0348401cd60a619b63a482363f1"} Feb 16 22:56:45 crc kubenswrapper[4865]: I0216 22:56:45.812183 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-8wctr" event={"ID":"bde79990-dee1-4694-bf0c-f569702b84c6","Type":"ContainerStarted","Data":"e8e6cb85920227fb58d9818b93ab90ff4235c2412e8462ebeffde8edfd8d0e10"} Feb 16 22:56:45 crc kubenswrapper[4865]: I0216 22:56:45.828646 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-pzb4b" podStartSLOduration=2.1513571479999998 podStartE2EDuration="5.828622326s" podCreationTimestamp="2026-02-16 22:56:40 +0000 UTC" firstStartedPulling="2026-02-16 22:56:41.489816144 +0000 UTC m=+641.813523105" lastFinishedPulling="2026-02-16 22:56:45.167081312 +0000 UTC m=+645.490788283" observedRunningTime="2026-02-16 22:56:45.821494071 +0000 UTC m=+646.145201042" watchObservedRunningTime="2026-02-16 22:56:45.828622326 +0000 UTC m=+646.152329297" Feb 16 22:56:45 crc kubenswrapper[4865]: I0216 22:56:45.848098 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-nwktn" podStartSLOduration=2.121877659 podStartE2EDuration="5.848070556s" podCreationTimestamp="2026-02-16 22:56:40 +0000 UTC" firstStartedPulling="2026-02-16 22:56:41.45356851 +0000 UTC m=+641.777275471" lastFinishedPulling="2026-02-16 22:56:45.179761397 +0000 UTC m=+645.503468368" observedRunningTime="2026-02-16 22:56:45.842417894 +0000 UTC 
m=+646.166124895" watchObservedRunningTime="2026-02-16 22:56:45.848070556 +0000 UTC m=+646.171777527" Feb 16 22:56:45 crc kubenswrapper[4865]: I0216 22:56:45.858551 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-8wctr" podStartSLOduration=2.108576417 podStartE2EDuration="5.858517087s" podCreationTimestamp="2026-02-16 22:56:40 +0000 UTC" firstStartedPulling="2026-02-16 22:56:41.418441569 +0000 UTC m=+641.742148530" lastFinishedPulling="2026-02-16 22:56:45.168382229 +0000 UTC m=+645.492089200" observedRunningTime="2026-02-16 22:56:45.857173409 +0000 UTC m=+646.180880390" watchObservedRunningTime="2026-02-16 22:56:45.858517087 +0000 UTC m=+646.182224068" Feb 16 22:56:50 crc kubenswrapper[4865]: I0216 22:56:50.715445 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-v9gjl"] Feb 16 22:56:50 crc kubenswrapper[4865]: I0216 22:56:50.716991 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" podUID="2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" containerName="northd" containerID="cri-o://48168de44ee347280252a85da7abba068922ce7558bd01c8e1d11af13f305dd0" gracePeriod=30 Feb 16 22:56:50 crc kubenswrapper[4865]: I0216 22:56:50.716977 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" podUID="2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" containerName="kube-rbac-proxy-node" containerID="cri-o://f763f86f21e0ae26d6da8694383bf65bf9d2ee2cc3f6331559455f18c1a5ea12" gracePeriod=30 Feb 16 22:56:50 crc kubenswrapper[4865]: I0216 22:56:50.716977 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" podUID="2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://fdf0a2e5286d172d17780b0ba829a7b1365b3fcb130069d7497b207041e54d67" 
gracePeriod=30 Feb 16 22:56:50 crc kubenswrapper[4865]: I0216 22:56:50.717230 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" podUID="2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" containerName="sbdb" containerID="cri-o://dee85f35d78a0b30018c179a53dd39cbb1b0edfa9897290cd13fdc9680461239" gracePeriod=30 Feb 16 22:56:50 crc kubenswrapper[4865]: I0216 22:56:50.717310 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" podUID="2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" containerName="ovn-acl-logging" containerID="cri-o://3fbb61043059110013736d3d693e6d602b199a4c62907436a4eb88aae192cd51" gracePeriod=30 Feb 16 22:56:50 crc kubenswrapper[4865]: I0216 22:56:50.716901 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" podUID="2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" containerName="ovn-controller" containerID="cri-o://decf362b373d3e8cc82caf2afe0405a61ec72f794680345ee6bd4d7acee32cd0" gracePeriod=30 Feb 16 22:56:50 crc kubenswrapper[4865]: I0216 22:56:50.717460 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" podUID="2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" containerName="nbdb" containerID="cri-o://4a1d5f61857de97cd302040f99ba0cf8f6aa30fa344ee9bb5d486222bb59280f" gracePeriod=30 Feb 16 22:56:50 crc kubenswrapper[4865]: I0216 22:56:50.784883 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" podUID="2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" containerName="ovnkube-controller" containerID="cri-o://fa26c8d94eede6f6ed5e13a46ffa4f979642a1029ecd0b9fa73572eb3d73152d" gracePeriod=30 Feb 16 22:56:50 crc kubenswrapper[4865]: I0216 22:56:50.866534 4865 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-tqmsq_518e6107-6873-4bd2-86a6-e422763483ec/kube-multus/2.log" Feb 16 22:56:50 crc kubenswrapper[4865]: I0216 22:56:50.867042 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tqmsq_518e6107-6873-4bd2-86a6-e422763483ec/kube-multus/1.log" Feb 16 22:56:50 crc kubenswrapper[4865]: I0216 22:56:50.867091 4865 generic.go:334] "Generic (PLEG): container finished" podID="518e6107-6873-4bd2-86a6-e422763483ec" containerID="02963b00310cc6f9ac823cd9173971a4405f25fc58cfcf66177c33764842acd2" exitCode=2 Feb 16 22:56:50 crc kubenswrapper[4865]: I0216 22:56:50.867173 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-tqmsq" event={"ID":"518e6107-6873-4bd2-86a6-e422763483ec","Type":"ContainerDied","Data":"02963b00310cc6f9ac823cd9173971a4405f25fc58cfcf66177c33764842acd2"} Feb 16 22:56:50 crc kubenswrapper[4865]: I0216 22:56:50.867236 4865 scope.go:117] "RemoveContainer" containerID="74e1013d02cb01d60662d6619c25c7a40de7bd7bde7f0239007c54dba55879cb" Feb 16 22:56:50 crc kubenswrapper[4865]: I0216 22:56:50.868073 4865 scope.go:117] "RemoveContainer" containerID="02963b00310cc6f9ac823cd9173971a4405f25fc58cfcf66177c33764842acd2" Feb 16 22:56:50 crc kubenswrapper[4865]: E0216 22:56:50.868486 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-tqmsq_openshift-multus(518e6107-6873-4bd2-86a6-e422763483ec)\"" pod="openshift-multus/multus-tqmsq" podUID="518e6107-6873-4bd2-86a6-e422763483ec" Feb 16 22:56:50 crc kubenswrapper[4865]: I0216 22:56:50.889532 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v9gjl_2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9/ovnkube-controller/3.log" Feb 16 22:56:50 crc kubenswrapper[4865]: I0216 22:56:50.891934 4865 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v9gjl_2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9/ovn-acl-logging/0.log" Feb 16 22:56:50 crc kubenswrapper[4865]: I0216 22:56:50.894688 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v9gjl_2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9/ovn-controller/0.log" Feb 16 22:56:50 crc kubenswrapper[4865]: I0216 22:56:50.897737 4865 generic.go:334] "Generic (PLEG): container finished" podID="2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" containerID="fdf0a2e5286d172d17780b0ba829a7b1365b3fcb130069d7497b207041e54d67" exitCode=0 Feb 16 22:56:50 crc kubenswrapper[4865]: I0216 22:56:50.897788 4865 generic.go:334] "Generic (PLEG): container finished" podID="2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" containerID="3fbb61043059110013736d3d693e6d602b199a4c62907436a4eb88aae192cd51" exitCode=143 Feb 16 22:56:50 crc kubenswrapper[4865]: I0216 22:56:50.897797 4865 generic.go:334] "Generic (PLEG): container finished" podID="2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" containerID="decf362b373d3e8cc82caf2afe0405a61ec72f794680345ee6bd4d7acee32cd0" exitCode=143 Feb 16 22:56:50 crc kubenswrapper[4865]: I0216 22:56:50.897832 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" event={"ID":"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9","Type":"ContainerDied","Data":"fdf0a2e5286d172d17780b0ba829a7b1365b3fcb130069d7497b207041e54d67"} Feb 16 22:56:50 crc kubenswrapper[4865]: I0216 22:56:50.897879 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" event={"ID":"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9","Type":"ContainerDied","Data":"3fbb61043059110013736d3d693e6d602b199a4c62907436a4eb88aae192cd51"} Feb 16 22:56:50 crc kubenswrapper[4865]: I0216 22:56:50.897891 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" 
event={"ID":"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9","Type":"ContainerDied","Data":"decf362b373d3e8cc82caf2afe0405a61ec72f794680345ee6bd4d7acee32cd0"} Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.078919 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v9gjl_2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9/ovnkube-controller/3.log" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.081547 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v9gjl_2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9/ovn-acl-logging/0.log" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.082047 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v9gjl_2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9/ovn-controller/0.log" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.082522 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.140719 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8675j"] Feb 16 22:56:51 crc kubenswrapper[4865]: E0216 22:56:51.141018 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" containerName="kube-rbac-proxy-ovn-metrics" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.141039 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" containerName="kube-rbac-proxy-ovn-metrics" Feb 16 22:56:51 crc kubenswrapper[4865]: E0216 22:56:51.141058 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" containerName="ovnkube-controller" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.141067 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" 
containerName="ovnkube-controller" Feb 16 22:56:51 crc kubenswrapper[4865]: E0216 22:56:51.141077 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" containerName="ovn-acl-logging" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.141087 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" containerName="ovn-acl-logging" Feb 16 22:56:51 crc kubenswrapper[4865]: E0216 22:56:51.141102 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" containerName="ovn-controller" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.141112 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" containerName="ovn-controller" Feb 16 22:56:51 crc kubenswrapper[4865]: E0216 22:56:51.141127 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" containerName="sbdb" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.141135 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" containerName="sbdb" Feb 16 22:56:51 crc kubenswrapper[4865]: E0216 22:56:51.141146 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" containerName="kube-rbac-proxy-node" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.141154 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" containerName="kube-rbac-proxy-node" Feb 16 22:56:51 crc kubenswrapper[4865]: E0216 22:56:51.141171 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" containerName="kubecfg-setup" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.141181 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" 
containerName="kubecfg-setup" Feb 16 22:56:51 crc kubenswrapper[4865]: E0216 22:56:51.141191 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" containerName="northd" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.141199 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" containerName="northd" Feb 16 22:56:51 crc kubenswrapper[4865]: E0216 22:56:51.141212 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" containerName="ovnkube-controller" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.141220 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" containerName="ovnkube-controller" Feb 16 22:56:51 crc kubenswrapper[4865]: E0216 22:56:51.141231 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" containerName="ovnkube-controller" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.141257 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" containerName="ovnkube-controller" Feb 16 22:56:51 crc kubenswrapper[4865]: E0216 22:56:51.141296 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" containerName="nbdb" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.141305 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" containerName="nbdb" Feb 16 22:56:51 crc kubenswrapper[4865]: E0216 22:56:51.141313 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" containerName="ovnkube-controller" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.141321 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" containerName="ovnkube-controller" Feb 16 
22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.141485 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" containerName="ovnkube-controller" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.141497 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" containerName="ovn-controller" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.141509 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" containerName="nbdb" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.141524 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" containerName="northd" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.141536 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" containerName="sbdb" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.141548 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" containerName="ovnkube-controller" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.141558 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" containerName="ovnkube-controller" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.141569 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" containerName="ovnkube-controller" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.141580 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" containerName="ovn-acl-logging" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.141591 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" 
containerName="kube-rbac-proxy-ovn-metrics" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.141601 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" containerName="kube-rbac-proxy-node" Feb 16 22:56:51 crc kubenswrapper[4865]: E0216 22:56:51.141721 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" containerName="ovnkube-controller" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.141732 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" containerName="ovnkube-controller" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.141861 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" containerName="ovnkube-controller" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.144112 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8675j" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.189350 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-pzb4b" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.199182 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-host-run-ovn-kubernetes\") pod \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\" (UID: \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\") " Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.199288 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-run-systemd\") pod \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\" (UID: \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\") " Feb 16 22:56:51 crc 
kubenswrapper[4865]: I0216 22:56:51.199313 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-host-kubelet\") pod \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\" (UID: \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\") " Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.199330 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\" (UID: \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\") " Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.199388 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-run-ovn\") pod \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\" (UID: \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\") " Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.199424 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-host-cni-bin\") pod \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\" (UID: \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\") " Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.199444 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-systemd-units\") pod \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\" (UID: \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\") " Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.199471 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-ovnkube-config\") pod \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\" (UID: \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\") " Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.199504 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-run-openvswitch\") pod \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\" (UID: \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\") " Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.199528 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-log-socket\") pod \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\" (UID: \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\") " Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.199554 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-env-overrides\") pod \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\" (UID: \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\") " Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.199582 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-host-slash\") pod \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\" (UID: \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\") " Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.199604 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-node-log\") pod \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\" (UID: \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\") " Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.199623 4865 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-host-cni-netd\") pod \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\" (UID: \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\") " Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.199649 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-var-lib-openvswitch\") pod \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\" (UID: \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\") " Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.199676 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-ovn-node-metrics-cert\") pod \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\" (UID: \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\") " Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.199715 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-etc-openvswitch\") pod \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\" (UID: \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\") " Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.199739 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxp7s\" (UniqueName: \"kubernetes.io/projected/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-kube-api-access-fxp7s\") pod \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\" (UID: \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\") " Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.199756 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-host-run-netns\") pod 
\"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\" (UID: \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\") " Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.199879 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" (UID: "2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.199958 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" (UID: "2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.199787 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-ovnkube-script-lib\") pod \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\" (UID: \"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9\") " Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.200418 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" (UID: "2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.200519 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e5994d2f-5ff8-4aea-a269-caed13a42cdf-host-cni-bin\") pod \"ovnkube-node-8675j\" (UID: \"e5994d2f-5ff8-4aea-a269-caed13a42cdf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8675j" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.200580 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e5994d2f-5ff8-4aea-a269-caed13a42cdf-etc-openvswitch\") pod \"ovnkube-node-8675j\" (UID: \"e5994d2f-5ff8-4aea-a269-caed13a42cdf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8675j" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.200621 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e5994d2f-5ff8-4aea-a269-caed13a42cdf-host-kubelet\") pod \"ovnkube-node-8675j\" (UID: \"e5994d2f-5ff8-4aea-a269-caed13a42cdf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8675j" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.200656 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e5994d2f-5ff8-4aea-a269-caed13a42cdf-host-slash\") pod \"ovnkube-node-8675j\" (UID: \"e5994d2f-5ff8-4aea-a269-caed13a42cdf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8675j" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.200677 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qvg9\" (UniqueName: \"kubernetes.io/projected/e5994d2f-5ff8-4aea-a269-caed13a42cdf-kube-api-access-7qvg9\") pod \"ovnkube-node-8675j\" (UID: 
\"e5994d2f-5ff8-4aea-a269-caed13a42cdf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8675j" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.200704 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e5994d2f-5ff8-4aea-a269-caed13a42cdf-host-run-netns\") pod \"ovnkube-node-8675j\" (UID: \"e5994d2f-5ff8-4aea-a269-caed13a42cdf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8675j" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.200735 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e5994d2f-5ff8-4aea-a269-caed13a42cdf-host-run-ovn-kubernetes\") pod \"ovnkube-node-8675j\" (UID: \"e5994d2f-5ff8-4aea-a269-caed13a42cdf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8675j" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.200759 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e5994d2f-5ff8-4aea-a269-caed13a42cdf-run-ovn\") pod \"ovnkube-node-8675j\" (UID: \"e5994d2f-5ff8-4aea-a269-caed13a42cdf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8675j" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.200794 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e5994d2f-5ff8-4aea-a269-caed13a42cdf-systemd-units\") pod \"ovnkube-node-8675j\" (UID: \"e5994d2f-5ff8-4aea-a269-caed13a42cdf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8675j" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.200837 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e5994d2f-5ff8-4aea-a269-caed13a42cdf-run-openvswitch\") pod 
\"ovnkube-node-8675j\" (UID: \"e5994d2f-5ff8-4aea-a269-caed13a42cdf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8675j" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.200859 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e5994d2f-5ff8-4aea-a269-caed13a42cdf-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8675j\" (UID: \"e5994d2f-5ff8-4aea-a269-caed13a42cdf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8675j" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.200886 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e5994d2f-5ff8-4aea-a269-caed13a42cdf-ovn-node-metrics-cert\") pod \"ovnkube-node-8675j\" (UID: \"e5994d2f-5ff8-4aea-a269-caed13a42cdf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8675j" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.200960 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e5994d2f-5ff8-4aea-a269-caed13a42cdf-ovnkube-config\") pod \"ovnkube-node-8675j\" (UID: \"e5994d2f-5ff8-4aea-a269-caed13a42cdf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8675j" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.201020 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e5994d2f-5ff8-4aea-a269-caed13a42cdf-host-cni-netd\") pod \"ovnkube-node-8675j\" (UID: \"e5994d2f-5ff8-4aea-a269-caed13a42cdf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8675j" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.201050 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/e5994d2f-5ff8-4aea-a269-caed13a42cdf-node-log\") pod \"ovnkube-node-8675j\" (UID: \"e5994d2f-5ff8-4aea-a269-caed13a42cdf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8675j" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.201070 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e5994d2f-5ff8-4aea-a269-caed13a42cdf-env-overrides\") pod \"ovnkube-node-8675j\" (UID: \"e5994d2f-5ff8-4aea-a269-caed13a42cdf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8675j" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.201094 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e5994d2f-5ff8-4aea-a269-caed13a42cdf-ovnkube-script-lib\") pod \"ovnkube-node-8675j\" (UID: \"e5994d2f-5ff8-4aea-a269-caed13a42cdf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8675j" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.201138 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e5994d2f-5ff8-4aea-a269-caed13a42cdf-run-systemd\") pod \"ovnkube-node-8675j\" (UID: \"e5994d2f-5ff8-4aea-a269-caed13a42cdf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8675j" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.201143 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" (UID: "2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.201178 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-log-socket" (OuterVolumeSpecName: "log-socket") pod "2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" (UID: "2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.201201 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e5994d2f-5ff8-4aea-a269-caed13a42cdf-var-lib-openvswitch\") pod \"ovnkube-node-8675j\" (UID: \"e5994d2f-5ff8-4aea-a269-caed13a42cdf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8675j" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.201222 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e5994d2f-5ff8-4aea-a269-caed13a42cdf-log-socket\") pod \"ovnkube-node-8675j\" (UID: \"e5994d2f-5ff8-4aea-a269-caed13a42cdf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8675j" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.201291 4865 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.201306 4865 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.201316 4865 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" 
(UniqueName: \"kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.201327 4865 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.201338 4865 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-log-socket\") on node \"crc\" DevicePath \"\"" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.201379 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" (UID: "2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.201405 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" (UID: "2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.201518 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" (UID: "2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9"). 
InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.201535 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" (UID: "2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.201542 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-host-slash" (OuterVolumeSpecName: "host-slash") pod "2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" (UID: "2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.201567 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-node-log" (OuterVolumeSpecName: "node-log") pod "2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" (UID: "2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.201587 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" (UID: "2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.201607 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" (UID: "2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.201670 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" (UID: "2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.201784 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" (UID: "2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.202024 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" (UID: "2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.213510 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-kube-api-access-fxp7s" (OuterVolumeSpecName: "kube-api-access-fxp7s") pod "2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" (UID: "2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9"). InnerVolumeSpecName "kube-api-access-fxp7s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.213947 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" (UID: "2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.231049 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" (UID: "2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.231084 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" (UID: "2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.301711 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e5994d2f-5ff8-4aea-a269-caed13a42cdf-ovnkube-config\") pod \"ovnkube-node-8675j\" (UID: \"e5994d2f-5ff8-4aea-a269-caed13a42cdf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8675j" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.301769 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e5994d2f-5ff8-4aea-a269-caed13a42cdf-host-cni-netd\") pod \"ovnkube-node-8675j\" (UID: \"e5994d2f-5ff8-4aea-a269-caed13a42cdf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8675j" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.301792 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e5994d2f-5ff8-4aea-a269-caed13a42cdf-node-log\") pod \"ovnkube-node-8675j\" (UID: \"e5994d2f-5ff8-4aea-a269-caed13a42cdf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8675j" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.301815 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e5994d2f-5ff8-4aea-a269-caed13a42cdf-env-overrides\") pod \"ovnkube-node-8675j\" (UID: \"e5994d2f-5ff8-4aea-a269-caed13a42cdf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8675j" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.301833 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e5994d2f-5ff8-4aea-a269-caed13a42cdf-ovnkube-script-lib\") pod \"ovnkube-node-8675j\" (UID: \"e5994d2f-5ff8-4aea-a269-caed13a42cdf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8675j" Feb 16 22:56:51 crc 
kubenswrapper[4865]: I0216 22:56:51.301859 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e5994d2f-5ff8-4aea-a269-caed13a42cdf-run-systemd\") pod \"ovnkube-node-8675j\" (UID: \"e5994d2f-5ff8-4aea-a269-caed13a42cdf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8675j" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.301883 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e5994d2f-5ff8-4aea-a269-caed13a42cdf-log-socket\") pod \"ovnkube-node-8675j\" (UID: \"e5994d2f-5ff8-4aea-a269-caed13a42cdf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8675j" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.301903 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e5994d2f-5ff8-4aea-a269-caed13a42cdf-var-lib-openvswitch\") pod \"ovnkube-node-8675j\" (UID: \"e5994d2f-5ff8-4aea-a269-caed13a42cdf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8675j" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.301920 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e5994d2f-5ff8-4aea-a269-caed13a42cdf-host-cni-bin\") pod \"ovnkube-node-8675j\" (UID: \"e5994d2f-5ff8-4aea-a269-caed13a42cdf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8675j" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.301920 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e5994d2f-5ff8-4aea-a269-caed13a42cdf-node-log\") pod \"ovnkube-node-8675j\" (UID: \"e5994d2f-5ff8-4aea-a269-caed13a42cdf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8675j" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.301955 4865 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e5994d2f-5ff8-4aea-a269-caed13a42cdf-etc-openvswitch\") pod \"ovnkube-node-8675j\" (UID: \"e5994d2f-5ff8-4aea-a269-caed13a42cdf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8675j" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.301975 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e5994d2f-5ff8-4aea-a269-caed13a42cdf-host-kubelet\") pod \"ovnkube-node-8675j\" (UID: \"e5994d2f-5ff8-4aea-a269-caed13a42cdf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8675j" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.301978 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e5994d2f-5ff8-4aea-a269-caed13a42cdf-var-lib-openvswitch\") pod \"ovnkube-node-8675j\" (UID: \"e5994d2f-5ff8-4aea-a269-caed13a42cdf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8675j" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.302002 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e5994d2f-5ff8-4aea-a269-caed13a42cdf-host-kubelet\") pod \"ovnkube-node-8675j\" (UID: \"e5994d2f-5ff8-4aea-a269-caed13a42cdf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8675j" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.301950 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e5994d2f-5ff8-4aea-a269-caed13a42cdf-host-cni-netd\") pod \"ovnkube-node-8675j\" (UID: \"e5994d2f-5ff8-4aea-a269-caed13a42cdf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8675j" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.302037 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/e5994d2f-5ff8-4aea-a269-caed13a42cdf-host-slash\") pod \"ovnkube-node-8675j\" (UID: \"e5994d2f-5ff8-4aea-a269-caed13a42cdf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8675j" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.302062 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qvg9\" (UniqueName: \"kubernetes.io/projected/e5994d2f-5ff8-4aea-a269-caed13a42cdf-kube-api-access-7qvg9\") pod \"ovnkube-node-8675j\" (UID: \"e5994d2f-5ff8-4aea-a269-caed13a42cdf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8675j" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.302052 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e5994d2f-5ff8-4aea-a269-caed13a42cdf-run-systemd\") pod \"ovnkube-node-8675j\" (UID: \"e5994d2f-5ff8-4aea-a269-caed13a42cdf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8675j" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.302091 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e5994d2f-5ff8-4aea-a269-caed13a42cdf-host-run-netns\") pod \"ovnkube-node-8675j\" (UID: \"e5994d2f-5ff8-4aea-a269-caed13a42cdf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8675j" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.302144 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e5994d2f-5ff8-4aea-a269-caed13a42cdf-host-run-netns\") pod \"ovnkube-node-8675j\" (UID: \"e5994d2f-5ff8-4aea-a269-caed13a42cdf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8675j" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.302163 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e5994d2f-5ff8-4aea-a269-caed13a42cdf-host-slash\") pod \"ovnkube-node-8675j\" 
(UID: \"e5994d2f-5ff8-4aea-a269-caed13a42cdf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8675j" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.302230 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e5994d2f-5ff8-4aea-a269-caed13a42cdf-host-run-ovn-kubernetes\") pod \"ovnkube-node-8675j\" (UID: \"e5994d2f-5ff8-4aea-a269-caed13a42cdf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8675j" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.302344 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e5994d2f-5ff8-4aea-a269-caed13a42cdf-host-cni-bin\") pod \"ovnkube-node-8675j\" (UID: \"e5994d2f-5ff8-4aea-a269-caed13a42cdf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8675j" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.302343 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e5994d2f-5ff8-4aea-a269-caed13a42cdf-etc-openvswitch\") pod \"ovnkube-node-8675j\" (UID: \"e5994d2f-5ff8-4aea-a269-caed13a42cdf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8675j" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.302376 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e5994d2f-5ff8-4aea-a269-caed13a42cdf-log-socket\") pod \"ovnkube-node-8675j\" (UID: \"e5994d2f-5ff8-4aea-a269-caed13a42cdf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8675j" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.302397 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e5994d2f-5ff8-4aea-a269-caed13a42cdf-host-run-ovn-kubernetes\") pod \"ovnkube-node-8675j\" (UID: \"e5994d2f-5ff8-4aea-a269-caed13a42cdf\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-8675j" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.302441 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e5994d2f-5ff8-4aea-a269-caed13a42cdf-run-ovn\") pod \"ovnkube-node-8675j\" (UID: \"e5994d2f-5ff8-4aea-a269-caed13a42cdf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8675j" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.302479 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e5994d2f-5ff8-4aea-a269-caed13a42cdf-run-ovn\") pod \"ovnkube-node-8675j\" (UID: \"e5994d2f-5ff8-4aea-a269-caed13a42cdf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8675j" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.302518 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e5994d2f-5ff8-4aea-a269-caed13a42cdf-systemd-units\") pod \"ovnkube-node-8675j\" (UID: \"e5994d2f-5ff8-4aea-a269-caed13a42cdf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8675j" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.302548 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e5994d2f-5ff8-4aea-a269-caed13a42cdf-run-openvswitch\") pod \"ovnkube-node-8675j\" (UID: \"e5994d2f-5ff8-4aea-a269-caed13a42cdf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8675j" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.302575 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e5994d2f-5ff8-4aea-a269-caed13a42cdf-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8675j\" (UID: \"e5994d2f-5ff8-4aea-a269-caed13a42cdf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8675j" Feb 16 
22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.302597 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e5994d2f-5ff8-4aea-a269-caed13a42cdf-ovn-node-metrics-cert\") pod \"ovnkube-node-8675j\" (UID: \"e5994d2f-5ff8-4aea-a269-caed13a42cdf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8675j" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.302620 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e5994d2f-5ff8-4aea-a269-caed13a42cdf-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8675j\" (UID: \"e5994d2f-5ff8-4aea-a269-caed13a42cdf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8675j" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.302628 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e5994d2f-5ff8-4aea-a269-caed13a42cdf-run-openvswitch\") pod \"ovnkube-node-8675j\" (UID: \"e5994d2f-5ff8-4aea-a269-caed13a42cdf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8675j" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.302598 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e5994d2f-5ff8-4aea-a269-caed13a42cdf-systemd-units\") pod \"ovnkube-node-8675j\" (UID: \"e5994d2f-5ff8-4aea-a269-caed13a42cdf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8675j" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.302667 4865 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.302683 4865 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.302698 4865 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.302709 4865 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.302723 4865 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.302736 4865 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.302754 4865 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.302765 4865 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-host-slash\") on node \"crc\" DevicePath \"\"" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.302776 4865 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-node-log\") on node \"crc\" 
DevicePath \"\"" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.302786 4865 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.302796 4865 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.302806 4865 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.302818 4865 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.302832 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxp7s\" (UniqueName: \"kubernetes.io/projected/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-kube-api-access-fxp7s\") on node \"crc\" DevicePath \"\"" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.302846 4865 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.303691 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e5994d2f-5ff8-4aea-a269-caed13a42cdf-env-overrides\") pod \"ovnkube-node-8675j\" (UID: \"e5994d2f-5ff8-4aea-a269-caed13a42cdf\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-8675j" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.303743 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e5994d2f-5ff8-4aea-a269-caed13a42cdf-ovnkube-config\") pod \"ovnkube-node-8675j\" (UID: \"e5994d2f-5ff8-4aea-a269-caed13a42cdf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8675j" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.303823 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e5994d2f-5ff8-4aea-a269-caed13a42cdf-ovnkube-script-lib\") pod \"ovnkube-node-8675j\" (UID: \"e5994d2f-5ff8-4aea-a269-caed13a42cdf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8675j" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.305674 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e5994d2f-5ff8-4aea-a269-caed13a42cdf-ovn-node-metrics-cert\") pod \"ovnkube-node-8675j\" (UID: \"e5994d2f-5ff8-4aea-a269-caed13a42cdf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8675j" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.319067 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qvg9\" (UniqueName: \"kubernetes.io/projected/e5994d2f-5ff8-4aea-a269-caed13a42cdf-kube-api-access-7qvg9\") pod \"ovnkube-node-8675j\" (UID: \"e5994d2f-5ff8-4aea-a269-caed13a42cdf\") " pod="openshift-ovn-kubernetes/ovnkube-node-8675j" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.458859 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8675j" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.908638 4865 generic.go:334] "Generic (PLEG): container finished" podID="e5994d2f-5ff8-4aea-a269-caed13a42cdf" containerID="d0dfeb886302220f8460019381f44a35d5bbc584f71b3766d801a26c393a0350" exitCode=0 Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.908776 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8675j" event={"ID":"e5994d2f-5ff8-4aea-a269-caed13a42cdf","Type":"ContainerDied","Data":"d0dfeb886302220f8460019381f44a35d5bbc584f71b3766d801a26c393a0350"} Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.908879 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8675j" event={"ID":"e5994d2f-5ff8-4aea-a269-caed13a42cdf","Type":"ContainerStarted","Data":"061a08e20694151074c315c6562fffcc49effb5df77317e9b3feafbf1e5c3137"} Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.913071 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tqmsq_518e6107-6873-4bd2-86a6-e422763483ec/kube-multus/2.log" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.920606 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v9gjl_2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9/ovnkube-controller/3.log" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.924726 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v9gjl_2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9/ovn-acl-logging/0.log" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.925988 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-v9gjl_2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9/ovn-controller/0.log" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.926723 4865 generic.go:334] "Generic (PLEG): container finished" 
podID="2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" containerID="fa26c8d94eede6f6ed5e13a46ffa4f979642a1029ecd0b9fa73572eb3d73152d" exitCode=0 Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.926782 4865 generic.go:334] "Generic (PLEG): container finished" podID="2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" containerID="dee85f35d78a0b30018c179a53dd39cbb1b0edfa9897290cd13fdc9680461239" exitCode=0 Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.926805 4865 generic.go:334] "Generic (PLEG): container finished" podID="2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" containerID="4a1d5f61857de97cd302040f99ba0cf8f6aa30fa344ee9bb5d486222bb59280f" exitCode=0 Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.926831 4865 generic.go:334] "Generic (PLEG): container finished" podID="2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" containerID="48168de44ee347280252a85da7abba068922ce7558bd01c8e1d11af13f305dd0" exitCode=0 Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.926852 4865 generic.go:334] "Generic (PLEG): container finished" podID="2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" containerID="f763f86f21e0ae26d6da8694383bf65bf9d2ee2cc3f6331559455f18c1a5ea12" exitCode=0 Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.926897 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" event={"ID":"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9","Type":"ContainerDied","Data":"fa26c8d94eede6f6ed5e13a46ffa4f979642a1029ecd0b9fa73572eb3d73152d"} Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.926955 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" event={"ID":"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9","Type":"ContainerDied","Data":"dee85f35d78a0b30018c179a53dd39cbb1b0edfa9897290cd13fdc9680461239"} Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.926985 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" 
event={"ID":"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9","Type":"ContainerDied","Data":"4a1d5f61857de97cd302040f99ba0cf8f6aa30fa344ee9bb5d486222bb59280f"} Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.927009 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" event={"ID":"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9","Type":"ContainerDied","Data":"48168de44ee347280252a85da7abba068922ce7558bd01c8e1d11af13f305dd0"} Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.927034 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" event={"ID":"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9","Type":"ContainerDied","Data":"f763f86f21e0ae26d6da8694383bf65bf9d2ee2cc3f6331559455f18c1a5ea12"} Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.927059 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" event={"ID":"2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9","Type":"ContainerDied","Data":"6662afa376d68a2cb0a905370750c9271f44dcc2b59b807b8b697ab3a0e370c1"} Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.927085 4865 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"be9325937ac87ec492489ef34007a1e234ff7037a0f05d40bd102a85a86cc3c4"} Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.927111 4865 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dee85f35d78a0b30018c179a53dd39cbb1b0edfa9897290cd13fdc9680461239"} Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.927127 4865 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4a1d5f61857de97cd302040f99ba0cf8f6aa30fa344ee9bb5d486222bb59280f"} Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.927141 4865 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"48168de44ee347280252a85da7abba068922ce7558bd01c8e1d11af13f305dd0"} Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.927159 4865 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fdf0a2e5286d172d17780b0ba829a7b1365b3fcb130069d7497b207041e54d67"} Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.927173 4865 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f763f86f21e0ae26d6da8694383bf65bf9d2ee2cc3f6331559455f18c1a5ea12"} Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.927187 4865 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3fbb61043059110013736d3d693e6d602b199a4c62907436a4eb88aae192cd51"} Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.927202 4865 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"decf362b373d3e8cc82caf2afe0405a61ec72f794680345ee6bd4d7acee32cd0"} Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.927215 4865 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8fffded71eee2859beb8f01894e183602492f5c41373ef0e23ee150159d01ae4"} Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.927245 4865 scope.go:117] "RemoveContainer" containerID="fa26c8d94eede6f6ed5e13a46ffa4f979642a1029ecd0b9fa73572eb3d73152d" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.927642 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-v9gjl" Feb 16 22:56:51 crc kubenswrapper[4865]: I0216 22:56:51.968170 4865 scope.go:117] "RemoveContainer" containerID="be9325937ac87ec492489ef34007a1e234ff7037a0f05d40bd102a85a86cc3c4" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.001255 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-v9gjl"] Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.001940 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-v9gjl"] Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.049556 4865 scope.go:117] "RemoveContainer" containerID="dee85f35d78a0b30018c179a53dd39cbb1b0edfa9897290cd13fdc9680461239" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.102528 4865 scope.go:117] "RemoveContainer" containerID="4a1d5f61857de97cd302040f99ba0cf8f6aa30fa344ee9bb5d486222bb59280f" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.129946 4865 scope.go:117] "RemoveContainer" containerID="48168de44ee347280252a85da7abba068922ce7558bd01c8e1d11af13f305dd0" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.145293 4865 scope.go:117] "RemoveContainer" containerID="fdf0a2e5286d172d17780b0ba829a7b1365b3fcb130069d7497b207041e54d67" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.163912 4865 scope.go:117] "RemoveContainer" containerID="f763f86f21e0ae26d6da8694383bf65bf9d2ee2cc3f6331559455f18c1a5ea12" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.206839 4865 scope.go:117] "RemoveContainer" containerID="3fbb61043059110013736d3d693e6d602b199a4c62907436a4eb88aae192cd51" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.230223 4865 scope.go:117] "RemoveContainer" containerID="decf362b373d3e8cc82caf2afe0405a61ec72f794680345ee6bd4d7acee32cd0" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.250560 4865 scope.go:117] "RemoveContainer" 
containerID="8fffded71eee2859beb8f01894e183602492f5c41373ef0e23ee150159d01ae4" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.283137 4865 scope.go:117] "RemoveContainer" containerID="fa26c8d94eede6f6ed5e13a46ffa4f979642a1029ecd0b9fa73572eb3d73152d" Feb 16 22:56:52 crc kubenswrapper[4865]: E0216 22:56:52.283806 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa26c8d94eede6f6ed5e13a46ffa4f979642a1029ecd0b9fa73572eb3d73152d\": container with ID starting with fa26c8d94eede6f6ed5e13a46ffa4f979642a1029ecd0b9fa73572eb3d73152d not found: ID does not exist" containerID="fa26c8d94eede6f6ed5e13a46ffa4f979642a1029ecd0b9fa73572eb3d73152d" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.283869 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa26c8d94eede6f6ed5e13a46ffa4f979642a1029ecd0b9fa73572eb3d73152d"} err="failed to get container status \"fa26c8d94eede6f6ed5e13a46ffa4f979642a1029ecd0b9fa73572eb3d73152d\": rpc error: code = NotFound desc = could not find container \"fa26c8d94eede6f6ed5e13a46ffa4f979642a1029ecd0b9fa73572eb3d73152d\": container with ID starting with fa26c8d94eede6f6ed5e13a46ffa4f979642a1029ecd0b9fa73572eb3d73152d not found: ID does not exist" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.283900 4865 scope.go:117] "RemoveContainer" containerID="be9325937ac87ec492489ef34007a1e234ff7037a0f05d40bd102a85a86cc3c4" Feb 16 22:56:52 crc kubenswrapper[4865]: E0216 22:56:52.286904 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be9325937ac87ec492489ef34007a1e234ff7037a0f05d40bd102a85a86cc3c4\": container with ID starting with be9325937ac87ec492489ef34007a1e234ff7037a0f05d40bd102a85a86cc3c4 not found: ID does not exist" containerID="be9325937ac87ec492489ef34007a1e234ff7037a0f05d40bd102a85a86cc3c4" Feb 16 22:56:52 crc 
kubenswrapper[4865]: I0216 22:56:52.286946 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be9325937ac87ec492489ef34007a1e234ff7037a0f05d40bd102a85a86cc3c4"} err="failed to get container status \"be9325937ac87ec492489ef34007a1e234ff7037a0f05d40bd102a85a86cc3c4\": rpc error: code = NotFound desc = could not find container \"be9325937ac87ec492489ef34007a1e234ff7037a0f05d40bd102a85a86cc3c4\": container with ID starting with be9325937ac87ec492489ef34007a1e234ff7037a0f05d40bd102a85a86cc3c4 not found: ID does not exist" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.286970 4865 scope.go:117] "RemoveContainer" containerID="dee85f35d78a0b30018c179a53dd39cbb1b0edfa9897290cd13fdc9680461239" Feb 16 22:56:52 crc kubenswrapper[4865]: E0216 22:56:52.287487 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dee85f35d78a0b30018c179a53dd39cbb1b0edfa9897290cd13fdc9680461239\": container with ID starting with dee85f35d78a0b30018c179a53dd39cbb1b0edfa9897290cd13fdc9680461239 not found: ID does not exist" containerID="dee85f35d78a0b30018c179a53dd39cbb1b0edfa9897290cd13fdc9680461239" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.287536 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dee85f35d78a0b30018c179a53dd39cbb1b0edfa9897290cd13fdc9680461239"} err="failed to get container status \"dee85f35d78a0b30018c179a53dd39cbb1b0edfa9897290cd13fdc9680461239\": rpc error: code = NotFound desc = could not find container \"dee85f35d78a0b30018c179a53dd39cbb1b0edfa9897290cd13fdc9680461239\": container with ID starting with dee85f35d78a0b30018c179a53dd39cbb1b0edfa9897290cd13fdc9680461239 not found: ID does not exist" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.287571 4865 scope.go:117] "RemoveContainer" containerID="4a1d5f61857de97cd302040f99ba0cf8f6aa30fa344ee9bb5d486222bb59280f" Feb 16 
22:56:52 crc kubenswrapper[4865]: E0216 22:56:52.287865 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a1d5f61857de97cd302040f99ba0cf8f6aa30fa344ee9bb5d486222bb59280f\": container with ID starting with 4a1d5f61857de97cd302040f99ba0cf8f6aa30fa344ee9bb5d486222bb59280f not found: ID does not exist" containerID="4a1d5f61857de97cd302040f99ba0cf8f6aa30fa344ee9bb5d486222bb59280f" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.287902 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a1d5f61857de97cd302040f99ba0cf8f6aa30fa344ee9bb5d486222bb59280f"} err="failed to get container status \"4a1d5f61857de97cd302040f99ba0cf8f6aa30fa344ee9bb5d486222bb59280f\": rpc error: code = NotFound desc = could not find container \"4a1d5f61857de97cd302040f99ba0cf8f6aa30fa344ee9bb5d486222bb59280f\": container with ID starting with 4a1d5f61857de97cd302040f99ba0cf8f6aa30fa344ee9bb5d486222bb59280f not found: ID does not exist" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.287930 4865 scope.go:117] "RemoveContainer" containerID="48168de44ee347280252a85da7abba068922ce7558bd01c8e1d11af13f305dd0" Feb 16 22:56:52 crc kubenswrapper[4865]: E0216 22:56:52.288182 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48168de44ee347280252a85da7abba068922ce7558bd01c8e1d11af13f305dd0\": container with ID starting with 48168de44ee347280252a85da7abba068922ce7558bd01c8e1d11af13f305dd0 not found: ID does not exist" containerID="48168de44ee347280252a85da7abba068922ce7558bd01c8e1d11af13f305dd0" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.288212 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48168de44ee347280252a85da7abba068922ce7558bd01c8e1d11af13f305dd0"} err="failed to get container status 
\"48168de44ee347280252a85da7abba068922ce7558bd01c8e1d11af13f305dd0\": rpc error: code = NotFound desc = could not find container \"48168de44ee347280252a85da7abba068922ce7558bd01c8e1d11af13f305dd0\": container with ID starting with 48168de44ee347280252a85da7abba068922ce7558bd01c8e1d11af13f305dd0 not found: ID does not exist" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.288233 4865 scope.go:117] "RemoveContainer" containerID="fdf0a2e5286d172d17780b0ba829a7b1365b3fcb130069d7497b207041e54d67" Feb 16 22:56:52 crc kubenswrapper[4865]: E0216 22:56:52.288623 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fdf0a2e5286d172d17780b0ba829a7b1365b3fcb130069d7497b207041e54d67\": container with ID starting with fdf0a2e5286d172d17780b0ba829a7b1365b3fcb130069d7497b207041e54d67 not found: ID does not exist" containerID="fdf0a2e5286d172d17780b0ba829a7b1365b3fcb130069d7497b207041e54d67" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.288659 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdf0a2e5286d172d17780b0ba829a7b1365b3fcb130069d7497b207041e54d67"} err="failed to get container status \"fdf0a2e5286d172d17780b0ba829a7b1365b3fcb130069d7497b207041e54d67\": rpc error: code = NotFound desc = could not find container \"fdf0a2e5286d172d17780b0ba829a7b1365b3fcb130069d7497b207041e54d67\": container with ID starting with fdf0a2e5286d172d17780b0ba829a7b1365b3fcb130069d7497b207041e54d67 not found: ID does not exist" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.288683 4865 scope.go:117] "RemoveContainer" containerID="f763f86f21e0ae26d6da8694383bf65bf9d2ee2cc3f6331559455f18c1a5ea12" Feb 16 22:56:52 crc kubenswrapper[4865]: E0216 22:56:52.289012 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"f763f86f21e0ae26d6da8694383bf65bf9d2ee2cc3f6331559455f18c1a5ea12\": container with ID starting with f763f86f21e0ae26d6da8694383bf65bf9d2ee2cc3f6331559455f18c1a5ea12 not found: ID does not exist" containerID="f763f86f21e0ae26d6da8694383bf65bf9d2ee2cc3f6331559455f18c1a5ea12" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.289039 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f763f86f21e0ae26d6da8694383bf65bf9d2ee2cc3f6331559455f18c1a5ea12"} err="failed to get container status \"f763f86f21e0ae26d6da8694383bf65bf9d2ee2cc3f6331559455f18c1a5ea12\": rpc error: code = NotFound desc = could not find container \"f763f86f21e0ae26d6da8694383bf65bf9d2ee2cc3f6331559455f18c1a5ea12\": container with ID starting with f763f86f21e0ae26d6da8694383bf65bf9d2ee2cc3f6331559455f18c1a5ea12 not found: ID does not exist" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.289057 4865 scope.go:117] "RemoveContainer" containerID="3fbb61043059110013736d3d693e6d602b199a4c62907436a4eb88aae192cd51" Feb 16 22:56:52 crc kubenswrapper[4865]: E0216 22:56:52.289343 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fbb61043059110013736d3d693e6d602b199a4c62907436a4eb88aae192cd51\": container with ID starting with 3fbb61043059110013736d3d693e6d602b199a4c62907436a4eb88aae192cd51 not found: ID does not exist" containerID="3fbb61043059110013736d3d693e6d602b199a4c62907436a4eb88aae192cd51" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.289370 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fbb61043059110013736d3d693e6d602b199a4c62907436a4eb88aae192cd51"} err="failed to get container status \"3fbb61043059110013736d3d693e6d602b199a4c62907436a4eb88aae192cd51\": rpc error: code = NotFound desc = could not find container \"3fbb61043059110013736d3d693e6d602b199a4c62907436a4eb88aae192cd51\": container with ID 
starting with 3fbb61043059110013736d3d693e6d602b199a4c62907436a4eb88aae192cd51 not found: ID does not exist" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.289389 4865 scope.go:117] "RemoveContainer" containerID="decf362b373d3e8cc82caf2afe0405a61ec72f794680345ee6bd4d7acee32cd0" Feb 16 22:56:52 crc kubenswrapper[4865]: E0216 22:56:52.289753 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"decf362b373d3e8cc82caf2afe0405a61ec72f794680345ee6bd4d7acee32cd0\": container with ID starting with decf362b373d3e8cc82caf2afe0405a61ec72f794680345ee6bd4d7acee32cd0 not found: ID does not exist" containerID="decf362b373d3e8cc82caf2afe0405a61ec72f794680345ee6bd4d7acee32cd0" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.289781 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"decf362b373d3e8cc82caf2afe0405a61ec72f794680345ee6bd4d7acee32cd0"} err="failed to get container status \"decf362b373d3e8cc82caf2afe0405a61ec72f794680345ee6bd4d7acee32cd0\": rpc error: code = NotFound desc = could not find container \"decf362b373d3e8cc82caf2afe0405a61ec72f794680345ee6bd4d7acee32cd0\": container with ID starting with decf362b373d3e8cc82caf2afe0405a61ec72f794680345ee6bd4d7acee32cd0 not found: ID does not exist" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.289800 4865 scope.go:117] "RemoveContainer" containerID="8fffded71eee2859beb8f01894e183602492f5c41373ef0e23ee150159d01ae4" Feb 16 22:56:52 crc kubenswrapper[4865]: E0216 22:56:52.290503 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8fffded71eee2859beb8f01894e183602492f5c41373ef0e23ee150159d01ae4\": container with ID starting with 8fffded71eee2859beb8f01894e183602492f5c41373ef0e23ee150159d01ae4 not found: ID does not exist" containerID="8fffded71eee2859beb8f01894e183602492f5c41373ef0e23ee150159d01ae4" Feb 16 
22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.290531 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fffded71eee2859beb8f01894e183602492f5c41373ef0e23ee150159d01ae4"} err="failed to get container status \"8fffded71eee2859beb8f01894e183602492f5c41373ef0e23ee150159d01ae4\": rpc error: code = NotFound desc = could not find container \"8fffded71eee2859beb8f01894e183602492f5c41373ef0e23ee150159d01ae4\": container with ID starting with 8fffded71eee2859beb8f01894e183602492f5c41373ef0e23ee150159d01ae4 not found: ID does not exist" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.290549 4865 scope.go:117] "RemoveContainer" containerID="fa26c8d94eede6f6ed5e13a46ffa4f979642a1029ecd0b9fa73572eb3d73152d" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.290807 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa26c8d94eede6f6ed5e13a46ffa4f979642a1029ecd0b9fa73572eb3d73152d"} err="failed to get container status \"fa26c8d94eede6f6ed5e13a46ffa4f979642a1029ecd0b9fa73572eb3d73152d\": rpc error: code = NotFound desc = could not find container \"fa26c8d94eede6f6ed5e13a46ffa4f979642a1029ecd0b9fa73572eb3d73152d\": container with ID starting with fa26c8d94eede6f6ed5e13a46ffa4f979642a1029ecd0b9fa73572eb3d73152d not found: ID does not exist" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.290827 4865 scope.go:117] "RemoveContainer" containerID="be9325937ac87ec492489ef34007a1e234ff7037a0f05d40bd102a85a86cc3c4" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.291174 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be9325937ac87ec492489ef34007a1e234ff7037a0f05d40bd102a85a86cc3c4"} err="failed to get container status \"be9325937ac87ec492489ef34007a1e234ff7037a0f05d40bd102a85a86cc3c4\": rpc error: code = NotFound desc = could not find container 
\"be9325937ac87ec492489ef34007a1e234ff7037a0f05d40bd102a85a86cc3c4\": container with ID starting with be9325937ac87ec492489ef34007a1e234ff7037a0f05d40bd102a85a86cc3c4 not found: ID does not exist" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.291204 4865 scope.go:117] "RemoveContainer" containerID="dee85f35d78a0b30018c179a53dd39cbb1b0edfa9897290cd13fdc9680461239" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.291597 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dee85f35d78a0b30018c179a53dd39cbb1b0edfa9897290cd13fdc9680461239"} err="failed to get container status \"dee85f35d78a0b30018c179a53dd39cbb1b0edfa9897290cd13fdc9680461239\": rpc error: code = NotFound desc = could not find container \"dee85f35d78a0b30018c179a53dd39cbb1b0edfa9897290cd13fdc9680461239\": container with ID starting with dee85f35d78a0b30018c179a53dd39cbb1b0edfa9897290cd13fdc9680461239 not found: ID does not exist" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.291626 4865 scope.go:117] "RemoveContainer" containerID="4a1d5f61857de97cd302040f99ba0cf8f6aa30fa344ee9bb5d486222bb59280f" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.291894 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a1d5f61857de97cd302040f99ba0cf8f6aa30fa344ee9bb5d486222bb59280f"} err="failed to get container status \"4a1d5f61857de97cd302040f99ba0cf8f6aa30fa344ee9bb5d486222bb59280f\": rpc error: code = NotFound desc = could not find container \"4a1d5f61857de97cd302040f99ba0cf8f6aa30fa344ee9bb5d486222bb59280f\": container with ID starting with 4a1d5f61857de97cd302040f99ba0cf8f6aa30fa344ee9bb5d486222bb59280f not found: ID does not exist" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.291919 4865 scope.go:117] "RemoveContainer" containerID="48168de44ee347280252a85da7abba068922ce7558bd01c8e1d11af13f305dd0" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.292290 4865 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48168de44ee347280252a85da7abba068922ce7558bd01c8e1d11af13f305dd0"} err="failed to get container status \"48168de44ee347280252a85da7abba068922ce7558bd01c8e1d11af13f305dd0\": rpc error: code = NotFound desc = could not find container \"48168de44ee347280252a85da7abba068922ce7558bd01c8e1d11af13f305dd0\": container with ID starting with 48168de44ee347280252a85da7abba068922ce7558bd01c8e1d11af13f305dd0 not found: ID does not exist" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.292318 4865 scope.go:117] "RemoveContainer" containerID="fdf0a2e5286d172d17780b0ba829a7b1365b3fcb130069d7497b207041e54d67" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.292634 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdf0a2e5286d172d17780b0ba829a7b1365b3fcb130069d7497b207041e54d67"} err="failed to get container status \"fdf0a2e5286d172d17780b0ba829a7b1365b3fcb130069d7497b207041e54d67\": rpc error: code = NotFound desc = could not find container \"fdf0a2e5286d172d17780b0ba829a7b1365b3fcb130069d7497b207041e54d67\": container with ID starting with fdf0a2e5286d172d17780b0ba829a7b1365b3fcb130069d7497b207041e54d67 not found: ID does not exist" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.292652 4865 scope.go:117] "RemoveContainer" containerID="f763f86f21e0ae26d6da8694383bf65bf9d2ee2cc3f6331559455f18c1a5ea12" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.292886 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f763f86f21e0ae26d6da8694383bf65bf9d2ee2cc3f6331559455f18c1a5ea12"} err="failed to get container status \"f763f86f21e0ae26d6da8694383bf65bf9d2ee2cc3f6331559455f18c1a5ea12\": rpc error: code = NotFound desc = could not find container \"f763f86f21e0ae26d6da8694383bf65bf9d2ee2cc3f6331559455f18c1a5ea12\": container with ID starting with 
f763f86f21e0ae26d6da8694383bf65bf9d2ee2cc3f6331559455f18c1a5ea12 not found: ID does not exist" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.292913 4865 scope.go:117] "RemoveContainer" containerID="3fbb61043059110013736d3d693e6d602b199a4c62907436a4eb88aae192cd51" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.293299 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fbb61043059110013736d3d693e6d602b199a4c62907436a4eb88aae192cd51"} err="failed to get container status \"3fbb61043059110013736d3d693e6d602b199a4c62907436a4eb88aae192cd51\": rpc error: code = NotFound desc = could not find container \"3fbb61043059110013736d3d693e6d602b199a4c62907436a4eb88aae192cd51\": container with ID starting with 3fbb61043059110013736d3d693e6d602b199a4c62907436a4eb88aae192cd51 not found: ID does not exist" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.293326 4865 scope.go:117] "RemoveContainer" containerID="decf362b373d3e8cc82caf2afe0405a61ec72f794680345ee6bd4d7acee32cd0" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.293595 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"decf362b373d3e8cc82caf2afe0405a61ec72f794680345ee6bd4d7acee32cd0"} err="failed to get container status \"decf362b373d3e8cc82caf2afe0405a61ec72f794680345ee6bd4d7acee32cd0\": rpc error: code = NotFound desc = could not find container \"decf362b373d3e8cc82caf2afe0405a61ec72f794680345ee6bd4d7acee32cd0\": container with ID starting with decf362b373d3e8cc82caf2afe0405a61ec72f794680345ee6bd4d7acee32cd0 not found: ID does not exist" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.293619 4865 scope.go:117] "RemoveContainer" containerID="8fffded71eee2859beb8f01894e183602492f5c41373ef0e23ee150159d01ae4" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.293873 4865 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"8fffded71eee2859beb8f01894e183602492f5c41373ef0e23ee150159d01ae4"} err="failed to get container status \"8fffded71eee2859beb8f01894e183602492f5c41373ef0e23ee150159d01ae4\": rpc error: code = NotFound desc = could not find container \"8fffded71eee2859beb8f01894e183602492f5c41373ef0e23ee150159d01ae4\": container with ID starting with 8fffded71eee2859beb8f01894e183602492f5c41373ef0e23ee150159d01ae4 not found: ID does not exist" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.293895 4865 scope.go:117] "RemoveContainer" containerID="fa26c8d94eede6f6ed5e13a46ffa4f979642a1029ecd0b9fa73572eb3d73152d" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.294132 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa26c8d94eede6f6ed5e13a46ffa4f979642a1029ecd0b9fa73572eb3d73152d"} err="failed to get container status \"fa26c8d94eede6f6ed5e13a46ffa4f979642a1029ecd0b9fa73572eb3d73152d\": rpc error: code = NotFound desc = could not find container \"fa26c8d94eede6f6ed5e13a46ffa4f979642a1029ecd0b9fa73572eb3d73152d\": container with ID starting with fa26c8d94eede6f6ed5e13a46ffa4f979642a1029ecd0b9fa73572eb3d73152d not found: ID does not exist" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.294150 4865 scope.go:117] "RemoveContainer" containerID="be9325937ac87ec492489ef34007a1e234ff7037a0f05d40bd102a85a86cc3c4" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.294452 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be9325937ac87ec492489ef34007a1e234ff7037a0f05d40bd102a85a86cc3c4"} err="failed to get container status \"be9325937ac87ec492489ef34007a1e234ff7037a0f05d40bd102a85a86cc3c4\": rpc error: code = NotFound desc = could not find container \"be9325937ac87ec492489ef34007a1e234ff7037a0f05d40bd102a85a86cc3c4\": container with ID starting with be9325937ac87ec492489ef34007a1e234ff7037a0f05d40bd102a85a86cc3c4 not found: ID does not 
exist" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.294483 4865 scope.go:117] "RemoveContainer" containerID="dee85f35d78a0b30018c179a53dd39cbb1b0edfa9897290cd13fdc9680461239" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.294720 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dee85f35d78a0b30018c179a53dd39cbb1b0edfa9897290cd13fdc9680461239"} err="failed to get container status \"dee85f35d78a0b30018c179a53dd39cbb1b0edfa9897290cd13fdc9680461239\": rpc error: code = NotFound desc = could not find container \"dee85f35d78a0b30018c179a53dd39cbb1b0edfa9897290cd13fdc9680461239\": container with ID starting with dee85f35d78a0b30018c179a53dd39cbb1b0edfa9897290cd13fdc9680461239 not found: ID does not exist" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.294741 4865 scope.go:117] "RemoveContainer" containerID="4a1d5f61857de97cd302040f99ba0cf8f6aa30fa344ee9bb5d486222bb59280f" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.294994 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a1d5f61857de97cd302040f99ba0cf8f6aa30fa344ee9bb5d486222bb59280f"} err="failed to get container status \"4a1d5f61857de97cd302040f99ba0cf8f6aa30fa344ee9bb5d486222bb59280f\": rpc error: code = NotFound desc = could not find container \"4a1d5f61857de97cd302040f99ba0cf8f6aa30fa344ee9bb5d486222bb59280f\": container with ID starting with 4a1d5f61857de97cd302040f99ba0cf8f6aa30fa344ee9bb5d486222bb59280f not found: ID does not exist" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.295019 4865 scope.go:117] "RemoveContainer" containerID="48168de44ee347280252a85da7abba068922ce7558bd01c8e1d11af13f305dd0" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.295235 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48168de44ee347280252a85da7abba068922ce7558bd01c8e1d11af13f305dd0"} err="failed to get container status 
\"48168de44ee347280252a85da7abba068922ce7558bd01c8e1d11af13f305dd0\": rpc error: code = NotFound desc = could not find container \"48168de44ee347280252a85da7abba068922ce7558bd01c8e1d11af13f305dd0\": container with ID starting with 48168de44ee347280252a85da7abba068922ce7558bd01c8e1d11af13f305dd0 not found: ID does not exist" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.295257 4865 scope.go:117] "RemoveContainer" containerID="fdf0a2e5286d172d17780b0ba829a7b1365b3fcb130069d7497b207041e54d67" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.295641 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdf0a2e5286d172d17780b0ba829a7b1365b3fcb130069d7497b207041e54d67"} err="failed to get container status \"fdf0a2e5286d172d17780b0ba829a7b1365b3fcb130069d7497b207041e54d67\": rpc error: code = NotFound desc = could not find container \"fdf0a2e5286d172d17780b0ba829a7b1365b3fcb130069d7497b207041e54d67\": container with ID starting with fdf0a2e5286d172d17780b0ba829a7b1365b3fcb130069d7497b207041e54d67 not found: ID does not exist" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.295663 4865 scope.go:117] "RemoveContainer" containerID="f763f86f21e0ae26d6da8694383bf65bf9d2ee2cc3f6331559455f18c1a5ea12" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.295961 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f763f86f21e0ae26d6da8694383bf65bf9d2ee2cc3f6331559455f18c1a5ea12"} err="failed to get container status \"f763f86f21e0ae26d6da8694383bf65bf9d2ee2cc3f6331559455f18c1a5ea12\": rpc error: code = NotFound desc = could not find container \"f763f86f21e0ae26d6da8694383bf65bf9d2ee2cc3f6331559455f18c1a5ea12\": container with ID starting with f763f86f21e0ae26d6da8694383bf65bf9d2ee2cc3f6331559455f18c1a5ea12 not found: ID does not exist" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.295980 4865 scope.go:117] "RemoveContainer" 
containerID="3fbb61043059110013736d3d693e6d602b199a4c62907436a4eb88aae192cd51" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.296254 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fbb61043059110013736d3d693e6d602b199a4c62907436a4eb88aae192cd51"} err="failed to get container status \"3fbb61043059110013736d3d693e6d602b199a4c62907436a4eb88aae192cd51\": rpc error: code = NotFound desc = could not find container \"3fbb61043059110013736d3d693e6d602b199a4c62907436a4eb88aae192cd51\": container with ID starting with 3fbb61043059110013736d3d693e6d602b199a4c62907436a4eb88aae192cd51 not found: ID does not exist" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.296301 4865 scope.go:117] "RemoveContainer" containerID="decf362b373d3e8cc82caf2afe0405a61ec72f794680345ee6bd4d7acee32cd0" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.296688 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"decf362b373d3e8cc82caf2afe0405a61ec72f794680345ee6bd4d7acee32cd0"} err="failed to get container status \"decf362b373d3e8cc82caf2afe0405a61ec72f794680345ee6bd4d7acee32cd0\": rpc error: code = NotFound desc = could not find container \"decf362b373d3e8cc82caf2afe0405a61ec72f794680345ee6bd4d7acee32cd0\": container with ID starting with decf362b373d3e8cc82caf2afe0405a61ec72f794680345ee6bd4d7acee32cd0 not found: ID does not exist" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.296712 4865 scope.go:117] "RemoveContainer" containerID="8fffded71eee2859beb8f01894e183602492f5c41373ef0e23ee150159d01ae4" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.296971 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fffded71eee2859beb8f01894e183602492f5c41373ef0e23ee150159d01ae4"} err="failed to get container status \"8fffded71eee2859beb8f01894e183602492f5c41373ef0e23ee150159d01ae4\": rpc error: code = NotFound desc = could 
not find container \"8fffded71eee2859beb8f01894e183602492f5c41373ef0e23ee150159d01ae4\": container with ID starting with 8fffded71eee2859beb8f01894e183602492f5c41373ef0e23ee150159d01ae4 not found: ID does not exist" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.296997 4865 scope.go:117] "RemoveContainer" containerID="fa26c8d94eede6f6ed5e13a46ffa4f979642a1029ecd0b9fa73572eb3d73152d" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.297243 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa26c8d94eede6f6ed5e13a46ffa4f979642a1029ecd0b9fa73572eb3d73152d"} err="failed to get container status \"fa26c8d94eede6f6ed5e13a46ffa4f979642a1029ecd0b9fa73572eb3d73152d\": rpc error: code = NotFound desc = could not find container \"fa26c8d94eede6f6ed5e13a46ffa4f979642a1029ecd0b9fa73572eb3d73152d\": container with ID starting with fa26c8d94eede6f6ed5e13a46ffa4f979642a1029ecd0b9fa73572eb3d73152d not found: ID does not exist" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.297267 4865 scope.go:117] "RemoveContainer" containerID="be9325937ac87ec492489ef34007a1e234ff7037a0f05d40bd102a85a86cc3c4" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.297474 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be9325937ac87ec492489ef34007a1e234ff7037a0f05d40bd102a85a86cc3c4"} err="failed to get container status \"be9325937ac87ec492489ef34007a1e234ff7037a0f05d40bd102a85a86cc3c4\": rpc error: code = NotFound desc = could not find container \"be9325937ac87ec492489ef34007a1e234ff7037a0f05d40bd102a85a86cc3c4\": container with ID starting with be9325937ac87ec492489ef34007a1e234ff7037a0f05d40bd102a85a86cc3c4 not found: ID does not exist" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.297496 4865 scope.go:117] "RemoveContainer" containerID="dee85f35d78a0b30018c179a53dd39cbb1b0edfa9897290cd13fdc9680461239" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 
22:56:52.297805 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dee85f35d78a0b30018c179a53dd39cbb1b0edfa9897290cd13fdc9680461239"} err="failed to get container status \"dee85f35d78a0b30018c179a53dd39cbb1b0edfa9897290cd13fdc9680461239\": rpc error: code = NotFound desc = could not find container \"dee85f35d78a0b30018c179a53dd39cbb1b0edfa9897290cd13fdc9680461239\": container with ID starting with dee85f35d78a0b30018c179a53dd39cbb1b0edfa9897290cd13fdc9680461239 not found: ID does not exist" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.297826 4865 scope.go:117] "RemoveContainer" containerID="4a1d5f61857de97cd302040f99ba0cf8f6aa30fa344ee9bb5d486222bb59280f" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.298107 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a1d5f61857de97cd302040f99ba0cf8f6aa30fa344ee9bb5d486222bb59280f"} err="failed to get container status \"4a1d5f61857de97cd302040f99ba0cf8f6aa30fa344ee9bb5d486222bb59280f\": rpc error: code = NotFound desc = could not find container \"4a1d5f61857de97cd302040f99ba0cf8f6aa30fa344ee9bb5d486222bb59280f\": container with ID starting with 4a1d5f61857de97cd302040f99ba0cf8f6aa30fa344ee9bb5d486222bb59280f not found: ID does not exist" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.298130 4865 scope.go:117] "RemoveContainer" containerID="48168de44ee347280252a85da7abba068922ce7558bd01c8e1d11af13f305dd0" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.298528 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48168de44ee347280252a85da7abba068922ce7558bd01c8e1d11af13f305dd0"} err="failed to get container status \"48168de44ee347280252a85da7abba068922ce7558bd01c8e1d11af13f305dd0\": rpc error: code = NotFound desc = could not find container \"48168de44ee347280252a85da7abba068922ce7558bd01c8e1d11af13f305dd0\": container with ID starting with 
48168de44ee347280252a85da7abba068922ce7558bd01c8e1d11af13f305dd0 not found: ID does not exist" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.298551 4865 scope.go:117] "RemoveContainer" containerID="fdf0a2e5286d172d17780b0ba829a7b1365b3fcb130069d7497b207041e54d67" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.298856 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdf0a2e5286d172d17780b0ba829a7b1365b3fcb130069d7497b207041e54d67"} err="failed to get container status \"fdf0a2e5286d172d17780b0ba829a7b1365b3fcb130069d7497b207041e54d67\": rpc error: code = NotFound desc = could not find container \"fdf0a2e5286d172d17780b0ba829a7b1365b3fcb130069d7497b207041e54d67\": container with ID starting with fdf0a2e5286d172d17780b0ba829a7b1365b3fcb130069d7497b207041e54d67 not found: ID does not exist" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.298876 4865 scope.go:117] "RemoveContainer" containerID="f763f86f21e0ae26d6da8694383bf65bf9d2ee2cc3f6331559455f18c1a5ea12" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.299100 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f763f86f21e0ae26d6da8694383bf65bf9d2ee2cc3f6331559455f18c1a5ea12"} err="failed to get container status \"f763f86f21e0ae26d6da8694383bf65bf9d2ee2cc3f6331559455f18c1a5ea12\": rpc error: code = NotFound desc = could not find container \"f763f86f21e0ae26d6da8694383bf65bf9d2ee2cc3f6331559455f18c1a5ea12\": container with ID starting with f763f86f21e0ae26d6da8694383bf65bf9d2ee2cc3f6331559455f18c1a5ea12 not found: ID does not exist" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.299120 4865 scope.go:117] "RemoveContainer" containerID="3fbb61043059110013736d3d693e6d602b199a4c62907436a4eb88aae192cd51" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.299468 4865 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3fbb61043059110013736d3d693e6d602b199a4c62907436a4eb88aae192cd51"} err="failed to get container status \"3fbb61043059110013736d3d693e6d602b199a4c62907436a4eb88aae192cd51\": rpc error: code = NotFound desc = could not find container \"3fbb61043059110013736d3d693e6d602b199a4c62907436a4eb88aae192cd51\": container with ID starting with 3fbb61043059110013736d3d693e6d602b199a4c62907436a4eb88aae192cd51 not found: ID does not exist" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.299490 4865 scope.go:117] "RemoveContainer" containerID="decf362b373d3e8cc82caf2afe0405a61ec72f794680345ee6bd4d7acee32cd0" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.299683 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"decf362b373d3e8cc82caf2afe0405a61ec72f794680345ee6bd4d7acee32cd0"} err="failed to get container status \"decf362b373d3e8cc82caf2afe0405a61ec72f794680345ee6bd4d7acee32cd0\": rpc error: code = NotFound desc = could not find container \"decf362b373d3e8cc82caf2afe0405a61ec72f794680345ee6bd4d7acee32cd0\": container with ID starting with decf362b373d3e8cc82caf2afe0405a61ec72f794680345ee6bd4d7acee32cd0 not found: ID does not exist" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.299703 4865 scope.go:117] "RemoveContainer" containerID="8fffded71eee2859beb8f01894e183602492f5c41373ef0e23ee150159d01ae4" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.299929 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fffded71eee2859beb8f01894e183602492f5c41373ef0e23ee150159d01ae4"} err="failed to get container status \"8fffded71eee2859beb8f01894e183602492f5c41373ef0e23ee150159d01ae4\": rpc error: code = NotFound desc = could not find container \"8fffded71eee2859beb8f01894e183602492f5c41373ef0e23ee150159d01ae4\": container with ID starting with 8fffded71eee2859beb8f01894e183602492f5c41373ef0e23ee150159d01ae4 not found: ID does not 
exist" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.299950 4865 scope.go:117] "RemoveContainer" containerID="fa26c8d94eede6f6ed5e13a46ffa4f979642a1029ecd0b9fa73572eb3d73152d" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.300181 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa26c8d94eede6f6ed5e13a46ffa4f979642a1029ecd0b9fa73572eb3d73152d"} err="failed to get container status \"fa26c8d94eede6f6ed5e13a46ffa4f979642a1029ecd0b9fa73572eb3d73152d\": rpc error: code = NotFound desc = could not find container \"fa26c8d94eede6f6ed5e13a46ffa4f979642a1029ecd0b9fa73572eb3d73152d\": container with ID starting with fa26c8d94eede6f6ed5e13a46ffa4f979642a1029ecd0b9fa73572eb3d73152d not found: ID does not exist" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.426450 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9" path="/var/lib/kubelet/pods/2d3c5ea1-6d46-4c2e-acca-9157f8bf14b9/volumes" Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.938155 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8675j" event={"ID":"e5994d2f-5ff8-4aea-a269-caed13a42cdf","Type":"ContainerStarted","Data":"fc411958282db98c8792babd663092e5737402fad4548526939b988760fc9959"} Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.938537 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8675j" event={"ID":"e5994d2f-5ff8-4aea-a269-caed13a42cdf","Type":"ContainerStarted","Data":"8f0c08ed47e5aaa44a39d6f8d13aab40a16dd0518e0ce084a47a8db5f88263f0"} Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.938558 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8675j" event={"ID":"e5994d2f-5ff8-4aea-a269-caed13a42cdf","Type":"ContainerStarted","Data":"2f26293ccaa12c0667ef84c63271162b45753aff61aa5481a84159f28fa58aa7"} Feb 16 
22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.938571 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8675j" event={"ID":"e5994d2f-5ff8-4aea-a269-caed13a42cdf","Type":"ContainerStarted","Data":"74f1e93fd5de8b7fb29c22f10a9dff4e18acd6524c02c4d729c2dfa08fe041af"} Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.938581 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8675j" event={"ID":"e5994d2f-5ff8-4aea-a269-caed13a42cdf","Type":"ContainerStarted","Data":"779ac42b0b331206d33e7df896e39bfc02f7e496320bd16cf73dd3eee1012d67"} Feb 16 22:56:52 crc kubenswrapper[4865]: I0216 22:56:52.938591 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8675j" event={"ID":"e5994d2f-5ff8-4aea-a269-caed13a42cdf","Type":"ContainerStarted","Data":"ce4dae24ee45a897bbc908a9449af2993efb26af8ef4a1a65e6e0fe93f5949e0"} Feb 16 22:56:54 crc kubenswrapper[4865]: I0216 22:56:54.238702 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-vzmtx" Feb 16 22:56:54 crc kubenswrapper[4865]: I0216 22:56:54.315114 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-d87nj"] Feb 16 22:56:55 crc kubenswrapper[4865]: I0216 22:56:55.972584 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8675j" event={"ID":"e5994d2f-5ff8-4aea-a269-caed13a42cdf","Type":"ContainerStarted","Data":"eb4d76c671f7f03711da3844c40cabf0036309de97f4e2301d08e77a360728d5"} Feb 16 22:56:57 crc kubenswrapper[4865]: I0216 22:56:57.995322 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8675j" event={"ID":"e5994d2f-5ff8-4aea-a269-caed13a42cdf","Type":"ContainerStarted","Data":"f0eacad7097aeab255b6f4849dcaa8737493ea8a8a62aa0fb3b467257876b61c"} Feb 16 22:56:57 crc 
kubenswrapper[4865]: I0216 22:56:57.996001 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8675j" Feb 16 22:56:58 crc kubenswrapper[4865]: I0216 22:56:58.032330 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8675j" Feb 16 22:56:58 crc kubenswrapper[4865]: I0216 22:56:58.034555 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-8675j" podStartSLOduration=7.034539944 podStartE2EDuration="7.034539944s" podCreationTimestamp="2026-02-16 22:56:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:56:58.033418612 +0000 UTC m=+658.357125653" watchObservedRunningTime="2026-02-16 22:56:58.034539944 +0000 UTC m=+658.358246925" Feb 16 22:56:59 crc kubenswrapper[4865]: I0216 22:56:59.003230 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8675j" Feb 16 22:56:59 crc kubenswrapper[4865]: I0216 22:56:59.003342 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8675j" Feb 16 22:56:59 crc kubenswrapper[4865]: I0216 22:56:59.051060 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8675j" Feb 16 22:57:03 crc kubenswrapper[4865]: I0216 22:57:03.415670 4865 scope.go:117] "RemoveContainer" containerID="02963b00310cc6f9ac823cd9173971a4405f25fc58cfcf66177c33764842acd2" Feb 16 22:57:03 crc kubenswrapper[4865]: E0216 22:57:03.416992 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-tqmsq_openshift-multus(518e6107-6873-4bd2-86a6-e422763483ec)\"" 
pod="openshift-multus/multus-tqmsq" podUID="518e6107-6873-4bd2-86a6-e422763483ec" Feb 16 22:57:16 crc kubenswrapper[4865]: I0216 22:57:16.414836 4865 scope.go:117] "RemoveContainer" containerID="02963b00310cc6f9ac823cd9173971a4405f25fc58cfcf66177c33764842acd2" Feb 16 22:57:17 crc kubenswrapper[4865]: I0216 22:57:17.155025 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tqmsq_518e6107-6873-4bd2-86a6-e422763483ec/kube-multus/2.log" Feb 16 22:57:17 crc kubenswrapper[4865]: I0216 22:57:17.155529 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-tqmsq" event={"ID":"518e6107-6873-4bd2-86a6-e422763483ec","Type":"ContainerStarted","Data":"fcb77ae10ec76aee60458bdc519347c76ae548ec82ea1dddd2098d88ca27e163"} Feb 16 22:57:19 crc kubenswrapper[4865]: I0216 22:57:19.365357 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-d87nj" podUID="0d54cdef-872b-4b15-ad66-92a5aa695143" containerName="registry" containerID="cri-o://51792a1cfc49d0c3e081e3c64a3c53e04b225685b7aee2566cb60a2108129332" gracePeriod=30 Feb 16 22:57:19 crc kubenswrapper[4865]: I0216 22:57:19.844010 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-d87nj" Feb 16 22:57:19 crc kubenswrapper[4865]: I0216 22:57:19.922951 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0d54cdef-872b-4b15-ad66-92a5aa695143-bound-sa-token\") pod \"0d54cdef-872b-4b15-ad66-92a5aa695143\" (UID: \"0d54cdef-872b-4b15-ad66-92a5aa695143\") " Feb 16 22:57:19 crc kubenswrapper[4865]: I0216 22:57:19.923028 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0d54cdef-872b-4b15-ad66-92a5aa695143-trusted-ca\") pod \"0d54cdef-872b-4b15-ad66-92a5aa695143\" (UID: \"0d54cdef-872b-4b15-ad66-92a5aa695143\") " Feb 16 22:57:19 crc kubenswrapper[4865]: I0216 22:57:19.923088 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jsbj7\" (UniqueName: \"kubernetes.io/projected/0d54cdef-872b-4b15-ad66-92a5aa695143-kube-api-access-jsbj7\") pod \"0d54cdef-872b-4b15-ad66-92a5aa695143\" (UID: \"0d54cdef-872b-4b15-ad66-92a5aa695143\") " Feb 16 22:57:19 crc kubenswrapper[4865]: I0216 22:57:19.923180 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0d54cdef-872b-4b15-ad66-92a5aa695143-ca-trust-extracted\") pod \"0d54cdef-872b-4b15-ad66-92a5aa695143\" (UID: \"0d54cdef-872b-4b15-ad66-92a5aa695143\") " Feb 16 22:57:19 crc kubenswrapper[4865]: I0216 22:57:19.923425 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"0d54cdef-872b-4b15-ad66-92a5aa695143\" (UID: \"0d54cdef-872b-4b15-ad66-92a5aa695143\") " Feb 16 22:57:19 crc kubenswrapper[4865]: I0216 22:57:19.923537 4865 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0d54cdef-872b-4b15-ad66-92a5aa695143-registry-certificates\") pod \"0d54cdef-872b-4b15-ad66-92a5aa695143\" (UID: \"0d54cdef-872b-4b15-ad66-92a5aa695143\") " Feb 16 22:57:19 crc kubenswrapper[4865]: I0216 22:57:19.924418 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d54cdef-872b-4b15-ad66-92a5aa695143-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "0d54cdef-872b-4b15-ad66-92a5aa695143" (UID: "0d54cdef-872b-4b15-ad66-92a5aa695143"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:57:19 crc kubenswrapper[4865]: I0216 22:57:19.924657 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d54cdef-872b-4b15-ad66-92a5aa695143-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "0d54cdef-872b-4b15-ad66-92a5aa695143" (UID: "0d54cdef-872b-4b15-ad66-92a5aa695143"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:57:19 crc kubenswrapper[4865]: I0216 22:57:19.924721 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0d54cdef-872b-4b15-ad66-92a5aa695143-registry-tls\") pod \"0d54cdef-872b-4b15-ad66-92a5aa695143\" (UID: \"0d54cdef-872b-4b15-ad66-92a5aa695143\") " Feb 16 22:57:19 crc kubenswrapper[4865]: I0216 22:57:19.924818 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0d54cdef-872b-4b15-ad66-92a5aa695143-installation-pull-secrets\") pod \"0d54cdef-872b-4b15-ad66-92a5aa695143\" (UID: \"0d54cdef-872b-4b15-ad66-92a5aa695143\") " Feb 16 22:57:19 crc kubenswrapper[4865]: I0216 22:57:19.925872 4865 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0d54cdef-872b-4b15-ad66-92a5aa695143-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 16 22:57:19 crc kubenswrapper[4865]: I0216 22:57:19.925904 4865 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0d54cdef-872b-4b15-ad66-92a5aa695143-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 22:57:19 crc kubenswrapper[4865]: I0216 22:57:19.930529 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d54cdef-872b-4b15-ad66-92a5aa695143-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "0d54cdef-872b-4b15-ad66-92a5aa695143" (UID: "0d54cdef-872b-4b15-ad66-92a5aa695143"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:57:19 crc kubenswrapper[4865]: I0216 22:57:19.933042 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d54cdef-872b-4b15-ad66-92a5aa695143-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "0d54cdef-872b-4b15-ad66-92a5aa695143" (UID: "0d54cdef-872b-4b15-ad66-92a5aa695143"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:57:19 crc kubenswrapper[4865]: I0216 22:57:19.933214 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d54cdef-872b-4b15-ad66-92a5aa695143-kube-api-access-jsbj7" (OuterVolumeSpecName: "kube-api-access-jsbj7") pod "0d54cdef-872b-4b15-ad66-92a5aa695143" (UID: "0d54cdef-872b-4b15-ad66-92a5aa695143"). InnerVolumeSpecName "kube-api-access-jsbj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:57:19 crc kubenswrapper[4865]: I0216 22:57:19.933365 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d54cdef-872b-4b15-ad66-92a5aa695143-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "0d54cdef-872b-4b15-ad66-92a5aa695143" (UID: "0d54cdef-872b-4b15-ad66-92a5aa695143"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:57:19 crc kubenswrapper[4865]: I0216 22:57:19.946215 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "0d54cdef-872b-4b15-ad66-92a5aa695143" (UID: "0d54cdef-872b-4b15-ad66-92a5aa695143"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 22:57:19 crc kubenswrapper[4865]: I0216 22:57:19.958374 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d54cdef-872b-4b15-ad66-92a5aa695143-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "0d54cdef-872b-4b15-ad66-92a5aa695143" (UID: "0d54cdef-872b-4b15-ad66-92a5aa695143"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:57:20 crc kubenswrapper[4865]: I0216 22:57:20.026398 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jsbj7\" (UniqueName: \"kubernetes.io/projected/0d54cdef-872b-4b15-ad66-92a5aa695143-kube-api-access-jsbj7\") on node \"crc\" DevicePath \"\"" Feb 16 22:57:20 crc kubenswrapper[4865]: I0216 22:57:20.026441 4865 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0d54cdef-872b-4b15-ad66-92a5aa695143-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 16 22:57:20 crc kubenswrapper[4865]: I0216 22:57:20.026458 4865 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0d54cdef-872b-4b15-ad66-92a5aa695143-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 16 22:57:20 crc kubenswrapper[4865]: I0216 22:57:20.026476 4865 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0d54cdef-872b-4b15-ad66-92a5aa695143-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 16 22:57:20 crc kubenswrapper[4865]: I0216 22:57:20.026490 4865 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0d54cdef-872b-4b15-ad66-92a5aa695143-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 16 22:57:20 crc kubenswrapper[4865]: I0216 22:57:20.199981 4865 generic.go:334] "Generic (PLEG): container 
finished" podID="0d54cdef-872b-4b15-ad66-92a5aa695143" containerID="51792a1cfc49d0c3e081e3c64a3c53e04b225685b7aee2566cb60a2108129332" exitCode=0 Feb 16 22:57:20 crc kubenswrapper[4865]: I0216 22:57:20.200053 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-d87nj" event={"ID":"0d54cdef-872b-4b15-ad66-92a5aa695143","Type":"ContainerDied","Data":"51792a1cfc49d0c3e081e3c64a3c53e04b225685b7aee2566cb60a2108129332"} Feb 16 22:57:20 crc kubenswrapper[4865]: I0216 22:57:20.200102 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-d87nj" event={"ID":"0d54cdef-872b-4b15-ad66-92a5aa695143","Type":"ContainerDied","Data":"b4ba6a06d515386b96ec6b7bd421e558797536539a3da759b9c6f8d3dd630b04"} Feb 16 22:57:20 crc kubenswrapper[4865]: I0216 22:57:20.200136 4865 scope.go:117] "RemoveContainer" containerID="51792a1cfc49d0c3e081e3c64a3c53e04b225685b7aee2566cb60a2108129332" Feb 16 22:57:20 crc kubenswrapper[4865]: I0216 22:57:20.200202 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-d87nj" Feb 16 22:57:20 crc kubenswrapper[4865]: I0216 22:57:20.228402 4865 scope.go:117] "RemoveContainer" containerID="51792a1cfc49d0c3e081e3c64a3c53e04b225685b7aee2566cb60a2108129332" Feb 16 22:57:20 crc kubenswrapper[4865]: E0216 22:57:20.229093 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51792a1cfc49d0c3e081e3c64a3c53e04b225685b7aee2566cb60a2108129332\": container with ID starting with 51792a1cfc49d0c3e081e3c64a3c53e04b225685b7aee2566cb60a2108129332 not found: ID does not exist" containerID="51792a1cfc49d0c3e081e3c64a3c53e04b225685b7aee2566cb60a2108129332" Feb 16 22:57:20 crc kubenswrapper[4865]: I0216 22:57:20.229236 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51792a1cfc49d0c3e081e3c64a3c53e04b225685b7aee2566cb60a2108129332"} err="failed to get container status \"51792a1cfc49d0c3e081e3c64a3c53e04b225685b7aee2566cb60a2108129332\": rpc error: code = NotFound desc = could not find container \"51792a1cfc49d0c3e081e3c64a3c53e04b225685b7aee2566cb60a2108129332\": container with ID starting with 51792a1cfc49d0c3e081e3c64a3c53e04b225685b7aee2566cb60a2108129332 not found: ID does not exist" Feb 16 22:57:20 crc kubenswrapper[4865]: I0216 22:57:20.256302 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-d87nj"] Feb 16 22:57:20 crc kubenswrapper[4865]: I0216 22:57:20.266099 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-d87nj"] Feb 16 22:57:20 crc kubenswrapper[4865]: I0216 22:57:20.427623 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d54cdef-872b-4b15-ad66-92a5aa695143" path="/var/lib/kubelet/pods/0d54cdef-872b-4b15-ad66-92a5aa695143/volumes" Feb 16 22:57:21 crc kubenswrapper[4865]: I0216 
22:57:21.499857 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8675j" Feb 16 22:57:30 crc kubenswrapper[4865]: I0216 22:57:30.287123 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecag792d"] Feb 16 22:57:30 crc kubenswrapper[4865]: E0216 22:57:30.288437 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d54cdef-872b-4b15-ad66-92a5aa695143" containerName="registry" Feb 16 22:57:30 crc kubenswrapper[4865]: I0216 22:57:30.288460 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d54cdef-872b-4b15-ad66-92a5aa695143" containerName="registry" Feb 16 22:57:30 crc kubenswrapper[4865]: I0216 22:57:30.288581 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d54cdef-872b-4b15-ad66-92a5aa695143" containerName="registry" Feb 16 22:57:30 crc kubenswrapper[4865]: I0216 22:57:30.290405 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecag792d" Feb 16 22:57:30 crc kubenswrapper[4865]: I0216 22:57:30.294250 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 16 22:57:30 crc kubenswrapper[4865]: I0216 22:57:30.299402 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecag792d"] Feb 16 22:57:30 crc kubenswrapper[4865]: I0216 22:57:30.323234 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ac3cdf44-e500-4a8d-ba2d-d43a02f67bad-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecag792d\" (UID: \"ac3cdf44-e500-4a8d-ba2d-d43a02f67bad\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecag792d" Feb 16 22:57:30 crc kubenswrapper[4865]: I0216 22:57:30.323317 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5q8jh\" (UniqueName: \"kubernetes.io/projected/ac3cdf44-e500-4a8d-ba2d-d43a02f67bad-kube-api-access-5q8jh\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecag792d\" (UID: \"ac3cdf44-e500-4a8d-ba2d-d43a02f67bad\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecag792d" Feb 16 22:57:30 crc kubenswrapper[4865]: I0216 22:57:30.323397 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ac3cdf44-e500-4a8d-ba2d-d43a02f67bad-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecag792d\" (UID: \"ac3cdf44-e500-4a8d-ba2d-d43a02f67bad\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecag792d" Feb 16 22:57:30 crc kubenswrapper[4865]: 
I0216 22:57:30.424120 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ac3cdf44-e500-4a8d-ba2d-d43a02f67bad-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecag792d\" (UID: \"ac3cdf44-e500-4a8d-ba2d-d43a02f67bad\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecag792d" Feb 16 22:57:30 crc kubenswrapper[4865]: I0216 22:57:30.424180 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ac3cdf44-e500-4a8d-ba2d-d43a02f67bad-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecag792d\" (UID: \"ac3cdf44-e500-4a8d-ba2d-d43a02f67bad\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecag792d" Feb 16 22:57:30 crc kubenswrapper[4865]: I0216 22:57:30.424211 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5q8jh\" (UniqueName: \"kubernetes.io/projected/ac3cdf44-e500-4a8d-ba2d-d43a02f67bad-kube-api-access-5q8jh\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecag792d\" (UID: \"ac3cdf44-e500-4a8d-ba2d-d43a02f67bad\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecag792d" Feb 16 22:57:30 crc kubenswrapper[4865]: I0216 22:57:30.424772 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ac3cdf44-e500-4a8d-ba2d-d43a02f67bad-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecag792d\" (UID: \"ac3cdf44-e500-4a8d-ba2d-d43a02f67bad\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecag792d" Feb 16 22:57:30 crc kubenswrapper[4865]: I0216 22:57:30.424908 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/ac3cdf44-e500-4a8d-ba2d-d43a02f67bad-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecag792d\" (UID: \"ac3cdf44-e500-4a8d-ba2d-d43a02f67bad\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecag792d" Feb 16 22:57:30 crc kubenswrapper[4865]: I0216 22:57:30.447324 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5q8jh\" (UniqueName: \"kubernetes.io/projected/ac3cdf44-e500-4a8d-ba2d-d43a02f67bad-kube-api-access-5q8jh\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecag792d\" (UID: \"ac3cdf44-e500-4a8d-ba2d-d43a02f67bad\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecag792d" Feb 16 22:57:30 crc kubenswrapper[4865]: I0216 22:57:30.614106 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecag792d" Feb 16 22:57:30 crc kubenswrapper[4865]: I0216 22:57:30.867991 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecag792d"] Feb 16 22:57:30 crc kubenswrapper[4865]: W0216 22:57:30.872658 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac3cdf44_e500_4a8d_ba2d_d43a02f67bad.slice/crio-61a37867dd2a0828fc332075d2d6bcb5321f723ae9a473e26843d53fc52fbcc0 WatchSource:0}: Error finding container 61a37867dd2a0828fc332075d2d6bcb5321f723ae9a473e26843d53fc52fbcc0: Status 404 returned error can't find the container with id 61a37867dd2a0828fc332075d2d6bcb5321f723ae9a473e26843d53fc52fbcc0 Feb 16 22:57:31 crc kubenswrapper[4865]: I0216 22:57:31.288396 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecag792d" 
event={"ID":"ac3cdf44-e500-4a8d-ba2d-d43a02f67bad","Type":"ContainerStarted","Data":"deca7b5c51f603a5837ed0b86dca41ccdd86b902631c86f6448f2b84ab5520e8"} Feb 16 22:57:31 crc kubenswrapper[4865]: I0216 22:57:31.289826 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecag792d" event={"ID":"ac3cdf44-e500-4a8d-ba2d-d43a02f67bad","Type":"ContainerStarted","Data":"61a37867dd2a0828fc332075d2d6bcb5321f723ae9a473e26843d53fc52fbcc0"} Feb 16 22:57:32 crc kubenswrapper[4865]: I0216 22:57:32.301200 4865 generic.go:334] "Generic (PLEG): container finished" podID="ac3cdf44-e500-4a8d-ba2d-d43a02f67bad" containerID="deca7b5c51f603a5837ed0b86dca41ccdd86b902631c86f6448f2b84ab5520e8" exitCode=0 Feb 16 22:57:32 crc kubenswrapper[4865]: I0216 22:57:32.301346 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecag792d" event={"ID":"ac3cdf44-e500-4a8d-ba2d-d43a02f67bad","Type":"ContainerDied","Data":"deca7b5c51f603a5837ed0b86dca41ccdd86b902631c86f6448f2b84ab5520e8"} Feb 16 22:57:34 crc kubenswrapper[4865]: I0216 22:57:34.326760 4865 generic.go:334] "Generic (PLEG): container finished" podID="ac3cdf44-e500-4a8d-ba2d-d43a02f67bad" containerID="f1130a2e6b7c3632a8503b10a13b18a0ce226d67433fc0ae4a9aa82a11b9405d" exitCode=0 Feb 16 22:57:34 crc kubenswrapper[4865]: I0216 22:57:34.326861 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecag792d" event={"ID":"ac3cdf44-e500-4a8d-ba2d-d43a02f67bad","Type":"ContainerDied","Data":"f1130a2e6b7c3632a8503b10a13b18a0ce226d67433fc0ae4a9aa82a11b9405d"} Feb 16 22:57:35 crc kubenswrapper[4865]: I0216 22:57:35.340532 4865 generic.go:334] "Generic (PLEG): container finished" podID="ac3cdf44-e500-4a8d-ba2d-d43a02f67bad" containerID="359f95f67b6a563b854782e9d75828b082e70cde69c67e8d13c8ff566c4d2163" exitCode=0 
Feb 16 22:57:35 crc kubenswrapper[4865]: I0216 22:57:35.340617 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecag792d" event={"ID":"ac3cdf44-e500-4a8d-ba2d-d43a02f67bad","Type":"ContainerDied","Data":"359f95f67b6a563b854782e9d75828b082e70cde69c67e8d13c8ff566c4d2163"} Feb 16 22:57:36 crc kubenswrapper[4865]: I0216 22:57:36.736398 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecag792d" Feb 16 22:57:36 crc kubenswrapper[4865]: I0216 22:57:36.937607 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ac3cdf44-e500-4a8d-ba2d-d43a02f67bad-bundle\") pod \"ac3cdf44-e500-4a8d-ba2d-d43a02f67bad\" (UID: \"ac3cdf44-e500-4a8d-ba2d-d43a02f67bad\") " Feb 16 22:57:36 crc kubenswrapper[4865]: I0216 22:57:36.937716 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5q8jh\" (UniqueName: \"kubernetes.io/projected/ac3cdf44-e500-4a8d-ba2d-d43a02f67bad-kube-api-access-5q8jh\") pod \"ac3cdf44-e500-4a8d-ba2d-d43a02f67bad\" (UID: \"ac3cdf44-e500-4a8d-ba2d-d43a02f67bad\") " Feb 16 22:57:36 crc kubenswrapper[4865]: I0216 22:57:36.937864 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ac3cdf44-e500-4a8d-ba2d-d43a02f67bad-util\") pod \"ac3cdf44-e500-4a8d-ba2d-d43a02f67bad\" (UID: \"ac3cdf44-e500-4a8d-ba2d-d43a02f67bad\") " Feb 16 22:57:36 crc kubenswrapper[4865]: I0216 22:57:36.939088 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac3cdf44-e500-4a8d-ba2d-d43a02f67bad-bundle" (OuterVolumeSpecName: "bundle") pod "ac3cdf44-e500-4a8d-ba2d-d43a02f67bad" (UID: "ac3cdf44-e500-4a8d-ba2d-d43a02f67bad"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:57:36 crc kubenswrapper[4865]: I0216 22:57:36.951643 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac3cdf44-e500-4a8d-ba2d-d43a02f67bad-kube-api-access-5q8jh" (OuterVolumeSpecName: "kube-api-access-5q8jh") pod "ac3cdf44-e500-4a8d-ba2d-d43a02f67bad" (UID: "ac3cdf44-e500-4a8d-ba2d-d43a02f67bad"). InnerVolumeSpecName "kube-api-access-5q8jh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:57:36 crc kubenswrapper[4865]: I0216 22:57:36.969761 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac3cdf44-e500-4a8d-ba2d-d43a02f67bad-util" (OuterVolumeSpecName: "util") pod "ac3cdf44-e500-4a8d-ba2d-d43a02f67bad" (UID: "ac3cdf44-e500-4a8d-ba2d-d43a02f67bad"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:57:37 crc kubenswrapper[4865]: I0216 22:57:37.040424 4865 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ac3cdf44-e500-4a8d-ba2d-d43a02f67bad-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 22:57:37 crc kubenswrapper[4865]: I0216 22:57:37.040485 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5q8jh\" (UniqueName: \"kubernetes.io/projected/ac3cdf44-e500-4a8d-ba2d-d43a02f67bad-kube-api-access-5q8jh\") on node \"crc\" DevicePath \"\"" Feb 16 22:57:37 crc kubenswrapper[4865]: I0216 22:57:37.040509 4865 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ac3cdf44-e500-4a8d-ba2d-d43a02f67bad-util\") on node \"crc\" DevicePath \"\"" Feb 16 22:57:37 crc kubenswrapper[4865]: I0216 22:57:37.364017 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecag792d" 
event={"ID":"ac3cdf44-e500-4a8d-ba2d-d43a02f67bad","Type":"ContainerDied","Data":"61a37867dd2a0828fc332075d2d6bcb5321f723ae9a473e26843d53fc52fbcc0"} Feb 16 22:57:37 crc kubenswrapper[4865]: I0216 22:57:37.364570 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61a37867dd2a0828fc332075d2d6bcb5321f723ae9a473e26843d53fc52fbcc0" Feb 16 22:57:37 crc kubenswrapper[4865]: I0216 22:57:37.364142 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecag792d" Feb 16 22:57:41 crc kubenswrapper[4865]: I0216 22:57:41.890817 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-hj5j9"] Feb 16 22:57:41 crc kubenswrapper[4865]: E0216 22:57:41.891418 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac3cdf44-e500-4a8d-ba2d-d43a02f67bad" containerName="pull" Feb 16 22:57:41 crc kubenswrapper[4865]: I0216 22:57:41.891431 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac3cdf44-e500-4a8d-ba2d-d43a02f67bad" containerName="pull" Feb 16 22:57:41 crc kubenswrapper[4865]: E0216 22:57:41.891444 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac3cdf44-e500-4a8d-ba2d-d43a02f67bad" containerName="util" Feb 16 22:57:41 crc kubenswrapper[4865]: I0216 22:57:41.891450 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac3cdf44-e500-4a8d-ba2d-d43a02f67bad" containerName="util" Feb 16 22:57:41 crc kubenswrapper[4865]: E0216 22:57:41.891464 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac3cdf44-e500-4a8d-ba2d-d43a02f67bad" containerName="extract" Feb 16 22:57:41 crc kubenswrapper[4865]: I0216 22:57:41.891471 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac3cdf44-e500-4a8d-ba2d-d43a02f67bad" containerName="extract" Feb 16 22:57:41 crc kubenswrapper[4865]: I0216 22:57:41.891570 4865 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="ac3cdf44-e500-4a8d-ba2d-d43a02f67bad" containerName="extract" Feb 16 22:57:41 crc kubenswrapper[4865]: I0216 22:57:41.891971 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-hj5j9" Feb 16 22:57:41 crc kubenswrapper[4865]: I0216 22:57:41.894639 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 16 22:57:41 crc kubenswrapper[4865]: I0216 22:57:41.894666 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-wzthq" Feb 16 22:57:41 crc kubenswrapper[4865]: I0216 22:57:41.894753 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 16 22:57:41 crc kubenswrapper[4865]: I0216 22:57:41.915212 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-hj5j9"] Feb 16 22:57:42 crc kubenswrapper[4865]: I0216 22:57:42.019758 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shxkm\" (UniqueName: \"kubernetes.io/projected/5c652e54-0a32-41f0-844b-4f00cdb36ec3-kube-api-access-shxkm\") pod \"nmstate-operator-694c9596b7-hj5j9\" (UID: \"5c652e54-0a32-41f0-844b-4f00cdb36ec3\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-hj5j9" Feb 16 22:57:42 crc kubenswrapper[4865]: I0216 22:57:42.121954 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shxkm\" (UniqueName: \"kubernetes.io/projected/5c652e54-0a32-41f0-844b-4f00cdb36ec3-kube-api-access-shxkm\") pod \"nmstate-operator-694c9596b7-hj5j9\" (UID: \"5c652e54-0a32-41f0-844b-4f00cdb36ec3\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-hj5j9" Feb 16 22:57:42 crc kubenswrapper[4865]: I0216 22:57:42.151246 4865 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-shxkm\" (UniqueName: \"kubernetes.io/projected/5c652e54-0a32-41f0-844b-4f00cdb36ec3-kube-api-access-shxkm\") pod \"nmstate-operator-694c9596b7-hj5j9\" (UID: \"5c652e54-0a32-41f0-844b-4f00cdb36ec3\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-hj5j9" Feb 16 22:57:42 crc kubenswrapper[4865]: I0216 22:57:42.213698 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-hj5j9" Feb 16 22:57:42 crc kubenswrapper[4865]: I0216 22:57:42.479982 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-hj5j9"] Feb 16 22:57:43 crc kubenswrapper[4865]: I0216 22:57:43.411386 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-hj5j9" event={"ID":"5c652e54-0a32-41f0-844b-4f00cdb36ec3","Type":"ContainerStarted","Data":"c2d18c16f39463b35848aa57dc72801f4f367ddae5fe411f411e8f00cfaf7fdd"} Feb 16 22:57:45 crc kubenswrapper[4865]: I0216 22:57:45.433866 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-hj5j9" event={"ID":"5c652e54-0a32-41f0-844b-4f00cdb36ec3","Type":"ContainerStarted","Data":"b28ea0a7be4920e5a03a0e92671d1746bfd347ad7c35d3036eebf45f1a0e93f4"} Feb 16 22:57:45 crc kubenswrapper[4865]: I0216 22:57:45.464376 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-694c9596b7-hj5j9" podStartSLOduration=2.545502368 podStartE2EDuration="4.464350853s" podCreationTimestamp="2026-02-16 22:57:41 +0000 UTC" firstStartedPulling="2026-02-16 22:57:42.519556553 +0000 UTC m=+702.843263524" lastFinishedPulling="2026-02-16 22:57:44.438405048 +0000 UTC m=+704.762112009" observedRunningTime="2026-02-16 22:57:45.461836899 +0000 UTC m=+705.785543860" watchObservedRunningTime="2026-02-16 22:57:45.464350853 +0000 UTC m=+705.788057814" Feb 16 22:57:45 crc 
kubenswrapper[4865]: I0216 22:57:45.664824 4865 patch_prober.go:28] interesting pod/machine-config-daemon-7sl6f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 22:57:45 crc kubenswrapper[4865]: I0216 22:57:45.664931 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 22:57:50 crc kubenswrapper[4865]: I0216 22:57:50.561304 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-4jnsb"] Feb 16 22:57:50 crc kubenswrapper[4865]: I0216 22:57:50.562873 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-4jnsb" Feb 16 22:57:50 crc kubenswrapper[4865]: I0216 22:57:50.568434 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-xf6d5" Feb 16 22:57:50 crc kubenswrapper[4865]: I0216 22:57:50.591244 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-4jnsb"] Feb 16 22:57:50 crc kubenswrapper[4865]: I0216 22:57:50.594896 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-7fpp6"] Feb 16 22:57:50 crc kubenswrapper[4865]: I0216 22:57:50.595915 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-7fpp6" Feb 16 22:57:50 crc kubenswrapper[4865]: I0216 22:57:50.601066 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 16 22:57:50 crc kubenswrapper[4865]: I0216 22:57:50.611374 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-7fpp6"] Feb 16 22:57:50 crc kubenswrapper[4865]: I0216 22:57:50.637841 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-pmm2s"] Feb 16 22:57:50 crc kubenswrapper[4865]: I0216 22:57:50.638775 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-pmm2s" Feb 16 22:57:50 crc kubenswrapper[4865]: I0216 22:57:50.663878 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8s6h\" (UniqueName: \"kubernetes.io/projected/1b2d9b3b-4c11-4bae-9930-68b45a15ba52-kube-api-access-f8s6h\") pod \"nmstate-metrics-58c85c668d-4jnsb\" (UID: \"1b2d9b3b-4c11-4bae-9930-68b45a15ba52\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-4jnsb" Feb 16 22:57:50 crc kubenswrapper[4865]: I0216 22:57:50.741310 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5q9z6"] Feb 16 22:57:50 crc kubenswrapper[4865]: I0216 22:57:50.742020 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5q9z6" Feb 16 22:57:50 crc kubenswrapper[4865]: I0216 22:57:50.744091 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-wpwcf" Feb 16 22:57:50 crc kubenswrapper[4865]: I0216 22:57:50.744584 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 16 22:57:50 crc kubenswrapper[4865]: I0216 22:57:50.746602 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 16 22:57:50 crc kubenswrapper[4865]: I0216 22:57:50.760813 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5q9z6"] Feb 16 22:57:50 crc kubenswrapper[4865]: I0216 22:57:50.765969 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/1ab27e8f-8d04-461e-8726-1ca46394c9b6-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-7fpp6\" (UID: \"1ab27e8f-8d04-461e-8726-1ca46394c9b6\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-7fpp6" Feb 16 22:57:50 crc kubenswrapper[4865]: I0216 22:57:50.766061 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/212784bf-c832-42e4-92c0-b1c81994982f-ovs-socket\") pod \"nmstate-handler-pmm2s\" (UID: \"212784bf-c832-42e4-92c0-b1c81994982f\") " pod="openshift-nmstate/nmstate-handler-pmm2s" Feb 16 22:57:50 crc kubenswrapper[4865]: I0216 22:57:50.766093 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spsm9\" (UniqueName: \"kubernetes.io/projected/1ab27e8f-8d04-461e-8726-1ca46394c9b6-kube-api-access-spsm9\") pod \"nmstate-webhook-866bcb46dc-7fpp6\" (UID: \"1ab27e8f-8d04-461e-8726-1ca46394c9b6\") " 
pod="openshift-nmstate/nmstate-webhook-866bcb46dc-7fpp6" Feb 16 22:57:50 crc kubenswrapper[4865]: I0216 22:57:50.766128 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/212784bf-c832-42e4-92c0-b1c81994982f-nmstate-lock\") pod \"nmstate-handler-pmm2s\" (UID: \"212784bf-c832-42e4-92c0-b1c81994982f\") " pod="openshift-nmstate/nmstate-handler-pmm2s" Feb 16 22:57:50 crc kubenswrapper[4865]: I0216 22:57:50.766210 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8s6h\" (UniqueName: \"kubernetes.io/projected/1b2d9b3b-4c11-4bae-9930-68b45a15ba52-kube-api-access-f8s6h\") pod \"nmstate-metrics-58c85c668d-4jnsb\" (UID: \"1b2d9b3b-4c11-4bae-9930-68b45a15ba52\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-4jnsb" Feb 16 22:57:50 crc kubenswrapper[4865]: I0216 22:57:50.766249 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2pdf\" (UniqueName: \"kubernetes.io/projected/212784bf-c832-42e4-92c0-b1c81994982f-kube-api-access-m2pdf\") pod \"nmstate-handler-pmm2s\" (UID: \"212784bf-c832-42e4-92c0-b1c81994982f\") " pod="openshift-nmstate/nmstate-handler-pmm2s" Feb 16 22:57:50 crc kubenswrapper[4865]: I0216 22:57:50.766274 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/212784bf-c832-42e4-92c0-b1c81994982f-dbus-socket\") pod \"nmstate-handler-pmm2s\" (UID: \"212784bf-c832-42e4-92c0-b1c81994982f\") " pod="openshift-nmstate/nmstate-handler-pmm2s" Feb 16 22:57:50 crc kubenswrapper[4865]: I0216 22:57:50.789466 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8s6h\" (UniqueName: \"kubernetes.io/projected/1b2d9b3b-4c11-4bae-9930-68b45a15ba52-kube-api-access-f8s6h\") pod \"nmstate-metrics-58c85c668d-4jnsb\" (UID: 
\"1b2d9b3b-4c11-4bae-9930-68b45a15ba52\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-4jnsb" Feb 16 22:57:50 crc kubenswrapper[4865]: I0216 22:57:50.867027 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/212784bf-c832-42e4-92c0-b1c81994982f-ovs-socket\") pod \"nmstate-handler-pmm2s\" (UID: \"212784bf-c832-42e4-92c0-b1c81994982f\") " pod="openshift-nmstate/nmstate-handler-pmm2s" Feb 16 22:57:50 crc kubenswrapper[4865]: I0216 22:57:50.867578 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spsm9\" (UniqueName: \"kubernetes.io/projected/1ab27e8f-8d04-461e-8726-1ca46394c9b6-kube-api-access-spsm9\") pod \"nmstate-webhook-866bcb46dc-7fpp6\" (UID: \"1ab27e8f-8d04-461e-8726-1ca46394c9b6\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-7fpp6" Feb 16 22:57:50 crc kubenswrapper[4865]: I0216 22:57:50.867611 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/bee98b00-b363-4ff6-986b-33b5086b8453-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-5q9z6\" (UID: \"bee98b00-b363-4ff6-986b-33b5086b8453\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5q9z6" Feb 16 22:57:50 crc kubenswrapper[4865]: I0216 22:57:50.867636 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/212784bf-c832-42e4-92c0-b1c81994982f-nmstate-lock\") pod \"nmstate-handler-pmm2s\" (UID: \"212784bf-c832-42e4-92c0-b1c81994982f\") " pod="openshift-nmstate/nmstate-handler-pmm2s" Feb 16 22:57:50 crc kubenswrapper[4865]: I0216 22:57:50.867683 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2pdf\" (UniqueName: \"kubernetes.io/projected/212784bf-c832-42e4-92c0-b1c81994982f-kube-api-access-m2pdf\") pod 
\"nmstate-handler-pmm2s\" (UID: \"212784bf-c832-42e4-92c0-b1c81994982f\") " pod="openshift-nmstate/nmstate-handler-pmm2s" Feb 16 22:57:50 crc kubenswrapper[4865]: I0216 22:57:50.867704 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/212784bf-c832-42e4-92c0-b1c81994982f-dbus-socket\") pod \"nmstate-handler-pmm2s\" (UID: \"212784bf-c832-42e4-92c0-b1c81994982f\") " pod="openshift-nmstate/nmstate-handler-pmm2s" Feb 16 22:57:50 crc kubenswrapper[4865]: I0216 22:57:50.867720 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/bee98b00-b363-4ff6-986b-33b5086b8453-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-5q9z6\" (UID: \"bee98b00-b363-4ff6-986b-33b5086b8453\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5q9z6" Feb 16 22:57:50 crc kubenswrapper[4865]: I0216 22:57:50.867746 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/1ab27e8f-8d04-461e-8726-1ca46394c9b6-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-7fpp6\" (UID: \"1ab27e8f-8d04-461e-8726-1ca46394c9b6\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-7fpp6" Feb 16 22:57:50 crc kubenswrapper[4865]: I0216 22:57:50.867772 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q27tw\" (UniqueName: \"kubernetes.io/projected/bee98b00-b363-4ff6-986b-33b5086b8453-kube-api-access-q27tw\") pod \"nmstate-console-plugin-5c78fc5d65-5q9z6\" (UID: \"bee98b00-b363-4ff6-986b-33b5086b8453\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5q9z6" Feb 16 22:57:50 crc kubenswrapper[4865]: I0216 22:57:50.867294 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: 
\"kubernetes.io/host-path/212784bf-c832-42e4-92c0-b1c81994982f-ovs-socket\") pod \"nmstate-handler-pmm2s\" (UID: \"212784bf-c832-42e4-92c0-b1c81994982f\") " pod="openshift-nmstate/nmstate-handler-pmm2s" Feb 16 22:57:50 crc kubenswrapper[4865]: I0216 22:57:50.868450 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/212784bf-c832-42e4-92c0-b1c81994982f-nmstate-lock\") pod \"nmstate-handler-pmm2s\" (UID: \"212784bf-c832-42e4-92c0-b1c81994982f\") " pod="openshift-nmstate/nmstate-handler-pmm2s" Feb 16 22:57:50 crc kubenswrapper[4865]: I0216 22:57:50.868593 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/212784bf-c832-42e4-92c0-b1c81994982f-dbus-socket\") pod \"nmstate-handler-pmm2s\" (UID: \"212784bf-c832-42e4-92c0-b1c81994982f\") " pod="openshift-nmstate/nmstate-handler-pmm2s" Feb 16 22:57:50 crc kubenswrapper[4865]: I0216 22:57:50.885270 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/1ab27e8f-8d04-461e-8726-1ca46394c9b6-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-7fpp6\" (UID: \"1ab27e8f-8d04-461e-8726-1ca46394c9b6\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-7fpp6" Feb 16 22:57:50 crc kubenswrapper[4865]: I0216 22:57:50.889739 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-4jnsb" Feb 16 22:57:50 crc kubenswrapper[4865]: I0216 22:57:50.892814 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2pdf\" (UniqueName: \"kubernetes.io/projected/212784bf-c832-42e4-92c0-b1c81994982f-kube-api-access-m2pdf\") pod \"nmstate-handler-pmm2s\" (UID: \"212784bf-c832-42e4-92c0-b1c81994982f\") " pod="openshift-nmstate/nmstate-handler-pmm2s" Feb 16 22:57:50 crc kubenswrapper[4865]: I0216 22:57:50.898302 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spsm9\" (UniqueName: \"kubernetes.io/projected/1ab27e8f-8d04-461e-8726-1ca46394c9b6-kube-api-access-spsm9\") pod \"nmstate-webhook-866bcb46dc-7fpp6\" (UID: \"1ab27e8f-8d04-461e-8726-1ca46394c9b6\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-7fpp6" Feb 16 22:57:50 crc kubenswrapper[4865]: I0216 22:57:50.912487 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-7fpp6" Feb 16 22:57:50 crc kubenswrapper[4865]: I0216 22:57:50.948606 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6bc694f886-ssgzb"] Feb 16 22:57:50 crc kubenswrapper[4865]: I0216 22:57:50.949612 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6bc694f886-ssgzb" Feb 16 22:57:50 crc kubenswrapper[4865]: I0216 22:57:50.953605 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6bc694f886-ssgzb"] Feb 16 22:57:50 crc kubenswrapper[4865]: I0216 22:57:50.960461 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-pmm2s" Feb 16 22:57:50 crc kubenswrapper[4865]: I0216 22:57:50.968711 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/bee98b00-b363-4ff6-986b-33b5086b8453-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-5q9z6\" (UID: \"bee98b00-b363-4ff6-986b-33b5086b8453\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5q9z6" Feb 16 22:57:50 crc kubenswrapper[4865]: I0216 22:57:50.968794 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/bee98b00-b363-4ff6-986b-33b5086b8453-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-5q9z6\" (UID: \"bee98b00-b363-4ff6-986b-33b5086b8453\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5q9z6" Feb 16 22:57:50 crc kubenswrapper[4865]: I0216 22:57:50.968827 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q27tw\" (UniqueName: \"kubernetes.io/projected/bee98b00-b363-4ff6-986b-33b5086b8453-kube-api-access-q27tw\") pod \"nmstate-console-plugin-5c78fc5d65-5q9z6\" (UID: \"bee98b00-b363-4ff6-986b-33b5086b8453\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5q9z6" Feb 16 22:57:50 crc kubenswrapper[4865]: I0216 22:57:50.972901 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/bee98b00-b363-4ff6-986b-33b5086b8453-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-5q9z6\" (UID: \"bee98b00-b363-4ff6-986b-33b5086b8453\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5q9z6" Feb 16 22:57:50 crc kubenswrapper[4865]: I0216 22:57:50.987173 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/bee98b00-b363-4ff6-986b-33b5086b8453-plugin-serving-cert\") 
pod \"nmstate-console-plugin-5c78fc5d65-5q9z6\" (UID: \"bee98b00-b363-4ff6-986b-33b5086b8453\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5q9z6" Feb 16 22:57:50 crc kubenswrapper[4865]: I0216 22:57:50.990761 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q27tw\" (UniqueName: \"kubernetes.io/projected/bee98b00-b363-4ff6-986b-33b5086b8453-kube-api-access-q27tw\") pod \"nmstate-console-plugin-5c78fc5d65-5q9z6\" (UID: \"bee98b00-b363-4ff6-986b-33b5086b8453\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5q9z6" Feb 16 22:57:51 crc kubenswrapper[4865]: I0216 22:57:51.071524 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/233ff211-4dab-4c43-a098-4e932287cf15-oauth-serving-cert\") pod \"console-6bc694f886-ssgzb\" (UID: \"233ff211-4dab-4c43-a098-4e932287cf15\") " pod="openshift-console/console-6bc694f886-ssgzb" Feb 16 22:57:51 crc kubenswrapper[4865]: I0216 22:57:51.072005 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/233ff211-4dab-4c43-a098-4e932287cf15-console-config\") pod \"console-6bc694f886-ssgzb\" (UID: \"233ff211-4dab-4c43-a098-4e932287cf15\") " pod="openshift-console/console-6bc694f886-ssgzb" Feb 16 22:57:51 crc kubenswrapper[4865]: I0216 22:57:51.072096 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/233ff211-4dab-4c43-a098-4e932287cf15-console-oauth-config\") pod \"console-6bc694f886-ssgzb\" (UID: \"233ff211-4dab-4c43-a098-4e932287cf15\") " pod="openshift-console/console-6bc694f886-ssgzb" Feb 16 22:57:51 crc kubenswrapper[4865]: I0216 22:57:51.072255 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/233ff211-4dab-4c43-a098-4e932287cf15-console-serving-cert\") pod \"console-6bc694f886-ssgzb\" (UID: \"233ff211-4dab-4c43-a098-4e932287cf15\") " pod="openshift-console/console-6bc694f886-ssgzb" Feb 16 22:57:51 crc kubenswrapper[4865]: I0216 22:57:51.072363 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwxwp\" (UniqueName: \"kubernetes.io/projected/233ff211-4dab-4c43-a098-4e932287cf15-kube-api-access-bwxwp\") pod \"console-6bc694f886-ssgzb\" (UID: \"233ff211-4dab-4c43-a098-4e932287cf15\") " pod="openshift-console/console-6bc694f886-ssgzb" Feb 16 22:57:51 crc kubenswrapper[4865]: I0216 22:57:51.072214 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5q9z6" Feb 16 22:57:51 crc kubenswrapper[4865]: I0216 22:57:51.072389 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/233ff211-4dab-4c43-a098-4e932287cf15-service-ca\") pod \"console-6bc694f886-ssgzb\" (UID: \"233ff211-4dab-4c43-a098-4e932287cf15\") " pod="openshift-console/console-6bc694f886-ssgzb" Feb 16 22:57:51 crc kubenswrapper[4865]: I0216 22:57:51.072432 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/233ff211-4dab-4c43-a098-4e932287cf15-trusted-ca-bundle\") pod \"console-6bc694f886-ssgzb\" (UID: \"233ff211-4dab-4c43-a098-4e932287cf15\") " pod="openshift-console/console-6bc694f886-ssgzb" Feb 16 22:57:51 crc kubenswrapper[4865]: I0216 22:57:51.177710 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/233ff211-4dab-4c43-a098-4e932287cf15-oauth-serving-cert\") pod \"console-6bc694f886-ssgzb\" (UID: 
\"233ff211-4dab-4c43-a098-4e932287cf15\") " pod="openshift-console/console-6bc694f886-ssgzb" Feb 16 22:57:51 crc kubenswrapper[4865]: I0216 22:57:51.177791 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/233ff211-4dab-4c43-a098-4e932287cf15-console-config\") pod \"console-6bc694f886-ssgzb\" (UID: \"233ff211-4dab-4c43-a098-4e932287cf15\") " pod="openshift-console/console-6bc694f886-ssgzb" Feb 16 22:57:51 crc kubenswrapper[4865]: I0216 22:57:51.177843 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/233ff211-4dab-4c43-a098-4e932287cf15-console-oauth-config\") pod \"console-6bc694f886-ssgzb\" (UID: \"233ff211-4dab-4c43-a098-4e932287cf15\") " pod="openshift-console/console-6bc694f886-ssgzb" Feb 16 22:57:51 crc kubenswrapper[4865]: I0216 22:57:51.177871 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/233ff211-4dab-4c43-a098-4e932287cf15-console-serving-cert\") pod \"console-6bc694f886-ssgzb\" (UID: \"233ff211-4dab-4c43-a098-4e932287cf15\") " pod="openshift-console/console-6bc694f886-ssgzb" Feb 16 22:57:51 crc kubenswrapper[4865]: I0216 22:57:51.177902 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwxwp\" (UniqueName: \"kubernetes.io/projected/233ff211-4dab-4c43-a098-4e932287cf15-kube-api-access-bwxwp\") pod \"console-6bc694f886-ssgzb\" (UID: \"233ff211-4dab-4c43-a098-4e932287cf15\") " pod="openshift-console/console-6bc694f886-ssgzb" Feb 16 22:57:51 crc kubenswrapper[4865]: I0216 22:57:51.177926 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/233ff211-4dab-4c43-a098-4e932287cf15-service-ca\") pod \"console-6bc694f886-ssgzb\" (UID: 
\"233ff211-4dab-4c43-a098-4e932287cf15\") " pod="openshift-console/console-6bc694f886-ssgzb" Feb 16 22:57:51 crc kubenswrapper[4865]: I0216 22:57:51.177955 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/233ff211-4dab-4c43-a098-4e932287cf15-trusted-ca-bundle\") pod \"console-6bc694f886-ssgzb\" (UID: \"233ff211-4dab-4c43-a098-4e932287cf15\") " pod="openshift-console/console-6bc694f886-ssgzb" Feb 16 22:57:51 crc kubenswrapper[4865]: I0216 22:57:51.178999 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/233ff211-4dab-4c43-a098-4e932287cf15-oauth-serving-cert\") pod \"console-6bc694f886-ssgzb\" (UID: \"233ff211-4dab-4c43-a098-4e932287cf15\") " pod="openshift-console/console-6bc694f886-ssgzb" Feb 16 22:57:51 crc kubenswrapper[4865]: I0216 22:57:51.184950 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-4jnsb"] Feb 16 22:57:51 crc kubenswrapper[4865]: I0216 22:57:51.186837 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/233ff211-4dab-4c43-a098-4e932287cf15-service-ca\") pod \"console-6bc694f886-ssgzb\" (UID: \"233ff211-4dab-4c43-a098-4e932287cf15\") " pod="openshift-console/console-6bc694f886-ssgzb" Feb 16 22:57:51 crc kubenswrapper[4865]: I0216 22:57:51.187511 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/233ff211-4dab-4c43-a098-4e932287cf15-trusted-ca-bundle\") pod \"console-6bc694f886-ssgzb\" (UID: \"233ff211-4dab-4c43-a098-4e932287cf15\") " pod="openshift-console/console-6bc694f886-ssgzb" Feb 16 22:57:51 crc kubenswrapper[4865]: I0216 22:57:51.187538 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/233ff211-4dab-4c43-a098-4e932287cf15-console-config\") pod \"console-6bc694f886-ssgzb\" (UID: \"233ff211-4dab-4c43-a098-4e932287cf15\") " pod="openshift-console/console-6bc694f886-ssgzb" Feb 16 22:57:51 crc kubenswrapper[4865]: I0216 22:57:51.187839 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/233ff211-4dab-4c43-a098-4e932287cf15-console-serving-cert\") pod \"console-6bc694f886-ssgzb\" (UID: \"233ff211-4dab-4c43-a098-4e932287cf15\") " pod="openshift-console/console-6bc694f886-ssgzb" Feb 16 22:57:51 crc kubenswrapper[4865]: I0216 22:57:51.187894 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/233ff211-4dab-4c43-a098-4e932287cf15-console-oauth-config\") pod \"console-6bc694f886-ssgzb\" (UID: \"233ff211-4dab-4c43-a098-4e932287cf15\") " pod="openshift-console/console-6bc694f886-ssgzb" Feb 16 22:57:51 crc kubenswrapper[4865]: I0216 22:57:51.200681 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwxwp\" (UniqueName: \"kubernetes.io/projected/233ff211-4dab-4c43-a098-4e932287cf15-kube-api-access-bwxwp\") pod \"console-6bc694f886-ssgzb\" (UID: \"233ff211-4dab-4c43-a098-4e932287cf15\") " pod="openshift-console/console-6bc694f886-ssgzb" Feb 16 22:57:51 crc kubenswrapper[4865]: I0216 22:57:51.221633 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-7fpp6"] Feb 16 22:57:51 crc kubenswrapper[4865]: W0216 22:57:51.227540 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ab27e8f_8d04_461e_8726_1ca46394c9b6.slice/crio-4bea03f8e4acc6e08a78197a16860216988a3a55d6aa7ffb27ac11824179cb59 WatchSource:0}: Error finding container 4bea03f8e4acc6e08a78197a16860216988a3a55d6aa7ffb27ac11824179cb59: Status 404 returned 
error can't find the container with id 4bea03f8e4acc6e08a78197a16860216988a3a55d6aa7ffb27ac11824179cb59 Feb 16 22:57:51 crc kubenswrapper[4865]: I0216 22:57:51.297492 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5q9z6"] Feb 16 22:57:51 crc kubenswrapper[4865]: I0216 22:57:51.299107 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6bc694f886-ssgzb" Feb 16 22:57:51 crc kubenswrapper[4865]: W0216 22:57:51.304818 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbee98b00_b363_4ff6_986b_33b5086b8453.slice/crio-3c5b3a8bc00925f1a82d7b173ff346afe097bc86e5db7d34a23c161ee8b6822b WatchSource:0}: Error finding container 3c5b3a8bc00925f1a82d7b173ff346afe097bc86e5db7d34a23c161ee8b6822b: Status 404 returned error can't find the container with id 3c5b3a8bc00925f1a82d7b173ff346afe097bc86e5db7d34a23c161ee8b6822b Feb 16 22:57:51 crc kubenswrapper[4865]: I0216 22:57:51.490134 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-7fpp6" event={"ID":"1ab27e8f-8d04-461e-8726-1ca46394c9b6","Type":"ContainerStarted","Data":"4bea03f8e4acc6e08a78197a16860216988a3a55d6aa7ffb27ac11824179cb59"} Feb 16 22:57:51 crc kubenswrapper[4865]: I0216 22:57:51.493914 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-4jnsb" event={"ID":"1b2d9b3b-4c11-4bae-9930-68b45a15ba52","Type":"ContainerStarted","Data":"86da04f5c9e8e3c91147e6ceb6f40753a506466d8f6db3899d8bddf5583d8d64"} Feb 16 22:57:51 crc kubenswrapper[4865]: I0216 22:57:51.495647 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5q9z6" 
event={"ID":"bee98b00-b363-4ff6-986b-33b5086b8453","Type":"ContainerStarted","Data":"3c5b3a8bc00925f1a82d7b173ff346afe097bc86e5db7d34a23c161ee8b6822b"} Feb 16 22:57:51 crc kubenswrapper[4865]: I0216 22:57:51.497013 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-pmm2s" event={"ID":"212784bf-c832-42e4-92c0-b1c81994982f","Type":"ContainerStarted","Data":"a143b96de67ae5e981e46bbc3cb442f4896310d18a3ecbe60f81a6298e0a4f25"} Feb 16 22:57:51 crc kubenswrapper[4865]: I0216 22:57:51.533399 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6bc694f886-ssgzb"] Feb 16 22:57:51 crc kubenswrapper[4865]: W0216 22:57:51.538136 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod233ff211_4dab_4c43_a098_4e932287cf15.slice/crio-61cb5c28a8eaf5231638d0ca24d8f79dfdd1468540147e233121d445523d14a6 WatchSource:0}: Error finding container 61cb5c28a8eaf5231638d0ca24d8f79dfdd1468540147e233121d445523d14a6: Status 404 returned error can't find the container with id 61cb5c28a8eaf5231638d0ca24d8f79dfdd1468540147e233121d445523d14a6 Feb 16 22:57:52 crc kubenswrapper[4865]: I0216 22:57:52.505075 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6bc694f886-ssgzb" event={"ID":"233ff211-4dab-4c43-a098-4e932287cf15","Type":"ContainerStarted","Data":"39f545b2d4e24b46babce1935eac6adf285474ae9b2f7df2fb8add5749f8d5cd"} Feb 16 22:57:52 crc kubenswrapper[4865]: I0216 22:57:52.505524 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6bc694f886-ssgzb" event={"ID":"233ff211-4dab-4c43-a098-4e932287cf15","Type":"ContainerStarted","Data":"61cb5c28a8eaf5231638d0ca24d8f79dfdd1468540147e233121d445523d14a6"} Feb 16 22:57:54 crc kubenswrapper[4865]: I0216 22:57:54.520858 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-7fpp6" 
event={"ID":"1ab27e8f-8d04-461e-8726-1ca46394c9b6","Type":"ContainerStarted","Data":"a649396f6e2344d74c3b138bb877657dd0b3a626da9649ff2e46264669b03ce1"} Feb 16 22:57:54 crc kubenswrapper[4865]: I0216 22:57:54.521704 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-7fpp6" Feb 16 22:57:54 crc kubenswrapper[4865]: I0216 22:57:54.524031 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-4jnsb" event={"ID":"1b2d9b3b-4c11-4bae-9930-68b45a15ba52","Type":"ContainerStarted","Data":"2662595faf2b762c42b35bbc72c63dcfd2c24473080bdb8088bdc34f3d2ba871"} Feb 16 22:57:54 crc kubenswrapper[4865]: I0216 22:57:54.525989 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5q9z6" event={"ID":"bee98b00-b363-4ff6-986b-33b5086b8453","Type":"ContainerStarted","Data":"534115b77b166e5c5f6ff853612067bf8ef41a67ed6d8aad656697cee86f4e07"} Feb 16 22:57:54 crc kubenswrapper[4865]: I0216 22:57:54.552744 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6bc694f886-ssgzb" podStartSLOduration=4.552705856 podStartE2EDuration="4.552705856s" podCreationTimestamp="2026-02-16 22:57:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:57:52.532593349 +0000 UTC m=+712.856300310" watchObservedRunningTime="2026-02-16 22:57:54.552705856 +0000 UTC m=+714.876412817" Feb 16 22:57:54 crc kubenswrapper[4865]: I0216 22:57:54.556555 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-7fpp6" podStartSLOduration=1.5425228899999999 podStartE2EDuration="4.556518177s" podCreationTimestamp="2026-02-16 22:57:50 +0000 UTC" firstStartedPulling="2026-02-16 22:57:51.233420163 +0000 UTC m=+711.557127124" 
lastFinishedPulling="2026-02-16 22:57:54.24741542 +0000 UTC m=+714.571122411" observedRunningTime="2026-02-16 22:57:54.544338972 +0000 UTC m=+714.868045933" watchObservedRunningTime="2026-02-16 22:57:54.556518177 +0000 UTC m=+714.880225168" Feb 16 22:57:54 crc kubenswrapper[4865]: I0216 22:57:54.569709 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5q9z6" podStartSLOduration=1.63179951 podStartE2EDuration="4.56967122s" podCreationTimestamp="2026-02-16 22:57:50 +0000 UTC" firstStartedPulling="2026-02-16 22:57:51.309458737 +0000 UTC m=+711.633165718" lastFinishedPulling="2026-02-16 22:57:54.247330467 +0000 UTC m=+714.571037428" observedRunningTime="2026-02-16 22:57:54.564009545 +0000 UTC m=+714.887716526" watchObservedRunningTime="2026-02-16 22:57:54.56967122 +0000 UTC m=+714.893378221" Feb 16 22:57:55 crc kubenswrapper[4865]: I0216 22:57:55.550523 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-pmm2s" event={"ID":"212784bf-c832-42e4-92c0-b1c81994982f","Type":"ContainerStarted","Data":"dd2e98a3e9083f5969fdfa572383c58834d8abed7c531a449a0857ab479b6b27"} Feb 16 22:57:55 crc kubenswrapper[4865]: I0216 22:57:55.552170 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-pmm2s" Feb 16 22:57:55 crc kubenswrapper[4865]: I0216 22:57:55.578271 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-pmm2s" podStartSLOduration=2.332244309 podStartE2EDuration="5.578237291s" podCreationTimestamp="2026-02-16 22:57:50 +0000 UTC" firstStartedPulling="2026-02-16 22:57:50.999958665 +0000 UTC m=+711.323665626" lastFinishedPulling="2026-02-16 22:57:54.245951617 +0000 UTC m=+714.569658608" observedRunningTime="2026-02-16 22:57:55.574019748 +0000 UTC m=+715.897726739" watchObservedRunningTime="2026-02-16 22:57:55.578237291 +0000 UTC m=+715.901944252" Feb 
16 22:57:57 crc kubenswrapper[4865]: I0216 22:57:57.566928 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-4jnsb" event={"ID":"1b2d9b3b-4c11-4bae-9930-68b45a15ba52","Type":"ContainerStarted","Data":"41018b7c738f43d294a8c3a6a99ac35c01ff467a2028e7d17cf5fe6c44513bc3"} Feb 16 22:57:57 crc kubenswrapper[4865]: I0216 22:57:57.594355 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58c85c668d-4jnsb" podStartSLOduration=1.831860375 podStartE2EDuration="7.594328652s" podCreationTimestamp="2026-02-16 22:57:50 +0000 UTC" firstStartedPulling="2026-02-16 22:57:51.208415665 +0000 UTC m=+711.532122616" lastFinishedPulling="2026-02-16 22:57:56.970883922 +0000 UTC m=+717.294590893" observedRunningTime="2026-02-16 22:57:57.590686636 +0000 UTC m=+717.914393607" watchObservedRunningTime="2026-02-16 22:57:57.594328652 +0000 UTC m=+717.918035623" Feb 16 22:58:00 crc kubenswrapper[4865]: I0216 22:58:00.998925 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-pmm2s" Feb 16 22:58:01 crc kubenswrapper[4865]: I0216 22:58:01.299617 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6bc694f886-ssgzb" Feb 16 22:58:01 crc kubenswrapper[4865]: I0216 22:58:01.299965 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6bc694f886-ssgzb" Feb 16 22:58:01 crc kubenswrapper[4865]: I0216 22:58:01.316108 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6bc694f886-ssgzb" Feb 16 22:58:01 crc kubenswrapper[4865]: I0216 22:58:01.605736 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6bc694f886-ssgzb" Feb 16 22:58:01 crc kubenswrapper[4865]: I0216 22:58:01.672694 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-console/console-f9d7485db-5m56v"] Feb 16 22:58:10 crc kubenswrapper[4865]: I0216 22:58:10.920328 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-7fpp6" Feb 16 22:58:15 crc kubenswrapper[4865]: I0216 22:58:15.664899 4865 patch_prober.go:28] interesting pod/machine-config-daemon-7sl6f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 22:58:15 crc kubenswrapper[4865]: I0216 22:58:15.665788 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 22:58:26 crc kubenswrapper[4865]: I0216 22:58:26.738212 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-5m56v" podUID="566ba776-350d-4994-948d-bbbf37ae5ddc" containerName="console" containerID="cri-o://2cd2a109efba186098d7066442e3df952fc51c2a2033c02c734f8dd5c8bd698e" gracePeriod=15 Feb 16 22:58:27 crc kubenswrapper[4865]: I0216 22:58:27.225429 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-5m56v_566ba776-350d-4994-948d-bbbf37ae5ddc/console/0.log" Feb 16 22:58:27 crc kubenswrapper[4865]: I0216 22:58:27.225531 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-5m56v" Feb 16 22:58:27 crc kubenswrapper[4865]: I0216 22:58:27.292113 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2136zxlv"] Feb 16 22:58:27 crc kubenswrapper[4865]: E0216 22:58:27.292418 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="566ba776-350d-4994-948d-bbbf37ae5ddc" containerName="console" Feb 16 22:58:27 crc kubenswrapper[4865]: I0216 22:58:27.292432 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="566ba776-350d-4994-948d-bbbf37ae5ddc" containerName="console" Feb 16 22:58:27 crc kubenswrapper[4865]: I0216 22:58:27.292857 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="566ba776-350d-4994-948d-bbbf37ae5ddc" containerName="console" Feb 16 22:58:27 crc kubenswrapper[4865]: I0216 22:58:27.293742 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2136zxlv" Feb 16 22:58:27 crc kubenswrapper[4865]: I0216 22:58:27.296510 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 16 22:58:27 crc kubenswrapper[4865]: I0216 22:58:27.305794 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2136zxlv"] Feb 16 22:58:27 crc kubenswrapper[4865]: I0216 22:58:27.319895 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f619952d-3fa5-48e1-a477-f4cbfb893bc1-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2136zxlv\" (UID: \"f619952d-3fa5-48e1-a477-f4cbfb893bc1\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2136zxlv" Feb 16 22:58:27 crc kubenswrapper[4865]: I0216 
22:58:27.320075 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crbs9\" (UniqueName: \"kubernetes.io/projected/f619952d-3fa5-48e1-a477-f4cbfb893bc1-kube-api-access-crbs9\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2136zxlv\" (UID: \"f619952d-3fa5-48e1-a477-f4cbfb893bc1\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2136zxlv" Feb 16 22:58:27 crc kubenswrapper[4865]: I0216 22:58:27.320181 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f619952d-3fa5-48e1-a477-f4cbfb893bc1-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2136zxlv\" (UID: \"f619952d-3fa5-48e1-a477-f4cbfb893bc1\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2136zxlv" Feb 16 22:58:27 crc kubenswrapper[4865]: I0216 22:58:27.421489 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/566ba776-350d-4994-948d-bbbf37ae5ddc-trusted-ca-bundle\") pod \"566ba776-350d-4994-948d-bbbf37ae5ddc\" (UID: \"566ba776-350d-4994-948d-bbbf37ae5ddc\") " Feb 16 22:58:27 crc kubenswrapper[4865]: I0216 22:58:27.421565 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/566ba776-350d-4994-948d-bbbf37ae5ddc-console-serving-cert\") pod \"566ba776-350d-4994-948d-bbbf37ae5ddc\" (UID: \"566ba776-350d-4994-948d-bbbf37ae5ddc\") " Feb 16 22:58:27 crc kubenswrapper[4865]: I0216 22:58:27.421593 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/566ba776-350d-4994-948d-bbbf37ae5ddc-console-config\") pod \"566ba776-350d-4994-948d-bbbf37ae5ddc\" (UID: 
\"566ba776-350d-4994-948d-bbbf37ae5ddc\") " Feb 16 22:58:27 crc kubenswrapper[4865]: I0216 22:58:27.421622 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/566ba776-350d-4994-948d-bbbf37ae5ddc-service-ca\") pod \"566ba776-350d-4994-948d-bbbf37ae5ddc\" (UID: \"566ba776-350d-4994-948d-bbbf37ae5ddc\") " Feb 16 22:58:27 crc kubenswrapper[4865]: I0216 22:58:27.421831 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swq4h\" (UniqueName: \"kubernetes.io/projected/566ba776-350d-4994-948d-bbbf37ae5ddc-kube-api-access-swq4h\") pod \"566ba776-350d-4994-948d-bbbf37ae5ddc\" (UID: \"566ba776-350d-4994-948d-bbbf37ae5ddc\") " Feb 16 22:58:27 crc kubenswrapper[4865]: I0216 22:58:27.421878 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/566ba776-350d-4994-948d-bbbf37ae5ddc-oauth-serving-cert\") pod \"566ba776-350d-4994-948d-bbbf37ae5ddc\" (UID: \"566ba776-350d-4994-948d-bbbf37ae5ddc\") " Feb 16 22:58:27 crc kubenswrapper[4865]: I0216 22:58:27.421923 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/566ba776-350d-4994-948d-bbbf37ae5ddc-console-oauth-config\") pod \"566ba776-350d-4994-948d-bbbf37ae5ddc\" (UID: \"566ba776-350d-4994-948d-bbbf37ae5ddc\") " Feb 16 22:58:27 crc kubenswrapper[4865]: I0216 22:58:27.422576 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/566ba776-350d-4994-948d-bbbf37ae5ddc-console-config" (OuterVolumeSpecName: "console-config") pod "566ba776-350d-4994-948d-bbbf37ae5ddc" (UID: "566ba776-350d-4994-948d-bbbf37ae5ddc"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:58:27 crc kubenswrapper[4865]: I0216 22:58:27.423136 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/566ba776-350d-4994-948d-bbbf37ae5ddc-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "566ba776-350d-4994-948d-bbbf37ae5ddc" (UID: "566ba776-350d-4994-948d-bbbf37ae5ddc"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:58:27 crc kubenswrapper[4865]: I0216 22:58:27.423422 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/566ba776-350d-4994-948d-bbbf37ae5ddc-service-ca" (OuterVolumeSpecName: "service-ca") pod "566ba776-350d-4994-948d-bbbf37ae5ddc" (UID: "566ba776-350d-4994-948d-bbbf37ae5ddc"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:58:27 crc kubenswrapper[4865]: I0216 22:58:27.423535 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f619952d-3fa5-48e1-a477-f4cbfb893bc1-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2136zxlv\" (UID: \"f619952d-3fa5-48e1-a477-f4cbfb893bc1\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2136zxlv" Feb 16 22:58:27 crc kubenswrapper[4865]: I0216 22:58:27.424132 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f619952d-3fa5-48e1-a477-f4cbfb893bc1-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2136zxlv\" (UID: \"f619952d-3fa5-48e1-a477-f4cbfb893bc1\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2136zxlv" Feb 16 22:58:27 crc kubenswrapper[4865]: I0216 22:58:27.424512 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" 
(UniqueName: \"kubernetes.io/empty-dir/f619952d-3fa5-48e1-a477-f4cbfb893bc1-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2136zxlv\" (UID: \"f619952d-3fa5-48e1-a477-f4cbfb893bc1\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2136zxlv" Feb 16 22:58:27 crc kubenswrapper[4865]: I0216 22:58:27.424909 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f619952d-3fa5-48e1-a477-f4cbfb893bc1-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2136zxlv\" (UID: \"f619952d-3fa5-48e1-a477-f4cbfb893bc1\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2136zxlv" Feb 16 22:58:27 crc kubenswrapper[4865]: I0216 22:58:27.425032 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crbs9\" (UniqueName: \"kubernetes.io/projected/f619952d-3fa5-48e1-a477-f4cbfb893bc1-kube-api-access-crbs9\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2136zxlv\" (UID: \"f619952d-3fa5-48e1-a477-f4cbfb893bc1\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2136zxlv" Feb 16 22:58:27 crc kubenswrapper[4865]: I0216 22:58:27.425122 4865 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/566ba776-350d-4994-948d-bbbf37ae5ddc-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 22:58:27 crc kubenswrapper[4865]: I0216 22:58:27.425140 4865 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/566ba776-350d-4994-948d-bbbf37ae5ddc-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 22:58:27 crc kubenswrapper[4865]: I0216 22:58:27.425152 4865 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/566ba776-350d-4994-948d-bbbf37ae5ddc-console-config\") on node \"crc\" DevicePath \"\"" Feb 16 22:58:27 crc kubenswrapper[4865]: I0216 22:58:27.425385 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/566ba776-350d-4994-948d-bbbf37ae5ddc-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "566ba776-350d-4994-948d-bbbf37ae5ddc" (UID: "566ba776-350d-4994-948d-bbbf37ae5ddc"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:58:27 crc kubenswrapper[4865]: I0216 22:58:27.439059 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/566ba776-350d-4994-948d-bbbf37ae5ddc-kube-api-access-swq4h" (OuterVolumeSpecName: "kube-api-access-swq4h") pod "566ba776-350d-4994-948d-bbbf37ae5ddc" (UID: "566ba776-350d-4994-948d-bbbf37ae5ddc"). InnerVolumeSpecName "kube-api-access-swq4h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:58:27 crc kubenswrapper[4865]: I0216 22:58:27.440433 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/566ba776-350d-4994-948d-bbbf37ae5ddc-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "566ba776-350d-4994-948d-bbbf37ae5ddc" (UID: "566ba776-350d-4994-948d-bbbf37ae5ddc"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:58:27 crc kubenswrapper[4865]: I0216 22:58:27.441142 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/566ba776-350d-4994-948d-bbbf37ae5ddc-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "566ba776-350d-4994-948d-bbbf37ae5ddc" (UID: "566ba776-350d-4994-948d-bbbf37ae5ddc"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:58:27 crc kubenswrapper[4865]: I0216 22:58:27.458512 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crbs9\" (UniqueName: \"kubernetes.io/projected/f619952d-3fa5-48e1-a477-f4cbfb893bc1-kube-api-access-crbs9\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2136zxlv\" (UID: \"f619952d-3fa5-48e1-a477-f4cbfb893bc1\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2136zxlv" Feb 16 22:58:27 crc kubenswrapper[4865]: I0216 22:58:27.527539 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swq4h\" (UniqueName: \"kubernetes.io/projected/566ba776-350d-4994-948d-bbbf37ae5ddc-kube-api-access-swq4h\") on node \"crc\" DevicePath \"\"" Feb 16 22:58:27 crc kubenswrapper[4865]: I0216 22:58:27.527837 4865 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/566ba776-350d-4994-948d-bbbf37ae5ddc-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 16 22:58:27 crc kubenswrapper[4865]: I0216 22:58:27.527993 4865 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/566ba776-350d-4994-948d-bbbf37ae5ddc-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 22:58:27 crc kubenswrapper[4865]: I0216 22:58:27.528149 4865 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/566ba776-350d-4994-948d-bbbf37ae5ddc-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 22:58:27 crc kubenswrapper[4865]: I0216 22:58:27.610426 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2136zxlv" Feb 16 22:58:27 crc kubenswrapper[4865]: I0216 22:58:27.820968 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-5m56v_566ba776-350d-4994-948d-bbbf37ae5ddc/console/0.log" Feb 16 22:58:27 crc kubenswrapper[4865]: I0216 22:58:27.821053 4865 generic.go:334] "Generic (PLEG): container finished" podID="566ba776-350d-4994-948d-bbbf37ae5ddc" containerID="2cd2a109efba186098d7066442e3df952fc51c2a2033c02c734f8dd5c8bd698e" exitCode=2 Feb 16 22:58:27 crc kubenswrapper[4865]: I0216 22:58:27.821113 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-5m56v" event={"ID":"566ba776-350d-4994-948d-bbbf37ae5ddc","Type":"ContainerDied","Data":"2cd2a109efba186098d7066442e3df952fc51c2a2033c02c734f8dd5c8bd698e"} Feb 16 22:58:27 crc kubenswrapper[4865]: I0216 22:58:27.821161 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-5m56v" event={"ID":"566ba776-350d-4994-948d-bbbf37ae5ddc","Type":"ContainerDied","Data":"99239deb099453e4a80ca1be970e47636af070cbe29f2f4d37eb93000d2d6952"} Feb 16 22:58:27 crc kubenswrapper[4865]: I0216 22:58:27.821190 4865 scope.go:117] "RemoveContainer" containerID="2cd2a109efba186098d7066442e3df952fc51c2a2033c02c734f8dd5c8bd698e" Feb 16 22:58:27 crc kubenswrapper[4865]: I0216 22:58:27.821471 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-5m56v" Feb 16 22:58:27 crc kubenswrapper[4865]: I0216 22:58:27.864093 4865 scope.go:117] "RemoveContainer" containerID="2cd2a109efba186098d7066442e3df952fc51c2a2033c02c734f8dd5c8bd698e" Feb 16 22:58:27 crc kubenswrapper[4865]: E0216 22:58:27.865510 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2cd2a109efba186098d7066442e3df952fc51c2a2033c02c734f8dd5c8bd698e\": container with ID starting with 2cd2a109efba186098d7066442e3df952fc51c2a2033c02c734f8dd5c8bd698e not found: ID does not exist" containerID="2cd2a109efba186098d7066442e3df952fc51c2a2033c02c734f8dd5c8bd698e" Feb 16 22:58:27 crc kubenswrapper[4865]: I0216 22:58:27.865561 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cd2a109efba186098d7066442e3df952fc51c2a2033c02c734f8dd5c8bd698e"} err="failed to get container status \"2cd2a109efba186098d7066442e3df952fc51c2a2033c02c734f8dd5c8bd698e\": rpc error: code = NotFound desc = could not find container \"2cd2a109efba186098d7066442e3df952fc51c2a2033c02c734f8dd5c8bd698e\": container with ID starting with 2cd2a109efba186098d7066442e3df952fc51c2a2033c02c734f8dd5c8bd698e not found: ID does not exist" Feb 16 22:58:27 crc kubenswrapper[4865]: I0216 22:58:27.868955 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-5m56v"] Feb 16 22:58:27 crc kubenswrapper[4865]: I0216 22:58:27.873179 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-5m56v"] Feb 16 22:58:27 crc kubenswrapper[4865]: I0216 22:58:27.925110 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2136zxlv"] Feb 16 22:58:28 crc kubenswrapper[4865]: I0216 22:58:28.430796 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="566ba776-350d-4994-948d-bbbf37ae5ddc" path="/var/lib/kubelet/pods/566ba776-350d-4994-948d-bbbf37ae5ddc/volumes" Feb 16 22:58:28 crc kubenswrapper[4865]: I0216 22:58:28.832991 4865 generic.go:334] "Generic (PLEG): container finished" podID="f619952d-3fa5-48e1-a477-f4cbfb893bc1" containerID="64cb90e25b841ba061dac8da838d96242272d955723d8c0937c01038af2dede3" exitCode=0 Feb 16 22:58:28 crc kubenswrapper[4865]: I0216 22:58:28.833145 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2136zxlv" event={"ID":"f619952d-3fa5-48e1-a477-f4cbfb893bc1","Type":"ContainerDied","Data":"64cb90e25b841ba061dac8da838d96242272d955723d8c0937c01038af2dede3"} Feb 16 22:58:28 crc kubenswrapper[4865]: I0216 22:58:28.833201 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2136zxlv" event={"ID":"f619952d-3fa5-48e1-a477-f4cbfb893bc1","Type":"ContainerStarted","Data":"58b59c284bcc9a4e9f81874ab1887301a30235d4010792e75b3813b611550391"} Feb 16 22:58:30 crc kubenswrapper[4865]: I0216 22:58:30.855002 4865 generic.go:334] "Generic (PLEG): container finished" podID="f619952d-3fa5-48e1-a477-f4cbfb893bc1" containerID="35d425749efa1fbb4381796d864dde7630a629ae8725417f9c26d6d8fec3caab" exitCode=0 Feb 16 22:58:30 crc kubenswrapper[4865]: I0216 22:58:30.855052 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2136zxlv" event={"ID":"f619952d-3fa5-48e1-a477-f4cbfb893bc1","Type":"ContainerDied","Data":"35d425749efa1fbb4381796d864dde7630a629ae8725417f9c26d6d8fec3caab"} Feb 16 22:58:31 crc kubenswrapper[4865]: E0216 22:58:31.216727 4865 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf619952d_3fa5_48e1_a477_f4cbfb893bc1.slice/crio-conmon-e23372ab4cb3112625f01cbc27f177f385ddf38d7417070cabf432be2342c7a7.scope\": RecentStats: unable to find data in memory cache]" Feb 16 22:58:31 crc kubenswrapper[4865]: I0216 22:58:31.863970 4865 generic.go:334] "Generic (PLEG): container finished" podID="f619952d-3fa5-48e1-a477-f4cbfb893bc1" containerID="e23372ab4cb3112625f01cbc27f177f385ddf38d7417070cabf432be2342c7a7" exitCode=0 Feb 16 22:58:31 crc kubenswrapper[4865]: I0216 22:58:31.864058 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2136zxlv" event={"ID":"f619952d-3fa5-48e1-a477-f4cbfb893bc1","Type":"ContainerDied","Data":"e23372ab4cb3112625f01cbc27f177f385ddf38d7417070cabf432be2342c7a7"} Feb 16 22:58:33 crc kubenswrapper[4865]: I0216 22:58:33.188537 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2136zxlv" Feb 16 22:58:33 crc kubenswrapper[4865]: I0216 22:58:33.234614 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f619952d-3fa5-48e1-a477-f4cbfb893bc1-bundle\") pod \"f619952d-3fa5-48e1-a477-f4cbfb893bc1\" (UID: \"f619952d-3fa5-48e1-a477-f4cbfb893bc1\") " Feb 16 22:58:33 crc kubenswrapper[4865]: I0216 22:58:33.234801 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f619952d-3fa5-48e1-a477-f4cbfb893bc1-util\") pod \"f619952d-3fa5-48e1-a477-f4cbfb893bc1\" (UID: \"f619952d-3fa5-48e1-a477-f4cbfb893bc1\") " Feb 16 22:58:33 crc kubenswrapper[4865]: I0216 22:58:33.234924 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crbs9\" (UniqueName: 
\"kubernetes.io/projected/f619952d-3fa5-48e1-a477-f4cbfb893bc1-kube-api-access-crbs9\") pod \"f619952d-3fa5-48e1-a477-f4cbfb893bc1\" (UID: \"f619952d-3fa5-48e1-a477-f4cbfb893bc1\") " Feb 16 22:58:33 crc kubenswrapper[4865]: I0216 22:58:33.236823 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f619952d-3fa5-48e1-a477-f4cbfb893bc1-bundle" (OuterVolumeSpecName: "bundle") pod "f619952d-3fa5-48e1-a477-f4cbfb893bc1" (UID: "f619952d-3fa5-48e1-a477-f4cbfb893bc1"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:58:33 crc kubenswrapper[4865]: I0216 22:58:33.242756 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f619952d-3fa5-48e1-a477-f4cbfb893bc1-kube-api-access-crbs9" (OuterVolumeSpecName: "kube-api-access-crbs9") pod "f619952d-3fa5-48e1-a477-f4cbfb893bc1" (UID: "f619952d-3fa5-48e1-a477-f4cbfb893bc1"). InnerVolumeSpecName "kube-api-access-crbs9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:58:33 crc kubenswrapper[4865]: I0216 22:58:33.274532 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f619952d-3fa5-48e1-a477-f4cbfb893bc1-util" (OuterVolumeSpecName: "util") pod "f619952d-3fa5-48e1-a477-f4cbfb893bc1" (UID: "f619952d-3fa5-48e1-a477-f4cbfb893bc1"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:58:33 crc kubenswrapper[4865]: I0216 22:58:33.337795 4865 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f619952d-3fa5-48e1-a477-f4cbfb893bc1-util\") on node \"crc\" DevicePath \"\"" Feb 16 22:58:33 crc kubenswrapper[4865]: I0216 22:58:33.337877 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crbs9\" (UniqueName: \"kubernetes.io/projected/f619952d-3fa5-48e1-a477-f4cbfb893bc1-kube-api-access-crbs9\") on node \"crc\" DevicePath \"\"" Feb 16 22:58:33 crc kubenswrapper[4865]: I0216 22:58:33.337903 4865 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f619952d-3fa5-48e1-a477-f4cbfb893bc1-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 22:58:33 crc kubenswrapper[4865]: I0216 22:58:33.885119 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2136zxlv" event={"ID":"f619952d-3fa5-48e1-a477-f4cbfb893bc1","Type":"ContainerDied","Data":"58b59c284bcc9a4e9f81874ab1887301a30235d4010792e75b3813b611550391"} Feb 16 22:58:33 crc kubenswrapper[4865]: I0216 22:58:33.885182 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58b59c284bcc9a4e9f81874ab1887301a30235d4010792e75b3813b611550391" Feb 16 22:58:33 crc kubenswrapper[4865]: I0216 22:58:33.885226 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2136zxlv" Feb 16 22:58:34 crc kubenswrapper[4865]: I0216 22:58:34.635072 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6j945"] Feb 16 22:58:34 crc kubenswrapper[4865]: E0216 22:58:34.635562 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f619952d-3fa5-48e1-a477-f4cbfb893bc1" containerName="extract" Feb 16 22:58:34 crc kubenswrapper[4865]: I0216 22:58:34.635586 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="f619952d-3fa5-48e1-a477-f4cbfb893bc1" containerName="extract" Feb 16 22:58:34 crc kubenswrapper[4865]: E0216 22:58:34.635608 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f619952d-3fa5-48e1-a477-f4cbfb893bc1" containerName="util" Feb 16 22:58:34 crc kubenswrapper[4865]: I0216 22:58:34.635620 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="f619952d-3fa5-48e1-a477-f4cbfb893bc1" containerName="util" Feb 16 22:58:34 crc kubenswrapper[4865]: E0216 22:58:34.635637 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f619952d-3fa5-48e1-a477-f4cbfb893bc1" containerName="pull" Feb 16 22:58:34 crc kubenswrapper[4865]: I0216 22:58:34.635650 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="f619952d-3fa5-48e1-a477-f4cbfb893bc1" containerName="pull" Feb 16 22:58:34 crc kubenswrapper[4865]: I0216 22:58:34.635884 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="f619952d-3fa5-48e1-a477-f4cbfb893bc1" containerName="extract" Feb 16 22:58:34 crc kubenswrapper[4865]: I0216 22:58:34.637512 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6j945" Feb 16 22:58:34 crc kubenswrapper[4865]: I0216 22:58:34.655671 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6j945"] Feb 16 22:58:34 crc kubenswrapper[4865]: I0216 22:58:34.658455 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89694e08-dabc-4ca9-8938-9a160f6754cf-utilities\") pod \"redhat-operators-6j945\" (UID: \"89694e08-dabc-4ca9-8938-9a160f6754cf\") " pod="openshift-marketplace/redhat-operators-6j945" Feb 16 22:58:34 crc kubenswrapper[4865]: I0216 22:58:34.658545 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpswl\" (UniqueName: \"kubernetes.io/projected/89694e08-dabc-4ca9-8938-9a160f6754cf-kube-api-access-bpswl\") pod \"redhat-operators-6j945\" (UID: \"89694e08-dabc-4ca9-8938-9a160f6754cf\") " pod="openshift-marketplace/redhat-operators-6j945" Feb 16 22:58:34 crc kubenswrapper[4865]: I0216 22:58:34.658645 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89694e08-dabc-4ca9-8938-9a160f6754cf-catalog-content\") pod \"redhat-operators-6j945\" (UID: \"89694e08-dabc-4ca9-8938-9a160f6754cf\") " pod="openshift-marketplace/redhat-operators-6j945" Feb 16 22:58:34 crc kubenswrapper[4865]: I0216 22:58:34.759873 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89694e08-dabc-4ca9-8938-9a160f6754cf-utilities\") pod \"redhat-operators-6j945\" (UID: \"89694e08-dabc-4ca9-8938-9a160f6754cf\") " pod="openshift-marketplace/redhat-operators-6j945" Feb 16 22:58:34 crc kubenswrapper[4865]: I0216 22:58:34.759967 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-bpswl\" (UniqueName: \"kubernetes.io/projected/89694e08-dabc-4ca9-8938-9a160f6754cf-kube-api-access-bpswl\") pod \"redhat-operators-6j945\" (UID: \"89694e08-dabc-4ca9-8938-9a160f6754cf\") " pod="openshift-marketplace/redhat-operators-6j945" Feb 16 22:58:34 crc kubenswrapper[4865]: I0216 22:58:34.760060 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89694e08-dabc-4ca9-8938-9a160f6754cf-catalog-content\") pod \"redhat-operators-6j945\" (UID: \"89694e08-dabc-4ca9-8938-9a160f6754cf\") " pod="openshift-marketplace/redhat-operators-6j945" Feb 16 22:58:34 crc kubenswrapper[4865]: I0216 22:58:34.761020 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89694e08-dabc-4ca9-8938-9a160f6754cf-utilities\") pod \"redhat-operators-6j945\" (UID: \"89694e08-dabc-4ca9-8938-9a160f6754cf\") " pod="openshift-marketplace/redhat-operators-6j945" Feb 16 22:58:34 crc kubenswrapper[4865]: I0216 22:58:34.761140 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89694e08-dabc-4ca9-8938-9a160f6754cf-catalog-content\") pod \"redhat-operators-6j945\" (UID: \"89694e08-dabc-4ca9-8938-9a160f6754cf\") " pod="openshift-marketplace/redhat-operators-6j945" Feb 16 22:58:34 crc kubenswrapper[4865]: I0216 22:58:34.793573 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpswl\" (UniqueName: \"kubernetes.io/projected/89694e08-dabc-4ca9-8938-9a160f6754cf-kube-api-access-bpswl\") pod \"redhat-operators-6j945\" (UID: \"89694e08-dabc-4ca9-8938-9a160f6754cf\") " pod="openshift-marketplace/redhat-operators-6j945" Feb 16 22:58:34 crc kubenswrapper[4865]: I0216 22:58:34.977121 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6j945" Feb 16 22:58:35 crc kubenswrapper[4865]: I0216 22:58:35.427704 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6j945"] Feb 16 22:58:35 crc kubenswrapper[4865]: I0216 22:58:35.899654 4865 generic.go:334] "Generic (PLEG): container finished" podID="89694e08-dabc-4ca9-8938-9a160f6754cf" containerID="2935a409e9caacee792db3089d91c1ee82d2c4d654755a1cc0c1929ab4d62611" exitCode=0 Feb 16 22:58:35 crc kubenswrapper[4865]: I0216 22:58:35.899765 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6j945" event={"ID":"89694e08-dabc-4ca9-8938-9a160f6754cf","Type":"ContainerDied","Data":"2935a409e9caacee792db3089d91c1ee82d2c4d654755a1cc0c1929ab4d62611"} Feb 16 22:58:35 crc kubenswrapper[4865]: I0216 22:58:35.901107 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6j945" event={"ID":"89694e08-dabc-4ca9-8938-9a160f6754cf","Type":"ContainerStarted","Data":"ed3ef2c22c70aefc45aab08c4c401c2d74dead701abce8cb089cc34244901dc6"} Feb 16 22:58:36 crc kubenswrapper[4865]: I0216 22:58:36.909601 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6j945" event={"ID":"89694e08-dabc-4ca9-8938-9a160f6754cf","Type":"ContainerStarted","Data":"a5984ad73b6efaa6bdc55a79901600e7522addd5582992bc68716d55a15f099c"} Feb 16 22:58:37 crc kubenswrapper[4865]: I0216 22:58:37.650364 4865 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 16 22:58:37 crc kubenswrapper[4865]: I0216 22:58:37.919480 4865 generic.go:334] "Generic (PLEG): container finished" podID="89694e08-dabc-4ca9-8938-9a160f6754cf" containerID="a5984ad73b6efaa6bdc55a79901600e7522addd5582992bc68716d55a15f099c" exitCode=0 Feb 16 22:58:37 crc kubenswrapper[4865]: I0216 22:58:37.919543 4865 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6j945" event={"ID":"89694e08-dabc-4ca9-8938-9a160f6754cf","Type":"ContainerDied","Data":"a5984ad73b6efaa6bdc55a79901600e7522addd5582992bc68716d55a15f099c"} Feb 16 22:58:38 crc kubenswrapper[4865]: I0216 22:58:38.928445 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6j945" event={"ID":"89694e08-dabc-4ca9-8938-9a160f6754cf","Type":"ContainerStarted","Data":"5dc7854a287e907601aacadcfecb4f26d5b3e5fe30f020cc7e58b4b6d3c9e448"} Feb 16 22:58:38 crc kubenswrapper[4865]: I0216 22:58:38.952029 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6j945" podStartSLOduration=2.568869842 podStartE2EDuration="4.952001237s" podCreationTimestamp="2026-02-16 22:58:34 +0000 UTC" firstStartedPulling="2026-02-16 22:58:35.901610197 +0000 UTC m=+756.225317158" lastFinishedPulling="2026-02-16 22:58:38.284741592 +0000 UTC m=+758.608448553" observedRunningTime="2026-02-16 22:58:38.947937169 +0000 UTC m=+759.271644130" watchObservedRunningTime="2026-02-16 22:58:38.952001237 +0000 UTC m=+759.275708208" Feb 16 22:58:42 crc kubenswrapper[4865]: I0216 22:58:42.270932 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-7b7848d955-px2kr"] Feb 16 22:58:42 crc kubenswrapper[4865]: I0216 22:58:42.272820 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7b7848d955-px2kr" Feb 16 22:58:42 crc kubenswrapper[4865]: I0216 22:58:42.277590 4865 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 16 22:58:42 crc kubenswrapper[4865]: I0216 22:58:42.277890 4865 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 16 22:58:42 crc kubenswrapper[4865]: I0216 22:58:42.278071 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 16 22:58:42 crc kubenswrapper[4865]: I0216 22:58:42.279985 4865 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-phctz" Feb 16 22:58:42 crc kubenswrapper[4865]: I0216 22:58:42.280651 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 16 22:58:42 crc kubenswrapper[4865]: I0216 22:58:42.285930 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dc42l\" (UniqueName: \"kubernetes.io/projected/10795d8f-8c08-4f6d-bc5d-4446befaa125-kube-api-access-dc42l\") pod \"metallb-operator-controller-manager-7b7848d955-px2kr\" (UID: \"10795d8f-8c08-4f6d-bc5d-4446befaa125\") " pod="metallb-system/metallb-operator-controller-manager-7b7848d955-px2kr" Feb 16 22:58:42 crc kubenswrapper[4865]: I0216 22:58:42.285980 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/10795d8f-8c08-4f6d-bc5d-4446befaa125-apiservice-cert\") pod \"metallb-operator-controller-manager-7b7848d955-px2kr\" (UID: \"10795d8f-8c08-4f6d-bc5d-4446befaa125\") " pod="metallb-system/metallb-operator-controller-manager-7b7848d955-px2kr" Feb 16 22:58:42 crc kubenswrapper[4865]: I0216 
22:58:42.286021 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/10795d8f-8c08-4f6d-bc5d-4446befaa125-webhook-cert\") pod \"metallb-operator-controller-manager-7b7848d955-px2kr\" (UID: \"10795d8f-8c08-4f6d-bc5d-4446befaa125\") " pod="metallb-system/metallb-operator-controller-manager-7b7848d955-px2kr" Feb 16 22:58:42 crc kubenswrapper[4865]: I0216 22:58:42.363840 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7b7848d955-px2kr"] Feb 16 22:58:42 crc kubenswrapper[4865]: I0216 22:58:42.387726 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/10795d8f-8c08-4f6d-bc5d-4446befaa125-webhook-cert\") pod \"metallb-operator-controller-manager-7b7848d955-px2kr\" (UID: \"10795d8f-8c08-4f6d-bc5d-4446befaa125\") " pod="metallb-system/metallb-operator-controller-manager-7b7848d955-px2kr" Feb 16 22:58:42 crc kubenswrapper[4865]: I0216 22:58:42.387862 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dc42l\" (UniqueName: \"kubernetes.io/projected/10795d8f-8c08-4f6d-bc5d-4446befaa125-kube-api-access-dc42l\") pod \"metallb-operator-controller-manager-7b7848d955-px2kr\" (UID: \"10795d8f-8c08-4f6d-bc5d-4446befaa125\") " pod="metallb-system/metallb-operator-controller-manager-7b7848d955-px2kr" Feb 16 22:58:42 crc kubenswrapper[4865]: I0216 22:58:42.387897 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/10795d8f-8c08-4f6d-bc5d-4446befaa125-apiservice-cert\") pod \"metallb-operator-controller-manager-7b7848d955-px2kr\" (UID: \"10795d8f-8c08-4f6d-bc5d-4446befaa125\") " pod="metallb-system/metallb-operator-controller-manager-7b7848d955-px2kr" Feb 16 22:58:42 crc kubenswrapper[4865]: I0216 22:58:42.397234 4865 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/10795d8f-8c08-4f6d-bc5d-4446befaa125-webhook-cert\") pod \"metallb-operator-controller-manager-7b7848d955-px2kr\" (UID: \"10795d8f-8c08-4f6d-bc5d-4446befaa125\") " pod="metallb-system/metallb-operator-controller-manager-7b7848d955-px2kr" Feb 16 22:58:42 crc kubenswrapper[4865]: I0216 22:58:42.408571 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/10795d8f-8c08-4f6d-bc5d-4446befaa125-apiservice-cert\") pod \"metallb-operator-controller-manager-7b7848d955-px2kr\" (UID: \"10795d8f-8c08-4f6d-bc5d-4446befaa125\") " pod="metallb-system/metallb-operator-controller-manager-7b7848d955-px2kr" Feb 16 22:58:42 crc kubenswrapper[4865]: I0216 22:58:42.429018 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dc42l\" (UniqueName: \"kubernetes.io/projected/10795d8f-8c08-4f6d-bc5d-4446befaa125-kube-api-access-dc42l\") pod \"metallb-operator-controller-manager-7b7848d955-px2kr\" (UID: \"10795d8f-8c08-4f6d-bc5d-4446befaa125\") " pod="metallb-system/metallb-operator-controller-manager-7b7848d955-px2kr" Feb 16 22:58:42 crc kubenswrapper[4865]: I0216 22:58:42.532226 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-65947454b9-fm4f8"] Feb 16 22:58:42 crc kubenswrapper[4865]: I0216 22:58:42.533402 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-65947454b9-fm4f8" Feb 16 22:58:42 crc kubenswrapper[4865]: I0216 22:58:42.536123 4865 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-mv8b7" Feb 16 22:58:42 crc kubenswrapper[4865]: I0216 22:58:42.536388 4865 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 16 22:58:42 crc kubenswrapper[4865]: I0216 22:58:42.536657 4865 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 16 22:58:42 crc kubenswrapper[4865]: I0216 22:58:42.545797 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-65947454b9-fm4f8"] Feb 16 22:58:42 crc kubenswrapper[4865]: I0216 22:58:42.591028 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ca554199-8669-41a4-aac9-abe2657e896f-webhook-cert\") pod \"metallb-operator-webhook-server-65947454b9-fm4f8\" (UID: \"ca554199-8669-41a4-aac9-abe2657e896f\") " pod="metallb-system/metallb-operator-webhook-server-65947454b9-fm4f8" Feb 16 22:58:42 crc kubenswrapper[4865]: I0216 22:58:42.591090 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ca554199-8669-41a4-aac9-abe2657e896f-apiservice-cert\") pod \"metallb-operator-webhook-server-65947454b9-fm4f8\" (UID: \"ca554199-8669-41a4-aac9-abe2657e896f\") " pod="metallb-system/metallb-operator-webhook-server-65947454b9-fm4f8" Feb 16 22:58:42 crc kubenswrapper[4865]: I0216 22:58:42.591121 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zp5cm\" (UniqueName: 
\"kubernetes.io/projected/ca554199-8669-41a4-aac9-abe2657e896f-kube-api-access-zp5cm\") pod \"metallb-operator-webhook-server-65947454b9-fm4f8\" (UID: \"ca554199-8669-41a4-aac9-abe2657e896f\") " pod="metallb-system/metallb-operator-webhook-server-65947454b9-fm4f8" Feb 16 22:58:42 crc kubenswrapper[4865]: I0216 22:58:42.597631 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7b7848d955-px2kr" Feb 16 22:58:42 crc kubenswrapper[4865]: I0216 22:58:42.692314 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ca554199-8669-41a4-aac9-abe2657e896f-apiservice-cert\") pod \"metallb-operator-webhook-server-65947454b9-fm4f8\" (UID: \"ca554199-8669-41a4-aac9-abe2657e896f\") " pod="metallb-system/metallb-operator-webhook-server-65947454b9-fm4f8" Feb 16 22:58:42 crc kubenswrapper[4865]: I0216 22:58:42.692775 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zp5cm\" (UniqueName: \"kubernetes.io/projected/ca554199-8669-41a4-aac9-abe2657e896f-kube-api-access-zp5cm\") pod \"metallb-operator-webhook-server-65947454b9-fm4f8\" (UID: \"ca554199-8669-41a4-aac9-abe2657e896f\") " pod="metallb-system/metallb-operator-webhook-server-65947454b9-fm4f8" Feb 16 22:58:42 crc kubenswrapper[4865]: I0216 22:58:42.692878 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ca554199-8669-41a4-aac9-abe2657e896f-webhook-cert\") pod \"metallb-operator-webhook-server-65947454b9-fm4f8\" (UID: \"ca554199-8669-41a4-aac9-abe2657e896f\") " pod="metallb-system/metallb-operator-webhook-server-65947454b9-fm4f8" Feb 16 22:58:42 crc kubenswrapper[4865]: I0216 22:58:42.697667 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/ca554199-8669-41a4-aac9-abe2657e896f-apiservice-cert\") pod \"metallb-operator-webhook-server-65947454b9-fm4f8\" (UID: \"ca554199-8669-41a4-aac9-abe2657e896f\") " pod="metallb-system/metallb-operator-webhook-server-65947454b9-fm4f8" Feb 16 22:58:42 crc kubenswrapper[4865]: I0216 22:58:42.698072 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ca554199-8669-41a4-aac9-abe2657e896f-webhook-cert\") pod \"metallb-operator-webhook-server-65947454b9-fm4f8\" (UID: \"ca554199-8669-41a4-aac9-abe2657e896f\") " pod="metallb-system/metallb-operator-webhook-server-65947454b9-fm4f8" Feb 16 22:58:42 crc kubenswrapper[4865]: I0216 22:58:42.709476 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zp5cm\" (UniqueName: \"kubernetes.io/projected/ca554199-8669-41a4-aac9-abe2657e896f-kube-api-access-zp5cm\") pod \"metallb-operator-webhook-server-65947454b9-fm4f8\" (UID: \"ca554199-8669-41a4-aac9-abe2657e896f\") " pod="metallb-system/metallb-operator-webhook-server-65947454b9-fm4f8" Feb 16 22:58:42 crc kubenswrapper[4865]: I0216 22:58:42.849850 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-65947454b9-fm4f8" Feb 16 22:58:42 crc kubenswrapper[4865]: I0216 22:58:42.885687 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7b7848d955-px2kr"] Feb 16 22:58:42 crc kubenswrapper[4865]: I0216 22:58:42.956751 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7b7848d955-px2kr" event={"ID":"10795d8f-8c08-4f6d-bc5d-4446befaa125","Type":"ContainerStarted","Data":"9037ef95654bb5ecbf97a874c0e0985aa52a0f00d7f866d24cbdb2778f4f2fa7"} Feb 16 22:58:43 crc kubenswrapper[4865]: I0216 22:58:43.120984 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-65947454b9-fm4f8"] Feb 16 22:58:43 crc kubenswrapper[4865]: W0216 22:58:43.123529 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca554199_8669_41a4_aac9_abe2657e896f.slice/crio-abae8f74a7ce37d0874a6c93af0d65df708bb50f60aab2eb843be7ca8c165d4d WatchSource:0}: Error finding container abae8f74a7ce37d0874a6c93af0d65df708bb50f60aab2eb843be7ca8c165d4d: Status 404 returned error can't find the container with id abae8f74a7ce37d0874a6c93af0d65df708bb50f60aab2eb843be7ca8c165d4d Feb 16 22:58:43 crc kubenswrapper[4865]: I0216 22:58:43.989406 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-65947454b9-fm4f8" event={"ID":"ca554199-8669-41a4-aac9-abe2657e896f","Type":"ContainerStarted","Data":"abae8f74a7ce37d0874a6c93af0d65df708bb50f60aab2eb843be7ca8c165d4d"} Feb 16 22:58:44 crc kubenswrapper[4865]: I0216 22:58:44.980116 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6j945" Feb 16 22:58:44 crc kubenswrapper[4865]: I0216 22:58:44.980546 4865 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6j945" Feb 16 22:58:45 crc kubenswrapper[4865]: I0216 22:58:45.665634 4865 patch_prober.go:28] interesting pod/machine-config-daemon-7sl6f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 22:58:45 crc kubenswrapper[4865]: I0216 22:58:45.665728 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 22:58:45 crc kubenswrapper[4865]: I0216 22:58:45.665797 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" Feb 16 22:58:45 crc kubenswrapper[4865]: I0216 22:58:45.666632 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d2cb7613b13b28970e25e6a68bc3fc59b2c15f74fd56553d326fa4f7962e6c46"} pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 22:58:45 crc kubenswrapper[4865]: I0216 22:58:45.666690 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" containerName="machine-config-daemon" containerID="cri-o://d2cb7613b13b28970e25e6a68bc3fc59b2c15f74fd56553d326fa4f7962e6c46" gracePeriod=600 Feb 16 22:58:46 crc kubenswrapper[4865]: I0216 22:58:46.020368 4865 generic.go:334] "Generic (PLEG): container finished" 
podID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" containerID="d2cb7613b13b28970e25e6a68bc3fc59b2c15f74fd56553d326fa4f7962e6c46" exitCode=0 Feb 16 22:58:46 crc kubenswrapper[4865]: I0216 22:58:46.020421 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" event={"ID":"af5ee041-5763-4a28-9d12-7ba21bbb9dbc","Type":"ContainerDied","Data":"d2cb7613b13b28970e25e6a68bc3fc59b2c15f74fd56553d326fa4f7962e6c46"} Feb 16 22:58:46 crc kubenswrapper[4865]: I0216 22:58:46.020474 4865 scope.go:117] "RemoveContainer" containerID="00c16da253b9e54ad36c0a4b2e600517a82f42ec45115535033160e46448d032" Feb 16 22:58:46 crc kubenswrapper[4865]: I0216 22:58:46.041084 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6j945" podUID="89694e08-dabc-4ca9-8938-9a160f6754cf" containerName="registry-server" probeResult="failure" output=< Feb 16 22:58:46 crc kubenswrapper[4865]: timeout: failed to connect service ":50051" within 1s Feb 16 22:58:46 crc kubenswrapper[4865]: > Feb 16 22:58:50 crc kubenswrapper[4865]: I0216 22:58:50.058750 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-65947454b9-fm4f8" event={"ID":"ca554199-8669-41a4-aac9-abe2657e896f","Type":"ContainerStarted","Data":"f10f64fc3ddf2aedf9fd29644e2b71b1ba99b9f953e4c6d47cd2ff5c1df7074f"} Feb 16 22:58:50 crc kubenswrapper[4865]: I0216 22:58:50.059782 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-65947454b9-fm4f8" Feb 16 22:58:50 crc kubenswrapper[4865]: I0216 22:58:50.064390 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7b7848d955-px2kr" event={"ID":"10795d8f-8c08-4f6d-bc5d-4446befaa125","Type":"ContainerStarted","Data":"0585663414efdd37d42a2e7b9f03a4af3deb4ab477b81f82cbe33b30bfabeae5"} Feb 16 22:58:50 crc kubenswrapper[4865]: I0216 
22:58:50.064555 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-7b7848d955-px2kr" Feb 16 22:58:50 crc kubenswrapper[4865]: I0216 22:58:50.067639 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" event={"ID":"af5ee041-5763-4a28-9d12-7ba21bbb9dbc","Type":"ContainerStarted","Data":"01785d10a7bb373f66f6092d65fa6901ba6fc8e22f69baf647bf50d5be8dbeb3"} Feb 16 22:58:50 crc kubenswrapper[4865]: I0216 22:58:50.092397 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-65947454b9-fm4f8" podStartSLOduration=2.348168616 podStartE2EDuration="8.092370135s" podCreationTimestamp="2026-02-16 22:58:42 +0000 UTC" firstStartedPulling="2026-02-16 22:58:43.128394607 +0000 UTC m=+763.452101568" lastFinishedPulling="2026-02-16 22:58:48.872596126 +0000 UTC m=+769.196303087" observedRunningTime="2026-02-16 22:58:50.088138442 +0000 UTC m=+770.411845443" watchObservedRunningTime="2026-02-16 22:58:50.092370135 +0000 UTC m=+770.416077096" Feb 16 22:58:50 crc kubenswrapper[4865]: I0216 22:58:50.142106 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-7b7848d955-px2kr" podStartSLOduration=2.182197204 podStartE2EDuration="8.142066162s" podCreationTimestamp="2026-02-16 22:58:42 +0000 UTC" firstStartedPulling="2026-02-16 22:58:42.905897149 +0000 UTC m=+763.229604110" lastFinishedPulling="2026-02-16 22:58:48.865766107 +0000 UTC m=+769.189473068" observedRunningTime="2026-02-16 22:58:50.138142927 +0000 UTC m=+770.461849938" watchObservedRunningTime="2026-02-16 22:58:50.142066162 +0000 UTC m=+770.465773163" Feb 16 22:58:55 crc kubenswrapper[4865]: I0216 22:58:55.095993 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6j945" Feb 16 22:58:55 
crc kubenswrapper[4865]: I0216 22:58:55.198485 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6j945" Feb 16 22:58:57 crc kubenswrapper[4865]: I0216 22:58:57.809407 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6j945"] Feb 16 22:58:57 crc kubenswrapper[4865]: I0216 22:58:57.810058 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6j945" podUID="89694e08-dabc-4ca9-8938-9a160f6754cf" containerName="registry-server" containerID="cri-o://5dc7854a287e907601aacadcfecb4f26d5b3e5fe30f020cc7e58b4b6d3c9e448" gracePeriod=2 Feb 16 22:58:58 crc kubenswrapper[4865]: I0216 22:58:58.135443 4865 generic.go:334] "Generic (PLEG): container finished" podID="89694e08-dabc-4ca9-8938-9a160f6754cf" containerID="5dc7854a287e907601aacadcfecb4f26d5b3e5fe30f020cc7e58b4b6d3c9e448" exitCode=0 Feb 16 22:58:58 crc kubenswrapper[4865]: I0216 22:58:58.135481 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6j945" event={"ID":"89694e08-dabc-4ca9-8938-9a160f6754cf","Type":"ContainerDied","Data":"5dc7854a287e907601aacadcfecb4f26d5b3e5fe30f020cc7e58b4b6d3c9e448"} Feb 16 22:58:58 crc kubenswrapper[4865]: I0216 22:58:58.203088 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6j945" Feb 16 22:58:58 crc kubenswrapper[4865]: I0216 22:58:58.354553 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89694e08-dabc-4ca9-8938-9a160f6754cf-catalog-content\") pod \"89694e08-dabc-4ca9-8938-9a160f6754cf\" (UID: \"89694e08-dabc-4ca9-8938-9a160f6754cf\") " Feb 16 22:58:58 crc kubenswrapper[4865]: I0216 22:58:58.354635 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89694e08-dabc-4ca9-8938-9a160f6754cf-utilities\") pod \"89694e08-dabc-4ca9-8938-9a160f6754cf\" (UID: \"89694e08-dabc-4ca9-8938-9a160f6754cf\") " Feb 16 22:58:58 crc kubenswrapper[4865]: I0216 22:58:58.354844 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpswl\" (UniqueName: \"kubernetes.io/projected/89694e08-dabc-4ca9-8938-9a160f6754cf-kube-api-access-bpswl\") pod \"89694e08-dabc-4ca9-8938-9a160f6754cf\" (UID: \"89694e08-dabc-4ca9-8938-9a160f6754cf\") " Feb 16 22:58:58 crc kubenswrapper[4865]: I0216 22:58:58.355555 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89694e08-dabc-4ca9-8938-9a160f6754cf-utilities" (OuterVolumeSpecName: "utilities") pod "89694e08-dabc-4ca9-8938-9a160f6754cf" (UID: "89694e08-dabc-4ca9-8938-9a160f6754cf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:58:58 crc kubenswrapper[4865]: I0216 22:58:58.363766 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89694e08-dabc-4ca9-8938-9a160f6754cf-kube-api-access-bpswl" (OuterVolumeSpecName: "kube-api-access-bpswl") pod "89694e08-dabc-4ca9-8938-9a160f6754cf" (UID: "89694e08-dabc-4ca9-8938-9a160f6754cf"). InnerVolumeSpecName "kube-api-access-bpswl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:58:58 crc kubenswrapper[4865]: I0216 22:58:58.467076 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bpswl\" (UniqueName: \"kubernetes.io/projected/89694e08-dabc-4ca9-8938-9a160f6754cf-kube-api-access-bpswl\") on node \"crc\" DevicePath \"\"" Feb 16 22:58:58 crc kubenswrapper[4865]: I0216 22:58:58.467127 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89694e08-dabc-4ca9-8938-9a160f6754cf-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 22:58:58 crc kubenswrapper[4865]: I0216 22:58:58.495484 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89694e08-dabc-4ca9-8938-9a160f6754cf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "89694e08-dabc-4ca9-8938-9a160f6754cf" (UID: "89694e08-dabc-4ca9-8938-9a160f6754cf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:58:58 crc kubenswrapper[4865]: I0216 22:58:58.568424 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89694e08-dabc-4ca9-8938-9a160f6754cf-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 22:58:59 crc kubenswrapper[4865]: I0216 22:58:59.145522 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6j945" event={"ID":"89694e08-dabc-4ca9-8938-9a160f6754cf","Type":"ContainerDied","Data":"ed3ef2c22c70aefc45aab08c4c401c2d74dead701abce8cb089cc34244901dc6"} Feb 16 22:58:59 crc kubenswrapper[4865]: I0216 22:58:59.145910 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6j945" Feb 16 22:58:59 crc kubenswrapper[4865]: I0216 22:58:59.145950 4865 scope.go:117] "RemoveContainer" containerID="5dc7854a287e907601aacadcfecb4f26d5b3e5fe30f020cc7e58b4b6d3c9e448" Feb 16 22:58:59 crc kubenswrapper[4865]: I0216 22:58:59.164870 4865 scope.go:117] "RemoveContainer" containerID="a5984ad73b6efaa6bdc55a79901600e7522addd5582992bc68716d55a15f099c" Feb 16 22:58:59 crc kubenswrapper[4865]: I0216 22:58:59.179363 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6j945"] Feb 16 22:58:59 crc kubenswrapper[4865]: I0216 22:58:59.187916 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6j945"] Feb 16 22:58:59 crc kubenswrapper[4865]: I0216 22:58:59.213328 4865 scope.go:117] "RemoveContainer" containerID="2935a409e9caacee792db3089d91c1ee82d2c4d654755a1cc0c1929ab4d62611" Feb 16 22:59:00 crc kubenswrapper[4865]: I0216 22:59:00.425913 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89694e08-dabc-4ca9-8938-9a160f6754cf" path="/var/lib/kubelet/pods/89694e08-dabc-4ca9-8938-9a160f6754cf/volumes" Feb 16 22:59:00 crc kubenswrapper[4865]: I0216 22:59:00.442719 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-snd2w"] Feb 16 22:59:00 crc kubenswrapper[4865]: E0216 22:59:00.443297 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89694e08-dabc-4ca9-8938-9a160f6754cf" containerName="extract-utilities" Feb 16 22:59:00 crc kubenswrapper[4865]: I0216 22:59:00.443364 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="89694e08-dabc-4ca9-8938-9a160f6754cf" containerName="extract-utilities" Feb 16 22:59:00 crc kubenswrapper[4865]: E0216 22:59:00.443429 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89694e08-dabc-4ca9-8938-9a160f6754cf" containerName="extract-content" Feb 16 22:59:00 crc 
kubenswrapper[4865]: I0216 22:59:00.443490 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="89694e08-dabc-4ca9-8938-9a160f6754cf" containerName="extract-content" Feb 16 22:59:00 crc kubenswrapper[4865]: E0216 22:59:00.443555 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89694e08-dabc-4ca9-8938-9a160f6754cf" containerName="registry-server" Feb 16 22:59:00 crc kubenswrapper[4865]: I0216 22:59:00.443604 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="89694e08-dabc-4ca9-8938-9a160f6754cf" containerName="registry-server" Feb 16 22:59:00 crc kubenswrapper[4865]: I0216 22:59:00.443803 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="89694e08-dabc-4ca9-8938-9a160f6754cf" containerName="registry-server" Feb 16 22:59:00 crc kubenswrapper[4865]: I0216 22:59:00.444763 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-snd2w" Feb 16 22:59:00 crc kubenswrapper[4865]: I0216 22:59:00.469757 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-snd2w"] Feb 16 22:59:00 crc kubenswrapper[4865]: I0216 22:59:00.505188 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ca8d5e0-7169-42fb-b6b9-8e8d5d161cbd-utilities\") pod \"community-operators-snd2w\" (UID: \"0ca8d5e0-7169-42fb-b6b9-8e8d5d161cbd\") " pod="openshift-marketplace/community-operators-snd2w" Feb 16 22:59:00 crc kubenswrapper[4865]: I0216 22:59:00.505564 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ca8d5e0-7169-42fb-b6b9-8e8d5d161cbd-catalog-content\") pod \"community-operators-snd2w\" (UID: \"0ca8d5e0-7169-42fb-b6b9-8e8d5d161cbd\") " pod="openshift-marketplace/community-operators-snd2w" Feb 16 22:59:00 crc kubenswrapper[4865]: 
I0216 22:59:00.505678 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lh84k\" (UniqueName: \"kubernetes.io/projected/0ca8d5e0-7169-42fb-b6b9-8e8d5d161cbd-kube-api-access-lh84k\") pod \"community-operators-snd2w\" (UID: \"0ca8d5e0-7169-42fb-b6b9-8e8d5d161cbd\") " pod="openshift-marketplace/community-operators-snd2w" Feb 16 22:59:00 crc kubenswrapper[4865]: I0216 22:59:00.606385 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lh84k\" (UniqueName: \"kubernetes.io/projected/0ca8d5e0-7169-42fb-b6b9-8e8d5d161cbd-kube-api-access-lh84k\") pod \"community-operators-snd2w\" (UID: \"0ca8d5e0-7169-42fb-b6b9-8e8d5d161cbd\") " pod="openshift-marketplace/community-operators-snd2w" Feb 16 22:59:00 crc kubenswrapper[4865]: I0216 22:59:00.606488 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ca8d5e0-7169-42fb-b6b9-8e8d5d161cbd-utilities\") pod \"community-operators-snd2w\" (UID: \"0ca8d5e0-7169-42fb-b6b9-8e8d5d161cbd\") " pod="openshift-marketplace/community-operators-snd2w" Feb 16 22:59:00 crc kubenswrapper[4865]: I0216 22:59:00.606515 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ca8d5e0-7169-42fb-b6b9-8e8d5d161cbd-catalog-content\") pod \"community-operators-snd2w\" (UID: \"0ca8d5e0-7169-42fb-b6b9-8e8d5d161cbd\") " pod="openshift-marketplace/community-operators-snd2w" Feb 16 22:59:00 crc kubenswrapper[4865]: I0216 22:59:00.606979 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ca8d5e0-7169-42fb-b6b9-8e8d5d161cbd-catalog-content\") pod \"community-operators-snd2w\" (UID: \"0ca8d5e0-7169-42fb-b6b9-8e8d5d161cbd\") " pod="openshift-marketplace/community-operators-snd2w" Feb 16 22:59:00 crc 
kubenswrapper[4865]: I0216 22:59:00.607443 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ca8d5e0-7169-42fb-b6b9-8e8d5d161cbd-utilities\") pod \"community-operators-snd2w\" (UID: \"0ca8d5e0-7169-42fb-b6b9-8e8d5d161cbd\") " pod="openshift-marketplace/community-operators-snd2w" Feb 16 22:59:00 crc kubenswrapper[4865]: I0216 22:59:00.639708 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lh84k\" (UniqueName: \"kubernetes.io/projected/0ca8d5e0-7169-42fb-b6b9-8e8d5d161cbd-kube-api-access-lh84k\") pod \"community-operators-snd2w\" (UID: \"0ca8d5e0-7169-42fb-b6b9-8e8d5d161cbd\") " pod="openshift-marketplace/community-operators-snd2w" Feb 16 22:59:00 crc kubenswrapper[4865]: I0216 22:59:00.779251 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-snd2w" Feb 16 22:59:01 crc kubenswrapper[4865]: I0216 22:59:01.332192 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-snd2w"] Feb 16 22:59:02 crc kubenswrapper[4865]: I0216 22:59:02.171530 4865 generic.go:334] "Generic (PLEG): container finished" podID="0ca8d5e0-7169-42fb-b6b9-8e8d5d161cbd" containerID="ba50d20a3c8a10e85b2dd2b63ea6c419c3099bd7bc1cd307a7f20c20436c1e60" exitCode=0 Feb 16 22:59:02 crc kubenswrapper[4865]: I0216 22:59:02.171644 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-snd2w" event={"ID":"0ca8d5e0-7169-42fb-b6b9-8e8d5d161cbd","Type":"ContainerDied","Data":"ba50d20a3c8a10e85b2dd2b63ea6c419c3099bd7bc1cd307a7f20c20436c1e60"} Feb 16 22:59:02 crc kubenswrapper[4865]: I0216 22:59:02.171989 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-snd2w" 
event={"ID":"0ca8d5e0-7169-42fb-b6b9-8e8d5d161cbd","Type":"ContainerStarted","Data":"d2258621aa77f1f3b0b7b57589996c3ee33ce39b0336b140950ac92396c2939a"} Feb 16 22:59:02 crc kubenswrapper[4865]: I0216 22:59:02.859878 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-65947454b9-fm4f8" Feb 16 22:59:03 crc kubenswrapper[4865]: I0216 22:59:03.184362 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-snd2w" event={"ID":"0ca8d5e0-7169-42fb-b6b9-8e8d5d161cbd","Type":"ContainerStarted","Data":"5858b5f3bb478d1ee9458065f37b104794242213d5aca83ce70c58a1b55b5796"} Feb 16 22:59:04 crc kubenswrapper[4865]: I0216 22:59:04.196651 4865 generic.go:334] "Generic (PLEG): container finished" podID="0ca8d5e0-7169-42fb-b6b9-8e8d5d161cbd" containerID="5858b5f3bb478d1ee9458065f37b104794242213d5aca83ce70c58a1b55b5796" exitCode=0 Feb 16 22:59:04 crc kubenswrapper[4865]: I0216 22:59:04.196714 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-snd2w" event={"ID":"0ca8d5e0-7169-42fb-b6b9-8e8d5d161cbd","Type":"ContainerDied","Data":"5858b5f3bb478d1ee9458065f37b104794242213d5aca83ce70c58a1b55b5796"} Feb 16 22:59:05 crc kubenswrapper[4865]: I0216 22:59:05.207543 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-snd2w" event={"ID":"0ca8d5e0-7169-42fb-b6b9-8e8d5d161cbd","Type":"ContainerStarted","Data":"b133fa35efb6a431c096abb1f6c019d99dc18904b5486567757d2b2894c37fea"} Feb 16 22:59:05 crc kubenswrapper[4865]: I0216 22:59:05.237928 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-snd2w" podStartSLOduration=2.804399375 podStartE2EDuration="5.237896964s" podCreationTimestamp="2026-02-16 22:59:00 +0000 UTC" firstStartedPulling="2026-02-16 22:59:02.175569497 +0000 UTC m=+782.499276458" 
lastFinishedPulling="2026-02-16 22:59:04.609067086 +0000 UTC m=+784.932774047" observedRunningTime="2026-02-16 22:59:05.235123245 +0000 UTC m=+785.558830216" watchObservedRunningTime="2026-02-16 22:59:05.237896964 +0000 UTC m=+785.561603925" Feb 16 22:59:10 crc kubenswrapper[4865]: I0216 22:59:10.779940 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-snd2w" Feb 16 22:59:10 crc kubenswrapper[4865]: I0216 22:59:10.781605 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-snd2w" Feb 16 22:59:10 crc kubenswrapper[4865]: I0216 22:59:10.856536 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-snd2w" Feb 16 22:59:11 crc kubenswrapper[4865]: I0216 22:59:11.311578 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-snd2w" Feb 16 22:59:13 crc kubenswrapper[4865]: I0216 22:59:13.204880 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-snd2w"] Feb 16 22:59:13 crc kubenswrapper[4865]: I0216 22:59:13.271380 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-snd2w" podUID="0ca8d5e0-7169-42fb-b6b9-8e8d5d161cbd" containerName="registry-server" containerID="cri-o://b133fa35efb6a431c096abb1f6c019d99dc18904b5486567757d2b2894c37fea" gracePeriod=2 Feb 16 22:59:13 crc kubenswrapper[4865]: I0216 22:59:13.716134 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-snd2w" Feb 16 22:59:13 crc kubenswrapper[4865]: I0216 22:59:13.868915 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ca8d5e0-7169-42fb-b6b9-8e8d5d161cbd-catalog-content\") pod \"0ca8d5e0-7169-42fb-b6b9-8e8d5d161cbd\" (UID: \"0ca8d5e0-7169-42fb-b6b9-8e8d5d161cbd\") " Feb 16 22:59:13 crc kubenswrapper[4865]: I0216 22:59:13.868995 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ca8d5e0-7169-42fb-b6b9-8e8d5d161cbd-utilities\") pod \"0ca8d5e0-7169-42fb-b6b9-8e8d5d161cbd\" (UID: \"0ca8d5e0-7169-42fb-b6b9-8e8d5d161cbd\") " Feb 16 22:59:13 crc kubenswrapper[4865]: I0216 22:59:13.869068 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lh84k\" (UniqueName: \"kubernetes.io/projected/0ca8d5e0-7169-42fb-b6b9-8e8d5d161cbd-kube-api-access-lh84k\") pod \"0ca8d5e0-7169-42fb-b6b9-8e8d5d161cbd\" (UID: \"0ca8d5e0-7169-42fb-b6b9-8e8d5d161cbd\") " Feb 16 22:59:13 crc kubenswrapper[4865]: I0216 22:59:13.870089 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ca8d5e0-7169-42fb-b6b9-8e8d5d161cbd-utilities" (OuterVolumeSpecName: "utilities") pod "0ca8d5e0-7169-42fb-b6b9-8e8d5d161cbd" (UID: "0ca8d5e0-7169-42fb-b6b9-8e8d5d161cbd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:59:13 crc kubenswrapper[4865]: I0216 22:59:13.875214 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ca8d5e0-7169-42fb-b6b9-8e8d5d161cbd-kube-api-access-lh84k" (OuterVolumeSpecName: "kube-api-access-lh84k") pod "0ca8d5e0-7169-42fb-b6b9-8e8d5d161cbd" (UID: "0ca8d5e0-7169-42fb-b6b9-8e8d5d161cbd"). InnerVolumeSpecName "kube-api-access-lh84k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:59:13 crc kubenswrapper[4865]: I0216 22:59:13.922228 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ca8d5e0-7169-42fb-b6b9-8e8d5d161cbd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0ca8d5e0-7169-42fb-b6b9-8e8d5d161cbd" (UID: "0ca8d5e0-7169-42fb-b6b9-8e8d5d161cbd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:59:13 crc kubenswrapper[4865]: I0216 22:59:13.970991 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ca8d5e0-7169-42fb-b6b9-8e8d5d161cbd-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 22:59:13 crc kubenswrapper[4865]: I0216 22:59:13.971032 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ca8d5e0-7169-42fb-b6b9-8e8d5d161cbd-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 22:59:13 crc kubenswrapper[4865]: I0216 22:59:13.971044 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lh84k\" (UniqueName: \"kubernetes.io/projected/0ca8d5e0-7169-42fb-b6b9-8e8d5d161cbd-kube-api-access-lh84k\") on node \"crc\" DevicePath \"\"" Feb 16 22:59:14 crc kubenswrapper[4865]: I0216 22:59:14.280296 4865 generic.go:334] "Generic (PLEG): container finished" podID="0ca8d5e0-7169-42fb-b6b9-8e8d5d161cbd" containerID="b133fa35efb6a431c096abb1f6c019d99dc18904b5486567757d2b2894c37fea" exitCode=0 Feb 16 22:59:14 crc kubenswrapper[4865]: I0216 22:59:14.280346 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-snd2w" Feb 16 22:59:14 crc kubenswrapper[4865]: I0216 22:59:14.280368 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-snd2w" event={"ID":"0ca8d5e0-7169-42fb-b6b9-8e8d5d161cbd","Type":"ContainerDied","Data":"b133fa35efb6a431c096abb1f6c019d99dc18904b5486567757d2b2894c37fea"} Feb 16 22:59:14 crc kubenswrapper[4865]: I0216 22:59:14.280476 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-snd2w" event={"ID":"0ca8d5e0-7169-42fb-b6b9-8e8d5d161cbd","Type":"ContainerDied","Data":"d2258621aa77f1f3b0b7b57589996c3ee33ce39b0336b140950ac92396c2939a"} Feb 16 22:59:14 crc kubenswrapper[4865]: I0216 22:59:14.280525 4865 scope.go:117] "RemoveContainer" containerID="b133fa35efb6a431c096abb1f6c019d99dc18904b5486567757d2b2894c37fea" Feb 16 22:59:14 crc kubenswrapper[4865]: I0216 22:59:14.312991 4865 scope.go:117] "RemoveContainer" containerID="5858b5f3bb478d1ee9458065f37b104794242213d5aca83ce70c58a1b55b5796" Feb 16 22:59:14 crc kubenswrapper[4865]: I0216 22:59:14.321891 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-snd2w"] Feb 16 22:59:14 crc kubenswrapper[4865]: I0216 22:59:14.326863 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-snd2w"] Feb 16 22:59:14 crc kubenswrapper[4865]: I0216 22:59:14.346160 4865 scope.go:117] "RemoveContainer" containerID="ba50d20a3c8a10e85b2dd2b63ea6c419c3099bd7bc1cd307a7f20c20436c1e60" Feb 16 22:59:14 crc kubenswrapper[4865]: I0216 22:59:14.368381 4865 scope.go:117] "RemoveContainer" containerID="b133fa35efb6a431c096abb1f6c019d99dc18904b5486567757d2b2894c37fea" Feb 16 22:59:14 crc kubenswrapper[4865]: E0216 22:59:14.369000 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"b133fa35efb6a431c096abb1f6c019d99dc18904b5486567757d2b2894c37fea\": container with ID starting with b133fa35efb6a431c096abb1f6c019d99dc18904b5486567757d2b2894c37fea not found: ID does not exist" containerID="b133fa35efb6a431c096abb1f6c019d99dc18904b5486567757d2b2894c37fea" Feb 16 22:59:14 crc kubenswrapper[4865]: I0216 22:59:14.369038 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b133fa35efb6a431c096abb1f6c019d99dc18904b5486567757d2b2894c37fea"} err="failed to get container status \"b133fa35efb6a431c096abb1f6c019d99dc18904b5486567757d2b2894c37fea\": rpc error: code = NotFound desc = could not find container \"b133fa35efb6a431c096abb1f6c019d99dc18904b5486567757d2b2894c37fea\": container with ID starting with b133fa35efb6a431c096abb1f6c019d99dc18904b5486567757d2b2894c37fea not found: ID does not exist" Feb 16 22:59:14 crc kubenswrapper[4865]: I0216 22:59:14.369068 4865 scope.go:117] "RemoveContainer" containerID="5858b5f3bb478d1ee9458065f37b104794242213d5aca83ce70c58a1b55b5796" Feb 16 22:59:14 crc kubenswrapper[4865]: E0216 22:59:14.369589 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5858b5f3bb478d1ee9458065f37b104794242213d5aca83ce70c58a1b55b5796\": container with ID starting with 5858b5f3bb478d1ee9458065f37b104794242213d5aca83ce70c58a1b55b5796 not found: ID does not exist" containerID="5858b5f3bb478d1ee9458065f37b104794242213d5aca83ce70c58a1b55b5796" Feb 16 22:59:14 crc kubenswrapper[4865]: I0216 22:59:14.369710 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5858b5f3bb478d1ee9458065f37b104794242213d5aca83ce70c58a1b55b5796"} err="failed to get container status \"5858b5f3bb478d1ee9458065f37b104794242213d5aca83ce70c58a1b55b5796\": rpc error: code = NotFound desc = could not find container \"5858b5f3bb478d1ee9458065f37b104794242213d5aca83ce70c58a1b55b5796\": container with ID 
starting with 5858b5f3bb478d1ee9458065f37b104794242213d5aca83ce70c58a1b55b5796 not found: ID does not exist" Feb 16 22:59:14 crc kubenswrapper[4865]: I0216 22:59:14.369758 4865 scope.go:117] "RemoveContainer" containerID="ba50d20a3c8a10e85b2dd2b63ea6c419c3099bd7bc1cd307a7f20c20436c1e60" Feb 16 22:59:14 crc kubenswrapper[4865]: E0216 22:59:14.370107 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba50d20a3c8a10e85b2dd2b63ea6c419c3099bd7bc1cd307a7f20c20436c1e60\": container with ID starting with ba50d20a3c8a10e85b2dd2b63ea6c419c3099bd7bc1cd307a7f20c20436c1e60 not found: ID does not exist" containerID="ba50d20a3c8a10e85b2dd2b63ea6c419c3099bd7bc1cd307a7f20c20436c1e60" Feb 16 22:59:14 crc kubenswrapper[4865]: I0216 22:59:14.370135 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba50d20a3c8a10e85b2dd2b63ea6c419c3099bd7bc1cd307a7f20c20436c1e60"} err="failed to get container status \"ba50d20a3c8a10e85b2dd2b63ea6c419c3099bd7bc1cd307a7f20c20436c1e60\": rpc error: code = NotFound desc = could not find container \"ba50d20a3c8a10e85b2dd2b63ea6c419c3099bd7bc1cd307a7f20c20436c1e60\": container with ID starting with ba50d20a3c8a10e85b2dd2b63ea6c419c3099bd7bc1cd307a7f20c20436c1e60 not found: ID does not exist" Feb 16 22:59:14 crc kubenswrapper[4865]: I0216 22:59:14.425507 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ca8d5e0-7169-42fb-b6b9-8e8d5d161cbd" path="/var/lib/kubelet/pods/0ca8d5e0-7169-42fb-b6b9-8e8d5d161cbd/volumes" Feb 16 22:59:22 crc kubenswrapper[4865]: I0216 22:59:22.601784 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-7b7848d955-px2kr" Feb 16 22:59:23 crc kubenswrapper[4865]: I0216 22:59:23.523683 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-dbc9s"] Feb 16 22:59:23 crc kubenswrapper[4865]: 
E0216 22:59:23.524088 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ca8d5e0-7169-42fb-b6b9-8e8d5d161cbd" containerName="extract-utilities" Feb 16 22:59:23 crc kubenswrapper[4865]: I0216 22:59:23.524118 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ca8d5e0-7169-42fb-b6b9-8e8d5d161cbd" containerName="extract-utilities" Feb 16 22:59:23 crc kubenswrapper[4865]: E0216 22:59:23.524140 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ca8d5e0-7169-42fb-b6b9-8e8d5d161cbd" containerName="extract-content" Feb 16 22:59:23 crc kubenswrapper[4865]: I0216 22:59:23.524152 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ca8d5e0-7169-42fb-b6b9-8e8d5d161cbd" containerName="extract-content" Feb 16 22:59:23 crc kubenswrapper[4865]: E0216 22:59:23.524164 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ca8d5e0-7169-42fb-b6b9-8e8d5d161cbd" containerName="registry-server" Feb 16 22:59:23 crc kubenswrapper[4865]: I0216 22:59:23.524177 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ca8d5e0-7169-42fb-b6b9-8e8d5d161cbd" containerName="registry-server" Feb 16 22:59:23 crc kubenswrapper[4865]: I0216 22:59:23.524361 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ca8d5e0-7169-42fb-b6b9-8e8d5d161cbd" containerName="registry-server" Feb 16 22:59:23 crc kubenswrapper[4865]: I0216 22:59:23.527081 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-dbc9s" Feb 16 22:59:23 crc kubenswrapper[4865]: I0216 22:59:23.529838 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 16 22:59:23 crc kubenswrapper[4865]: I0216 22:59:23.530107 4865 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 16 22:59:23 crc kubenswrapper[4865]: I0216 22:59:23.531027 4865 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-mrwzk" Feb 16 22:59:23 crc kubenswrapper[4865]: I0216 22:59:23.534845 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/e1365192-a9fe-4c70-8118-7e76620b9c8c-frr-sockets\") pod \"frr-k8s-dbc9s\" (UID: \"e1365192-a9fe-4c70-8118-7e76620b9c8c\") " pod="metallb-system/frr-k8s-dbc9s" Feb 16 22:59:23 crc kubenswrapper[4865]: I0216 22:59:23.534893 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/e1365192-a9fe-4c70-8118-7e76620b9c8c-metrics\") pod \"frr-k8s-dbc9s\" (UID: \"e1365192-a9fe-4c70-8118-7e76620b9c8c\") " pod="metallb-system/frr-k8s-dbc9s" Feb 16 22:59:23 crc kubenswrapper[4865]: I0216 22:59:23.534930 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e1365192-a9fe-4c70-8118-7e76620b9c8c-metrics-certs\") pod \"frr-k8s-dbc9s\" (UID: \"e1365192-a9fe-4c70-8118-7e76620b9c8c\") " pod="metallb-system/frr-k8s-dbc9s" Feb 16 22:59:23 crc kubenswrapper[4865]: I0216 22:59:23.534961 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/e1365192-a9fe-4c70-8118-7e76620b9c8c-reloader\") pod \"frr-k8s-dbc9s\" (UID: 
\"e1365192-a9fe-4c70-8118-7e76620b9c8c\") " pod="metallb-system/frr-k8s-dbc9s" Feb 16 22:59:23 crc kubenswrapper[4865]: I0216 22:59:23.534979 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/e1365192-a9fe-4c70-8118-7e76620b9c8c-frr-startup\") pod \"frr-k8s-dbc9s\" (UID: \"e1365192-a9fe-4c70-8118-7e76620b9c8c\") " pod="metallb-system/frr-k8s-dbc9s" Feb 16 22:59:23 crc kubenswrapper[4865]: I0216 22:59:23.534998 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtg9x\" (UniqueName: \"kubernetes.io/projected/e1365192-a9fe-4c70-8118-7e76620b9c8c-kube-api-access-gtg9x\") pod \"frr-k8s-dbc9s\" (UID: \"e1365192-a9fe-4c70-8118-7e76620b9c8c\") " pod="metallb-system/frr-k8s-dbc9s" Feb 16 22:59:23 crc kubenswrapper[4865]: I0216 22:59:23.535015 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/e1365192-a9fe-4c70-8118-7e76620b9c8c-frr-conf\") pod \"frr-k8s-dbc9s\" (UID: \"e1365192-a9fe-4c70-8118-7e76620b9c8c\") " pod="metallb-system/frr-k8s-dbc9s" Feb 16 22:59:23 crc kubenswrapper[4865]: I0216 22:59:23.541325 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-q8f6g"] Feb 16 22:59:23 crc kubenswrapper[4865]: I0216 22:59:23.542568 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-q8f6g" Feb 16 22:59:23 crc kubenswrapper[4865]: I0216 22:59:23.547886 4865 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 16 22:59:23 crc kubenswrapper[4865]: I0216 22:59:23.553581 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-q8f6g"] Feb 16 22:59:23 crc kubenswrapper[4865]: I0216 22:59:23.642362 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e1365192-a9fe-4c70-8118-7e76620b9c8c-metrics-certs\") pod \"frr-k8s-dbc9s\" (UID: \"e1365192-a9fe-4c70-8118-7e76620b9c8c\") " pod="metallb-system/frr-k8s-dbc9s" Feb 16 22:59:23 crc kubenswrapper[4865]: I0216 22:59:23.642461 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dde900ad-54aa-4b98-ac05-bbae1b0ce210-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-q8f6g\" (UID: \"dde900ad-54aa-4b98-ac05-bbae1b0ce210\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-q8f6g" Feb 16 22:59:23 crc kubenswrapper[4865]: I0216 22:59:23.642525 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/e1365192-a9fe-4c70-8118-7e76620b9c8c-reloader\") pod \"frr-k8s-dbc9s\" (UID: \"e1365192-a9fe-4c70-8118-7e76620b9c8c\") " pod="metallb-system/frr-k8s-dbc9s" Feb 16 22:59:23 crc kubenswrapper[4865]: I0216 22:59:23.642574 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/e1365192-a9fe-4c70-8118-7e76620b9c8c-frr-startup\") pod \"frr-k8s-dbc9s\" (UID: \"e1365192-a9fe-4c70-8118-7e76620b9c8c\") " pod="metallb-system/frr-k8s-dbc9s" Feb 16 22:59:23 crc kubenswrapper[4865]: I0216 22:59:23.642609 4865 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtg9x\" (UniqueName: \"kubernetes.io/projected/e1365192-a9fe-4c70-8118-7e76620b9c8c-kube-api-access-gtg9x\") pod \"frr-k8s-dbc9s\" (UID: \"e1365192-a9fe-4c70-8118-7e76620b9c8c\") " pod="metallb-system/frr-k8s-dbc9s" Feb 16 22:59:23 crc kubenswrapper[4865]: I0216 22:59:23.642636 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/e1365192-a9fe-4c70-8118-7e76620b9c8c-frr-conf\") pod \"frr-k8s-dbc9s\" (UID: \"e1365192-a9fe-4c70-8118-7e76620b9c8c\") " pod="metallb-system/frr-k8s-dbc9s" Feb 16 22:59:23 crc kubenswrapper[4865]: I0216 22:59:23.642735 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/e1365192-a9fe-4c70-8118-7e76620b9c8c-frr-sockets\") pod \"frr-k8s-dbc9s\" (UID: \"e1365192-a9fe-4c70-8118-7e76620b9c8c\") " pod="metallb-system/frr-k8s-dbc9s" Feb 16 22:59:23 crc kubenswrapper[4865]: I0216 22:59:23.642767 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6q4j\" (UniqueName: \"kubernetes.io/projected/dde900ad-54aa-4b98-ac05-bbae1b0ce210-kube-api-access-c6q4j\") pod \"frr-k8s-webhook-server-78b44bf5bb-q8f6g\" (UID: \"dde900ad-54aa-4b98-ac05-bbae1b0ce210\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-q8f6g" Feb 16 22:59:23 crc kubenswrapper[4865]: I0216 22:59:23.642817 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/e1365192-a9fe-4c70-8118-7e76620b9c8c-metrics\") pod \"frr-k8s-dbc9s\" (UID: \"e1365192-a9fe-4c70-8118-7e76620b9c8c\") " pod="metallb-system/frr-k8s-dbc9s" Feb 16 22:59:23 crc kubenswrapper[4865]: E0216 22:59:23.643056 4865 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found 
Feb 16 22:59:23 crc kubenswrapper[4865]: E0216 22:59:23.643130 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e1365192-a9fe-4c70-8118-7e76620b9c8c-metrics-certs podName:e1365192-a9fe-4c70-8118-7e76620b9c8c nodeName:}" failed. No retries permitted until 2026-02-16 22:59:24.143105884 +0000 UTC m=+804.466812845 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e1365192-a9fe-4c70-8118-7e76620b9c8c-metrics-certs") pod "frr-k8s-dbc9s" (UID: "e1365192-a9fe-4c70-8118-7e76620b9c8c") : secret "frr-k8s-certs-secret" not found Feb 16 22:59:23 crc kubenswrapper[4865]: I0216 22:59:23.644428 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/e1365192-a9fe-4c70-8118-7e76620b9c8c-reloader\") pod \"frr-k8s-dbc9s\" (UID: \"e1365192-a9fe-4c70-8118-7e76620b9c8c\") " pod="metallb-system/frr-k8s-dbc9s" Feb 16 22:59:23 crc kubenswrapper[4865]: I0216 22:59:23.645157 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/e1365192-a9fe-4c70-8118-7e76620b9c8c-frr-startup\") pod \"frr-k8s-dbc9s\" (UID: \"e1365192-a9fe-4c70-8118-7e76620b9c8c\") " pod="metallb-system/frr-k8s-dbc9s" Feb 16 22:59:23 crc kubenswrapper[4865]: I0216 22:59:23.645669 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/e1365192-a9fe-4c70-8118-7e76620b9c8c-frr-conf\") pod \"frr-k8s-dbc9s\" (UID: \"e1365192-a9fe-4c70-8118-7e76620b9c8c\") " pod="metallb-system/frr-k8s-dbc9s" Feb 16 22:59:23 crc kubenswrapper[4865]: I0216 22:59:23.645858 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/e1365192-a9fe-4c70-8118-7e76620b9c8c-frr-sockets\") pod \"frr-k8s-dbc9s\" (UID: \"e1365192-a9fe-4c70-8118-7e76620b9c8c\") " 
pod="metallb-system/frr-k8s-dbc9s" Feb 16 22:59:23 crc kubenswrapper[4865]: I0216 22:59:23.646047 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/e1365192-a9fe-4c70-8118-7e76620b9c8c-metrics\") pod \"frr-k8s-dbc9s\" (UID: \"e1365192-a9fe-4c70-8118-7e76620b9c8c\") " pod="metallb-system/frr-k8s-dbc9s" Feb 16 22:59:23 crc kubenswrapper[4865]: I0216 22:59:23.650204 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-k4twk"] Feb 16 22:59:23 crc kubenswrapper[4865]: I0216 22:59:23.651258 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-k4twk" Feb 16 22:59:23 crc kubenswrapper[4865]: I0216 22:59:23.660842 4865 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 16 22:59:23 crc kubenswrapper[4865]: I0216 22:59:23.661122 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 16 22:59:23 crc kubenswrapper[4865]: I0216 22:59:23.661290 4865 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 16 22:59:23 crc kubenswrapper[4865]: I0216 22:59:23.661648 4865 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-86cnp" Feb 16 22:59:23 crc kubenswrapper[4865]: I0216 22:59:23.668928 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtg9x\" (UniqueName: \"kubernetes.io/projected/e1365192-a9fe-4c70-8118-7e76620b9c8c-kube-api-access-gtg9x\") pod \"frr-k8s-dbc9s\" (UID: \"e1365192-a9fe-4c70-8118-7e76620b9c8c\") " pod="metallb-system/frr-k8s-dbc9s" Feb 16 22:59:23 crc kubenswrapper[4865]: I0216 22:59:23.688832 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-69bbfbf88f-6xrxq"] Feb 16 22:59:23 crc kubenswrapper[4865]: I0216 22:59:23.689714 4865 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-69bbfbf88f-6xrxq" Feb 16 22:59:23 crc kubenswrapper[4865]: I0216 22:59:23.693633 4865 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 16 22:59:23 crc kubenswrapper[4865]: I0216 22:59:23.706730 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-6xrxq"] Feb 16 22:59:23 crc kubenswrapper[4865]: I0216 22:59:23.744804 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/7cec452b-64c9-41d6-ae80-458c9c316981-metallb-excludel2\") pod \"speaker-k4twk\" (UID: \"7cec452b-64c9-41d6-ae80-458c9c316981\") " pod="metallb-system/speaker-k4twk" Feb 16 22:59:23 crc kubenswrapper[4865]: I0216 22:59:23.744865 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7cec452b-64c9-41d6-ae80-458c9c316981-metrics-certs\") pod \"speaker-k4twk\" (UID: \"7cec452b-64c9-41d6-ae80-458c9c316981\") " pod="metallb-system/speaker-k4twk" Feb 16 22:59:23 crc kubenswrapper[4865]: I0216 22:59:23.744911 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6q4j\" (UniqueName: \"kubernetes.io/projected/dde900ad-54aa-4b98-ac05-bbae1b0ce210-kube-api-access-c6q4j\") pod \"frr-k8s-webhook-server-78b44bf5bb-q8f6g\" (UID: \"dde900ad-54aa-4b98-ac05-bbae1b0ce210\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-q8f6g" Feb 16 22:59:23 crc kubenswrapper[4865]: I0216 22:59:23.744943 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7cec452b-64c9-41d6-ae80-458c9c316981-memberlist\") pod \"speaker-k4twk\" (UID: \"7cec452b-64c9-41d6-ae80-458c9c316981\") " 
pod="metallb-system/speaker-k4twk" Feb 16 22:59:23 crc kubenswrapper[4865]: I0216 22:59:23.744973 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2vcz\" (UniqueName: \"kubernetes.io/projected/7cec452b-64c9-41d6-ae80-458c9c316981-kube-api-access-t2vcz\") pod \"speaker-k4twk\" (UID: \"7cec452b-64c9-41d6-ae80-458c9c316981\") " pod="metallb-system/speaker-k4twk" Feb 16 22:59:23 crc kubenswrapper[4865]: I0216 22:59:23.745046 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3d514685-83a5-4f3b-a89e-4490181e0109-cert\") pod \"controller-69bbfbf88f-6xrxq\" (UID: \"3d514685-83a5-4f3b-a89e-4490181e0109\") " pod="metallb-system/controller-69bbfbf88f-6xrxq" Feb 16 22:59:23 crc kubenswrapper[4865]: I0216 22:59:23.745069 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dde900ad-54aa-4b98-ac05-bbae1b0ce210-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-q8f6g\" (UID: \"dde900ad-54aa-4b98-ac05-bbae1b0ce210\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-q8f6g" Feb 16 22:59:23 crc kubenswrapper[4865]: I0216 22:59:23.745100 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5qln\" (UniqueName: \"kubernetes.io/projected/3d514685-83a5-4f3b-a89e-4490181e0109-kube-api-access-s5qln\") pod \"controller-69bbfbf88f-6xrxq\" (UID: \"3d514685-83a5-4f3b-a89e-4490181e0109\") " pod="metallb-system/controller-69bbfbf88f-6xrxq" Feb 16 22:59:23 crc kubenswrapper[4865]: I0216 22:59:23.745123 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3d514685-83a5-4f3b-a89e-4490181e0109-metrics-certs\") pod \"controller-69bbfbf88f-6xrxq\" (UID: \"3d514685-83a5-4f3b-a89e-4490181e0109\") " 
pod="metallb-system/controller-69bbfbf88f-6xrxq"
Feb 16 22:59:23 crc kubenswrapper[4865]: E0216 22:59:23.745612 4865 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found
Feb 16 22:59:23 crc kubenswrapper[4865]: E0216 22:59:23.745670 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dde900ad-54aa-4b98-ac05-bbae1b0ce210-cert podName:dde900ad-54aa-4b98-ac05-bbae1b0ce210 nodeName:}" failed. No retries permitted until 2026-02-16 22:59:24.245653124 +0000 UTC m=+804.569360085 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/dde900ad-54aa-4b98-ac05-bbae1b0ce210-cert") pod "frr-k8s-webhook-server-78b44bf5bb-q8f6g" (UID: "dde900ad-54aa-4b98-ac05-bbae1b0ce210") : secret "frr-k8s-webhook-server-cert" not found
Feb 16 22:59:23 crc kubenswrapper[4865]: I0216 22:59:23.781807 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6q4j\" (UniqueName: \"kubernetes.io/projected/dde900ad-54aa-4b98-ac05-bbae1b0ce210-kube-api-access-c6q4j\") pod \"frr-k8s-webhook-server-78b44bf5bb-q8f6g\" (UID: \"dde900ad-54aa-4b98-ac05-bbae1b0ce210\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-q8f6g"
Feb 16 22:59:23 crc kubenswrapper[4865]: I0216 22:59:23.845608 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3d514685-83a5-4f3b-a89e-4490181e0109-cert\") pod \"controller-69bbfbf88f-6xrxq\" (UID: \"3d514685-83a5-4f3b-a89e-4490181e0109\") " pod="metallb-system/controller-69bbfbf88f-6xrxq"
Feb 16 22:59:23 crc kubenswrapper[4865]: I0216 22:59:23.845678 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5qln\" (UniqueName: \"kubernetes.io/projected/3d514685-83a5-4f3b-a89e-4490181e0109-kube-api-access-s5qln\") pod \"controller-69bbfbf88f-6xrxq\" (UID: \"3d514685-83a5-4f3b-a89e-4490181e0109\") " pod="metallb-system/controller-69bbfbf88f-6xrxq"
Feb 16 22:59:23 crc kubenswrapper[4865]: I0216 22:59:23.845710 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3d514685-83a5-4f3b-a89e-4490181e0109-metrics-certs\") pod \"controller-69bbfbf88f-6xrxq\" (UID: \"3d514685-83a5-4f3b-a89e-4490181e0109\") " pod="metallb-system/controller-69bbfbf88f-6xrxq"
Feb 16 22:59:23 crc kubenswrapper[4865]: I0216 22:59:23.845752 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/7cec452b-64c9-41d6-ae80-458c9c316981-metallb-excludel2\") pod \"speaker-k4twk\" (UID: \"7cec452b-64c9-41d6-ae80-458c9c316981\") " pod="metallb-system/speaker-k4twk"
Feb 16 22:59:23 crc kubenswrapper[4865]: I0216 22:59:23.845786 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7cec452b-64c9-41d6-ae80-458c9c316981-metrics-certs\") pod \"speaker-k4twk\" (UID: \"7cec452b-64c9-41d6-ae80-458c9c316981\") " pod="metallb-system/speaker-k4twk"
Feb 16 22:59:23 crc kubenswrapper[4865]: I0216 22:59:23.845818 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7cec452b-64c9-41d6-ae80-458c9c316981-memberlist\") pod \"speaker-k4twk\" (UID: \"7cec452b-64c9-41d6-ae80-458c9c316981\") " pod="metallb-system/speaker-k4twk"
Feb 16 22:59:23 crc kubenswrapper[4865]: I0216 22:59:23.845844 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2vcz\" (UniqueName: \"kubernetes.io/projected/7cec452b-64c9-41d6-ae80-458c9c316981-kube-api-access-t2vcz\") pod \"speaker-k4twk\" (UID: \"7cec452b-64c9-41d6-ae80-458c9c316981\") " pod="metallb-system/speaker-k4twk"
Feb 16 22:59:23 crc kubenswrapper[4865]: E0216 22:59:23.846741 4865 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Feb 16 22:59:23 crc kubenswrapper[4865]: E0216 22:59:23.846879 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7cec452b-64c9-41d6-ae80-458c9c316981-memberlist podName:7cec452b-64c9-41d6-ae80-458c9c316981 nodeName:}" failed. No retries permitted until 2026-02-16 22:59:24.346842785 +0000 UTC m=+804.670549736 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/7cec452b-64c9-41d6-ae80-458c9c316981-memberlist") pod "speaker-k4twk" (UID: "7cec452b-64c9-41d6-ae80-458c9c316981") : secret "metallb-memberlist" not found
Feb 16 22:59:23 crc kubenswrapper[4865]: I0216 22:59:23.847063 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/7cec452b-64c9-41d6-ae80-458c9c316981-metallb-excludel2\") pod \"speaker-k4twk\" (UID: \"7cec452b-64c9-41d6-ae80-458c9c316981\") " pod="metallb-system/speaker-k4twk"
Feb 16 22:59:23 crc kubenswrapper[4865]: I0216 22:59:23.848560 4865 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Feb 16 22:59:23 crc kubenswrapper[4865]: I0216 22:59:23.851172 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3d514685-83a5-4f3b-a89e-4490181e0109-metrics-certs\") pod \"controller-69bbfbf88f-6xrxq\" (UID: \"3d514685-83a5-4f3b-a89e-4490181e0109\") " pod="metallb-system/controller-69bbfbf88f-6xrxq"
Feb 16 22:59:23 crc kubenswrapper[4865]: I0216 22:59:23.851575 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7cec452b-64c9-41d6-ae80-458c9c316981-metrics-certs\") pod \"speaker-k4twk\" (UID: \"7cec452b-64c9-41d6-ae80-458c9c316981\") " pod="metallb-system/speaker-k4twk"
Feb 16 22:59:23 crc kubenswrapper[4865]: I0216 22:59:23.860137 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3d514685-83a5-4f3b-a89e-4490181e0109-cert\") pod \"controller-69bbfbf88f-6xrxq\" (UID: \"3d514685-83a5-4f3b-a89e-4490181e0109\") " pod="metallb-system/controller-69bbfbf88f-6xrxq"
Feb 16 22:59:23 crc kubenswrapper[4865]: I0216 22:59:23.863252 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5qln\" (UniqueName: \"kubernetes.io/projected/3d514685-83a5-4f3b-a89e-4490181e0109-kube-api-access-s5qln\") pod \"controller-69bbfbf88f-6xrxq\" (UID: \"3d514685-83a5-4f3b-a89e-4490181e0109\") " pod="metallb-system/controller-69bbfbf88f-6xrxq"
Feb 16 22:59:23 crc kubenswrapper[4865]: I0216 22:59:23.867796 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2vcz\" (UniqueName: \"kubernetes.io/projected/7cec452b-64c9-41d6-ae80-458c9c316981-kube-api-access-t2vcz\") pod \"speaker-k4twk\" (UID: \"7cec452b-64c9-41d6-ae80-458c9c316981\") " pod="metallb-system/speaker-k4twk"
Feb 16 22:59:24 crc kubenswrapper[4865]: I0216 22:59:24.054805 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-69bbfbf88f-6xrxq"
Feb 16 22:59:24 crc kubenswrapper[4865]: I0216 22:59:24.149937 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e1365192-a9fe-4c70-8118-7e76620b9c8c-metrics-certs\") pod \"frr-k8s-dbc9s\" (UID: \"e1365192-a9fe-4c70-8118-7e76620b9c8c\") " pod="metallb-system/frr-k8s-dbc9s"
Feb 16 22:59:24 crc kubenswrapper[4865]: I0216 22:59:24.155124 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e1365192-a9fe-4c70-8118-7e76620b9c8c-metrics-certs\") pod \"frr-k8s-dbc9s\" (UID: \"e1365192-a9fe-4c70-8118-7e76620b9c8c\") " pod="metallb-system/frr-k8s-dbc9s"
Feb 16 22:59:24 crc kubenswrapper[4865]: I0216 22:59:24.251290 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dde900ad-54aa-4b98-ac05-bbae1b0ce210-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-q8f6g\" (UID: \"dde900ad-54aa-4b98-ac05-bbae1b0ce210\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-q8f6g"
Feb 16 22:59:24 crc kubenswrapper[4865]: I0216 22:59:24.266392 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dde900ad-54aa-4b98-ac05-bbae1b0ce210-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-q8f6g\" (UID: \"dde900ad-54aa-4b98-ac05-bbae1b0ce210\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-q8f6g"
Feb 16 22:59:24 crc kubenswrapper[4865]: I0216 22:59:24.353008 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7cec452b-64c9-41d6-ae80-458c9c316981-memberlist\") pod \"speaker-k4twk\" (UID: \"7cec452b-64c9-41d6-ae80-458c9c316981\") " pod="metallb-system/speaker-k4twk"
Feb 16 22:59:24 crc kubenswrapper[4865]: E0216 22:59:24.353266 4865 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Feb 16 22:59:24 crc kubenswrapper[4865]: E0216 22:59:24.353387 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7cec452b-64c9-41d6-ae80-458c9c316981-memberlist podName:7cec452b-64c9-41d6-ae80-458c9c316981 nodeName:}" failed. No retries permitted until 2026-02-16 22:59:25.35336793 +0000 UTC m=+805.677074891 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/7cec452b-64c9-41d6-ae80-458c9c316981-memberlist") pod "speaker-k4twk" (UID: "7cec452b-64c9-41d6-ae80-458c9c316981") : secret "metallb-memberlist" not found
Feb 16 22:59:24 crc kubenswrapper[4865]: I0216 22:59:24.444330 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-dbc9s"
Feb 16 22:59:24 crc kubenswrapper[4865]: I0216 22:59:24.466343 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-q8f6g"
Feb 16 22:59:24 crc kubenswrapper[4865]: I0216 22:59:24.595873 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-6xrxq"]
Feb 16 22:59:24 crc kubenswrapper[4865]: W0216 22:59:24.607780 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d514685_83a5_4f3b_a89e_4490181e0109.slice/crio-fa9e4cd269ae9e02e4e8d3a506d4050b1bf387a4ba35a4a91972b48690e6d635 WatchSource:0}: Error finding container fa9e4cd269ae9e02e4e8d3a506d4050b1bf387a4ba35a4a91972b48690e6d635: Status 404 returned error can't find the container with id fa9e4cd269ae9e02e4e8d3a506d4050b1bf387a4ba35a4a91972b48690e6d635
Feb 16 22:59:24 crc kubenswrapper[4865]: I0216 22:59:24.670805 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-q8f6g"]
Feb 16 22:59:25 crc kubenswrapper[4865]: I0216 22:59:25.373455 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-q8f6g" event={"ID":"dde900ad-54aa-4b98-ac05-bbae1b0ce210","Type":"ContainerStarted","Data":"4bd244d3d9dc55a8650c38cc6b974b32d564af51fd579ed262c76d328efe2588"}
Feb 16 22:59:25 crc kubenswrapper[4865]: I0216 22:59:25.375674 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dbc9s" event={"ID":"e1365192-a9fe-4c70-8118-7e76620b9c8c","Type":"ContainerStarted","Data":"7b3aaa5e61b589648a1ab1d5d5765dcee01a270322f58e6249d9e8c0061e701b"}
Feb 16 22:59:25 crc kubenswrapper[4865]: I0216 22:59:25.377648 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7cec452b-64c9-41d6-ae80-458c9c316981-memberlist\") pod \"speaker-k4twk\" (UID: \"7cec452b-64c9-41d6-ae80-458c9c316981\") " pod="metallb-system/speaker-k4twk"
Feb 16 22:59:25 crc kubenswrapper[4865]: I0216 22:59:25.378426 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-6xrxq" event={"ID":"3d514685-83a5-4f3b-a89e-4490181e0109","Type":"ContainerStarted","Data":"9e6c51fe6e44ac66ad0183fdf633e8701c6364403bc7c9c8a950c8169ba3ae10"}
Feb 16 22:59:25 crc kubenswrapper[4865]: I0216 22:59:25.378485 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-6xrxq" event={"ID":"3d514685-83a5-4f3b-a89e-4490181e0109","Type":"ContainerStarted","Data":"9773af7e991c27e065e8fc01ea227afa4cb307bd6f1cb30242d68c20ac5fb347"}
Feb 16 22:59:25 crc kubenswrapper[4865]: I0216 22:59:25.378498 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-6xrxq" event={"ID":"3d514685-83a5-4f3b-a89e-4490181e0109","Type":"ContainerStarted","Data":"fa9e4cd269ae9e02e4e8d3a506d4050b1bf387a4ba35a4a91972b48690e6d635"}
Feb 16 22:59:25 crc kubenswrapper[4865]: I0216 22:59:25.378657 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-6xrxq"
Feb 16 22:59:25 crc kubenswrapper[4865]: I0216 22:59:25.387042 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7cec452b-64c9-41d6-ae80-458c9c316981-memberlist\") pod \"speaker-k4twk\" (UID: \"7cec452b-64c9-41d6-ae80-458c9c316981\") " pod="metallb-system/speaker-k4twk"
Feb 16 22:59:25 crc kubenswrapper[4865]: I0216 22:59:25.405877 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-69bbfbf88f-6xrxq" podStartSLOduration=2.405819331 podStartE2EDuration="2.405819331s" podCreationTimestamp="2026-02-16 22:59:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:59:25.405165482 +0000 UTC m=+805.728872483" watchObservedRunningTime="2026-02-16 22:59:25.405819331 +0000 UTC m=+805.729526332"
Feb 16 22:59:25 crc kubenswrapper[4865]: I0216 22:59:25.547767 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-k4twk"
Feb 16 22:59:25 crc kubenswrapper[4865]: W0216 22:59:25.571533 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7cec452b_64c9_41d6_ae80_458c9c316981.slice/crio-9f56735a75b06cc0a2722cdb6271ba6f7b3d37fe088ade6de0897ec3abec5dae WatchSource:0}: Error finding container 9f56735a75b06cc0a2722cdb6271ba6f7b3d37fe088ade6de0897ec3abec5dae: Status 404 returned error can't find the container with id 9f56735a75b06cc0a2722cdb6271ba6f7b3d37fe088ade6de0897ec3abec5dae
Feb 16 22:59:26 crc kubenswrapper[4865]: I0216 22:59:26.400471 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-k4twk" event={"ID":"7cec452b-64c9-41d6-ae80-458c9c316981","Type":"ContainerStarted","Data":"0a310e9bf475b671aff6c0178e8cb6c58408e169996ed428de83ca1a4fce747f"}
Feb 16 22:59:26 crc kubenswrapper[4865]: I0216 22:59:26.400824 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-k4twk" event={"ID":"7cec452b-64c9-41d6-ae80-458c9c316981","Type":"ContainerStarted","Data":"0948e8dfd74f8d52b233527bfad9a2da3377dd4a21cc1fddaaf3d21f1640c961"}
Feb 16 22:59:26 crc kubenswrapper[4865]: I0216 22:59:26.400841 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-k4twk" event={"ID":"7cec452b-64c9-41d6-ae80-458c9c316981","Type":"ContainerStarted","Data":"9f56735a75b06cc0a2722cdb6271ba6f7b3d37fe088ade6de0897ec3abec5dae"}
Feb 16 22:59:26 crc kubenswrapper[4865]: I0216 22:59:26.401494 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-k4twk"
Feb 16 22:59:26 crc kubenswrapper[4865]: I0216 22:59:26.435244 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-k4twk" podStartSLOduration=3.435226256 podStartE2EDuration="3.435226256s" podCreationTimestamp="2026-02-16 22:59:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:59:26.432090816 +0000 UTC m=+806.755797777" watchObservedRunningTime="2026-02-16 22:59:26.435226256 +0000 UTC m=+806.758933217"
Feb 16 22:59:33 crc kubenswrapper[4865]: I0216 22:59:33.492056 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-q8f6g" event={"ID":"dde900ad-54aa-4b98-ac05-bbae1b0ce210","Type":"ContainerStarted","Data":"ab433a275e824b439b14a028385fe26dd1e105a36a6f641b8196ccd7f326ec4d"}
Feb 16 22:59:33 crc kubenswrapper[4865]: I0216 22:59:33.492916 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-q8f6g"
Feb 16 22:59:33 crc kubenswrapper[4865]: I0216 22:59:33.494788 4865 generic.go:334] "Generic (PLEG): container finished" podID="e1365192-a9fe-4c70-8118-7e76620b9c8c" containerID="1950f2be4e30cef09260a09bfc4fb4ff2a11b4631f649125b45f4661d64b0235" exitCode=0
Feb 16 22:59:33 crc kubenswrapper[4865]: I0216 22:59:33.494850 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dbc9s" event={"ID":"e1365192-a9fe-4c70-8118-7e76620b9c8c","Type":"ContainerDied","Data":"1950f2be4e30cef09260a09bfc4fb4ff2a11b4631f649125b45f4661d64b0235"}
Feb 16 22:59:33 crc kubenswrapper[4865]: I0216 22:59:33.517812 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-q8f6g" podStartSLOduration=1.997698898 podStartE2EDuration="10.517776377s" podCreationTimestamp="2026-02-16 22:59:23 +0000 UTC" firstStartedPulling="2026-02-16 22:59:24.682710198 +0000 UTC m=+805.006417169" lastFinishedPulling="2026-02-16 22:59:33.202787647 +0000 UTC m=+813.526494648" observedRunningTime="2026-02-16 22:59:33.510814269 +0000 UTC m=+813.834521230" watchObservedRunningTime="2026-02-16 22:59:33.517776377 +0000 UTC m=+813.841483378"
Feb 16 22:59:34 crc kubenswrapper[4865]: I0216 22:59:34.061379 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-6xrxq"
Feb 16 22:59:34 crc kubenswrapper[4865]: I0216 22:59:34.504672 4865 generic.go:334] "Generic (PLEG): container finished" podID="e1365192-a9fe-4c70-8118-7e76620b9c8c" containerID="03ed433a1f8826df783e543ed96680a835f313a6fd3fd0ef550046f808edacf0" exitCode=0
Feb 16 22:59:34 crc kubenswrapper[4865]: I0216 22:59:34.504800 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dbc9s" event={"ID":"e1365192-a9fe-4c70-8118-7e76620b9c8c","Type":"ContainerDied","Data":"03ed433a1f8826df783e543ed96680a835f313a6fd3fd0ef550046f808edacf0"}
Feb 16 22:59:35 crc kubenswrapper[4865]: I0216 22:59:35.225213 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vtpmx"]
Feb 16 22:59:35 crc kubenswrapper[4865]: I0216 22:59:35.227641 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vtpmx"
Feb 16 22:59:35 crc kubenswrapper[4865]: I0216 22:59:35.239827 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vtpmx"]
Feb 16 22:59:35 crc kubenswrapper[4865]: I0216 22:59:35.250106 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/027af059-294b-4cd4-bf8c-df5c660fa72c-utilities\") pod \"redhat-marketplace-vtpmx\" (UID: \"027af059-294b-4cd4-bf8c-df5c660fa72c\") " pod="openshift-marketplace/redhat-marketplace-vtpmx"
Feb 16 22:59:35 crc kubenswrapper[4865]: I0216 22:59:35.250203 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/027af059-294b-4cd4-bf8c-df5c660fa72c-catalog-content\") pod \"redhat-marketplace-vtpmx\" (UID: \"027af059-294b-4cd4-bf8c-df5c660fa72c\") " pod="openshift-marketplace/redhat-marketplace-vtpmx"
Feb 16 22:59:35 crc kubenswrapper[4865]: I0216 22:59:35.250350 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqhrp\" (UniqueName: \"kubernetes.io/projected/027af059-294b-4cd4-bf8c-df5c660fa72c-kube-api-access-mqhrp\") pod \"redhat-marketplace-vtpmx\" (UID: \"027af059-294b-4cd4-bf8c-df5c660fa72c\") " pod="openshift-marketplace/redhat-marketplace-vtpmx"
Feb 16 22:59:35 crc kubenswrapper[4865]: I0216 22:59:35.351493 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqhrp\" (UniqueName: \"kubernetes.io/projected/027af059-294b-4cd4-bf8c-df5c660fa72c-kube-api-access-mqhrp\") pod \"redhat-marketplace-vtpmx\" (UID: \"027af059-294b-4cd4-bf8c-df5c660fa72c\") " pod="openshift-marketplace/redhat-marketplace-vtpmx"
Feb 16 22:59:35 crc kubenswrapper[4865]: I0216 22:59:35.351602 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/027af059-294b-4cd4-bf8c-df5c660fa72c-utilities\") pod \"redhat-marketplace-vtpmx\" (UID: \"027af059-294b-4cd4-bf8c-df5c660fa72c\") " pod="openshift-marketplace/redhat-marketplace-vtpmx"
Feb 16 22:59:35 crc kubenswrapper[4865]: I0216 22:59:35.351635 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/027af059-294b-4cd4-bf8c-df5c660fa72c-catalog-content\") pod \"redhat-marketplace-vtpmx\" (UID: \"027af059-294b-4cd4-bf8c-df5c660fa72c\") " pod="openshift-marketplace/redhat-marketplace-vtpmx"
Feb 16 22:59:35 crc kubenswrapper[4865]: I0216 22:59:35.352179 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/027af059-294b-4cd4-bf8c-df5c660fa72c-catalog-content\") pod \"redhat-marketplace-vtpmx\" (UID: \"027af059-294b-4cd4-bf8c-df5c660fa72c\") " pod="openshift-marketplace/redhat-marketplace-vtpmx"
Feb 16 22:59:35 crc kubenswrapper[4865]: I0216 22:59:35.352231 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/027af059-294b-4cd4-bf8c-df5c660fa72c-utilities\") pod \"redhat-marketplace-vtpmx\" (UID: \"027af059-294b-4cd4-bf8c-df5c660fa72c\") " pod="openshift-marketplace/redhat-marketplace-vtpmx"
Feb 16 22:59:35 crc kubenswrapper[4865]: I0216 22:59:35.381831 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqhrp\" (UniqueName: \"kubernetes.io/projected/027af059-294b-4cd4-bf8c-df5c660fa72c-kube-api-access-mqhrp\") pod \"redhat-marketplace-vtpmx\" (UID: \"027af059-294b-4cd4-bf8c-df5c660fa72c\") " pod="openshift-marketplace/redhat-marketplace-vtpmx"
Feb 16 22:59:35 crc kubenswrapper[4865]: I0216 22:59:35.519919 4865 generic.go:334] "Generic (PLEG): container finished" podID="e1365192-a9fe-4c70-8118-7e76620b9c8c" containerID="41a0c65f5a38ee0cfb57d7da045b526d4d47c4bd648d7eccbb5138342591eb14" exitCode=0
Feb 16 22:59:35 crc kubenswrapper[4865]: I0216 22:59:35.520000 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dbc9s" event={"ID":"e1365192-a9fe-4c70-8118-7e76620b9c8c","Type":"ContainerDied","Data":"41a0c65f5a38ee0cfb57d7da045b526d4d47c4bd648d7eccbb5138342591eb14"}
Feb 16 22:59:35 crc kubenswrapper[4865]: I0216 22:59:35.559719 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vtpmx"
Feb 16 22:59:35 crc kubenswrapper[4865]: I0216 22:59:35.585214 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-k4twk"
Feb 16 22:59:36 crc kubenswrapper[4865]: I0216 22:59:36.067740 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vtpmx"]
Feb 16 22:59:36 crc kubenswrapper[4865]: W0216 22:59:36.075244 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod027af059_294b_4cd4_bf8c_df5c660fa72c.slice/crio-f5c65676d0c944854cd3fc4d1ce3d22d3fb3dab5f3c87ca2367e6e74225d5e49 WatchSource:0}: Error finding container f5c65676d0c944854cd3fc4d1ce3d22d3fb3dab5f3c87ca2367e6e74225d5e49: Status 404 returned error can't find the container with id f5c65676d0c944854cd3fc4d1ce3d22d3fb3dab5f3c87ca2367e6e74225d5e49
Feb 16 22:59:36 crc kubenswrapper[4865]: I0216 22:59:36.533868 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dbc9s" event={"ID":"e1365192-a9fe-4c70-8118-7e76620b9c8c","Type":"ContainerStarted","Data":"679345ef3e609a27fa8c6e9a8dd95c12f6aa12029f27a55e5c9793aa277ae47d"}
Feb 16 22:59:36 crc kubenswrapper[4865]: I0216 22:59:36.533928 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dbc9s" event={"ID":"e1365192-a9fe-4c70-8118-7e76620b9c8c","Type":"ContainerStarted","Data":"3f8e50fb36e1e2cc168204accb9d58630353bcec1529200ca418ac855d829028"}
Feb 16 22:59:36 crc kubenswrapper[4865]: I0216 22:59:36.533941 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dbc9s" event={"ID":"e1365192-a9fe-4c70-8118-7e76620b9c8c","Type":"ContainerStarted","Data":"ba81f656f74cd6222038feca177e14da446e234cdbd0cfb0fea75eb123686a75"}
Feb 16 22:59:36 crc kubenswrapper[4865]: I0216 22:59:36.533954 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dbc9s" event={"ID":"e1365192-a9fe-4c70-8118-7e76620b9c8c","Type":"ContainerStarted","Data":"f1014377b257dff377da1de4f3b6670483de823baedcb97b3d78bf0dd91c9118"}
Feb 16 22:59:36 crc kubenswrapper[4865]: I0216 22:59:36.536700 4865 generic.go:334] "Generic (PLEG): container finished" podID="027af059-294b-4cd4-bf8c-df5c660fa72c" containerID="56a2d601e8e8579eccd6187bf6c307943004f1da2dd9cd360778825668d8c023" exitCode=0
Feb 16 22:59:36 crc kubenswrapper[4865]: I0216 22:59:36.536740 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vtpmx" event={"ID":"027af059-294b-4cd4-bf8c-df5c660fa72c","Type":"ContainerDied","Data":"56a2d601e8e8579eccd6187bf6c307943004f1da2dd9cd360778825668d8c023"}
Feb 16 22:59:36 crc kubenswrapper[4865]: I0216 22:59:36.537475 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vtpmx" event={"ID":"027af059-294b-4cd4-bf8c-df5c660fa72c","Type":"ContainerStarted","Data":"f5c65676d0c944854cd3fc4d1ce3d22d3fb3dab5f3c87ca2367e6e74225d5e49"}
Feb 16 22:59:37 crc kubenswrapper[4865]: I0216 22:59:37.553620 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dbc9s" event={"ID":"e1365192-a9fe-4c70-8118-7e76620b9c8c","Type":"ContainerStarted","Data":"aaa29dfcaa4a105f454e710fb453da15b7aed9a6145d35038ad5bd30928baac3"}
Feb 16 22:59:37 crc kubenswrapper[4865]: I0216 22:59:37.553990 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dbc9s" event={"ID":"e1365192-a9fe-4c70-8118-7e76620b9c8c","Type":"ContainerStarted","Data":"7957d8952400dd474316cc784aac132114d6d2ade83926d55e506ca8bd58abc6"}
Feb 16 22:59:37 crc kubenswrapper[4865]: I0216 22:59:37.554053 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-dbc9s"
Feb 16 22:59:37 crc kubenswrapper[4865]: I0216 22:59:37.558319 4865 generic.go:334] "Generic (PLEG): container finished" podID="027af059-294b-4cd4-bf8c-df5c660fa72c" containerID="12dea7fea8ba690cd4b4368cb429c15cb596d411068bbee3a4a58b6c7e99179e" exitCode=0
Feb 16 22:59:37 crc kubenswrapper[4865]: I0216 22:59:37.558384 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vtpmx" event={"ID":"027af059-294b-4cd4-bf8c-df5c660fa72c","Type":"ContainerDied","Data":"12dea7fea8ba690cd4b4368cb429c15cb596d411068bbee3a4a58b6c7e99179e"}
Feb 16 22:59:37 crc kubenswrapper[4865]: I0216 22:59:37.590988 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-dbc9s" podStartSLOduration=5.947607191 podStartE2EDuration="14.59095679s" podCreationTimestamp="2026-02-16 22:59:23 +0000 UTC" firstStartedPulling="2026-02-16 22:59:24.570969016 +0000 UTC m=+804.894676017" lastFinishedPulling="2026-02-16 22:59:33.214318625 +0000 UTC m=+813.538025616" observedRunningTime="2026-02-16 22:59:37.5811426 +0000 UTC m=+817.904849561" watchObservedRunningTime="2026-02-16 22:59:37.59095679 +0000 UTC m=+817.914663791"
Feb 16 22:59:38 crc kubenswrapper[4865]: I0216 22:59:38.568757 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vtpmx" event={"ID":"027af059-294b-4cd4-bf8c-df5c660fa72c","Type":"ContainerStarted","Data":"c641702b6f5e08849870f9d23d6eb1087f01bd061e248a625e2b0d75dd3865a4"}
Feb 16 22:59:38 crc kubenswrapper[4865]: I0216 22:59:38.595593 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vtpmx" podStartSLOduration=2.203596338 podStartE2EDuration="3.595570828s" podCreationTimestamp="2026-02-16 22:59:35 +0000 UTC" firstStartedPulling="2026-02-16 22:59:36.539208719 +0000 UTC m=+816.862915700" lastFinishedPulling="2026-02-16 22:59:37.931183209 +0000 UTC m=+818.254890190" observedRunningTime="2026-02-16 22:59:38.595344471 +0000 UTC m=+818.919051442" watchObservedRunningTime="2026-02-16 22:59:38.595570828 +0000 UTC m=+818.919277789"
Feb 16 22:59:39 crc kubenswrapper[4865]: I0216 22:59:39.444805 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-dbc9s"
Feb 16 22:59:39 crc kubenswrapper[4865]: I0216 22:59:39.491897 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-dbc9s"
Feb 16 22:59:43 crc kubenswrapper[4865]: I0216 22:59:43.221840 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-6s9sv"]
Feb 16 22:59:43 crc kubenswrapper[4865]: I0216 22:59:43.223934 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-6s9sv"
Feb 16 22:59:43 crc kubenswrapper[4865]: I0216 22:59:43.231401 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-vlmjs"
Feb 16 22:59:43 crc kubenswrapper[4865]: I0216 22:59:43.231518 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt"
Feb 16 22:59:43 crc kubenswrapper[4865]: I0216 22:59:43.243418 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt"
Feb 16 22:59:43 crc kubenswrapper[4865]: I0216 22:59:43.263583 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-6s9sv"]
Feb 16 22:59:43 crc kubenswrapper[4865]: I0216 22:59:43.299332 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4f2sh\" (UniqueName: \"kubernetes.io/projected/cd18ca76-886f-4f0d-af86-56b1fa64b897-kube-api-access-4f2sh\") pod \"openstack-operator-index-6s9sv\" (UID: \"cd18ca76-886f-4f0d-af86-56b1fa64b897\") " pod="openstack-operators/openstack-operator-index-6s9sv"
Feb 16 22:59:43 crc kubenswrapper[4865]: I0216 22:59:43.399967 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4f2sh\" (UniqueName: \"kubernetes.io/projected/cd18ca76-886f-4f0d-af86-56b1fa64b897-kube-api-access-4f2sh\") pod \"openstack-operator-index-6s9sv\" (UID: \"cd18ca76-886f-4f0d-af86-56b1fa64b897\") " pod="openstack-operators/openstack-operator-index-6s9sv"
Feb 16 22:59:43 crc kubenswrapper[4865]: I0216 22:59:43.427488 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4f2sh\" (UniqueName: \"kubernetes.io/projected/cd18ca76-886f-4f0d-af86-56b1fa64b897-kube-api-access-4f2sh\") pod \"openstack-operator-index-6s9sv\" (UID: \"cd18ca76-886f-4f0d-af86-56b1fa64b897\") " pod="openstack-operators/openstack-operator-index-6s9sv"
Feb 16 22:59:43 crc kubenswrapper[4865]: I0216 22:59:43.564130 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-6s9sv"
Feb 16 22:59:43 crc kubenswrapper[4865]: I0216 22:59:43.916038 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-6s9sv"]
Feb 16 22:59:44 crc kubenswrapper[4865]: I0216 22:59:44.475033 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-q8f6g"
Feb 16 22:59:44 crc kubenswrapper[4865]: I0216 22:59:44.629647 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-6s9sv" event={"ID":"cd18ca76-886f-4f0d-af86-56b1fa64b897","Type":"ContainerStarted","Data":"e15728e745005b010764800854f86ef5225077cb38974b2e0167f30b87250158"}
Feb 16 22:59:45 crc kubenswrapper[4865]: I0216 22:59:45.569527 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vtpmx"
Feb 16 22:59:45 crc kubenswrapper[4865]: I0216 22:59:45.570303 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vtpmx"
Feb 16 22:59:45 crc kubenswrapper[4865]: I0216 22:59:45.631783 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vtpmx"
Feb 16 22:59:45 crc kubenswrapper[4865]: I0216 22:59:45.706001 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vtpmx"
Feb 16 22:59:47 crc kubenswrapper[4865]: I0216 22:59:47.667777 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-6s9sv" event={"ID":"cd18ca76-886f-4f0d-af86-56b1fa64b897","Type":"ContainerStarted","Data":"6d93c697b8c9e46712080410cd299a0b2746032de83691cf9d2d536d1dcad179"}
Feb 16 22:59:47 crc kubenswrapper[4865]: I0216 22:59:47.702264 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-6s9sv" podStartSLOduration=1.7042072560000001 podStartE2EDuration="4.702225841s" podCreationTimestamp="2026-02-16 22:59:43 +0000 UTC" firstStartedPulling="2026-02-16 22:59:43.925611503 +0000 UTC m=+824.249318464" lastFinishedPulling="2026-02-16 22:59:46.923630048 +0000 UTC m=+827.247337049" observedRunningTime="2026-02-16 22:59:47.690878448 +0000 UTC m=+828.014585459" watchObservedRunningTime="2026-02-16 22:59:47.702225841 +0000 UTC m=+828.025932852"
Feb 16 22:59:49 crc kubenswrapper[4865]: I0216 22:59:49.603482 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-6s9sv"]
Feb 16 22:59:49 crc kubenswrapper[4865]: I0216 22:59:49.686004 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-6s9sv" podUID="cd18ca76-886f-4f0d-af86-56b1fa64b897" containerName="registry-server" containerID="cri-o://6d93c697b8c9e46712080410cd299a0b2746032de83691cf9d2d536d1dcad179" gracePeriod=2
Feb 16 22:59:50 crc kubenswrapper[4865]: I0216 22:59:50.122612 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-6s9sv"
Feb 16 22:59:50 crc kubenswrapper[4865]: I0216 22:59:50.251000 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4f2sh\" (UniqueName: \"kubernetes.io/projected/cd18ca76-886f-4f0d-af86-56b1fa64b897-kube-api-access-4f2sh\") pod \"cd18ca76-886f-4f0d-af86-56b1fa64b897\" (UID: \"cd18ca76-886f-4f0d-af86-56b1fa64b897\") "
Feb 16 22:59:50 crc kubenswrapper[4865]: I0216 22:59:50.261643 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd18ca76-886f-4f0d-af86-56b1fa64b897-kube-api-access-4f2sh" (OuterVolumeSpecName: "kube-api-access-4f2sh") pod "cd18ca76-886f-4f0d-af86-56b1fa64b897" (UID: "cd18ca76-886f-4f0d-af86-56b1fa64b897"). InnerVolumeSpecName "kube-api-access-4f2sh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 22:59:50 crc kubenswrapper[4865]: I0216 22:59:50.353133 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4f2sh\" (UniqueName: \"kubernetes.io/projected/cd18ca76-886f-4f0d-af86-56b1fa64b897-kube-api-access-4f2sh\") on node \"crc\" DevicePath \"\""
Feb 16 22:59:50 crc kubenswrapper[4865]: I0216 22:59:50.411949 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-ndhqn"]
Feb 16 22:59:50 crc kubenswrapper[4865]: E0216 22:59:50.412400 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd18ca76-886f-4f0d-af86-56b1fa64b897" containerName="registry-server"
Feb 16 22:59:50 crc kubenswrapper[4865]: I0216 22:59:50.412423 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd18ca76-886f-4f0d-af86-56b1fa64b897" containerName="registry-server"
Feb 16 22:59:50 crc kubenswrapper[4865]: I0216 22:59:50.412631 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd18ca76-886f-4f0d-af86-56b1fa64b897" containerName="registry-server"
Feb 16 22:59:50 crc kubenswrapper[4865]: I0216 22:59:50.413264 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-ndhqn"
Feb 16 22:59:50 crc kubenswrapper[4865]: I0216 22:59:50.457064 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-ndhqn"]
Feb 16 22:59:50 crc kubenswrapper[4865]: I0216 22:59:50.561038 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6wgm\" (UniqueName: \"kubernetes.io/projected/4adfbea3-c2d3-45a2-8858-8a1f867ebf5b-kube-api-access-n6wgm\") pod \"openstack-operator-index-ndhqn\" (UID: \"4adfbea3-c2d3-45a2-8858-8a1f867ebf5b\") " pod="openstack-operators/openstack-operator-index-ndhqn"
Feb 16 22:59:50 crc kubenswrapper[4865]: I0216 22:59:50.663241 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6wgm\" (UniqueName: \"kubernetes.io/projected/4adfbea3-c2d3-45a2-8858-8a1f867ebf5b-kube-api-access-n6wgm\") pod \"openstack-operator-index-ndhqn\" (UID: \"4adfbea3-c2d3-45a2-8858-8a1f867ebf5b\") " pod="openstack-operators/openstack-operator-index-ndhqn"
Feb 16 22:59:50 crc kubenswrapper[4865]: I0216 22:59:50.696767 4865 generic.go:334] "Generic (PLEG): container finished" podID="cd18ca76-886f-4f0d-af86-56b1fa64b897" containerID="6d93c697b8c9e46712080410cd299a0b2746032de83691cf9d2d536d1dcad179" exitCode=0
Feb 16 22:59:50 crc kubenswrapper[4865]: I0216 22:59:50.696885 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-6s9sv"
Feb 16 22:59:50 crc kubenswrapper[4865]: I0216 22:59:50.696865 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-6s9sv" event={"ID":"cd18ca76-886f-4f0d-af86-56b1fa64b897","Type":"ContainerDied","Data":"6d93c697b8c9e46712080410cd299a0b2746032de83691cf9d2d536d1dcad179"}
Feb 16 22:59:50 crc kubenswrapper[4865]: I0216 22:59:50.697012 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-6s9sv" event={"ID":"cd18ca76-886f-4f0d-af86-56b1fa64b897","Type":"ContainerDied","Data":"e15728e745005b010764800854f86ef5225077cb38974b2e0167f30b87250158"}
Feb 16 22:59:50 crc kubenswrapper[4865]: I0216 22:59:50.697061 4865 scope.go:117] "RemoveContainer" containerID="6d93c697b8c9e46712080410cd299a0b2746032de83691cf9d2d536d1dcad179"
Feb 16 22:59:50 crc kubenswrapper[4865]: I0216 22:59:50.700782 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6wgm\" (UniqueName: \"kubernetes.io/projected/4adfbea3-c2d3-45a2-8858-8a1f867ebf5b-kube-api-access-n6wgm\") pod \"openstack-operator-index-ndhqn\" (UID: \"4adfbea3-c2d3-45a2-8858-8a1f867ebf5b\") " pod="openstack-operators/openstack-operator-index-ndhqn"
Feb 16 22:59:50 crc kubenswrapper[4865]: I0216 22:59:50.758824 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-6s9sv"]
Feb 16 22:59:50 crc kubenswrapper[4865]: I0216 22:59:50.765339 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-6s9sv"]
Feb 16 22:59:50 crc kubenswrapper[4865]: I0216 22:59:50.767774 4865 scope.go:117] "RemoveContainer" containerID="6d93c697b8c9e46712080410cd299a0b2746032de83691cf9d2d536d1dcad179"
Feb 16 22:59:50 crc kubenswrapper[4865]: E0216 22:59:50.768788 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"6d93c697b8c9e46712080410cd299a0b2746032de83691cf9d2d536d1dcad179\": container with ID starting with 6d93c697b8c9e46712080410cd299a0b2746032de83691cf9d2d536d1dcad179 not found: ID does not exist" containerID="6d93c697b8c9e46712080410cd299a0b2746032de83691cf9d2d536d1dcad179" Feb 16 22:59:50 crc kubenswrapper[4865]: I0216 22:59:50.768828 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d93c697b8c9e46712080410cd299a0b2746032de83691cf9d2d536d1dcad179"} err="failed to get container status \"6d93c697b8c9e46712080410cd299a0b2746032de83691cf9d2d536d1dcad179\": rpc error: code = NotFound desc = could not find container \"6d93c697b8c9e46712080410cd299a0b2746032de83691cf9d2d536d1dcad179\": container with ID starting with 6d93c697b8c9e46712080410cd299a0b2746032de83691cf9d2d536d1dcad179 not found: ID does not exist" Feb 16 22:59:50 crc kubenswrapper[4865]: I0216 22:59:50.785978 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-ndhqn" Feb 16 22:59:50 crc kubenswrapper[4865]: I0216 22:59:50.801219 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vtpmx"] Feb 16 22:59:50 crc kubenswrapper[4865]: I0216 22:59:50.801540 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vtpmx" podUID="027af059-294b-4cd4-bf8c-df5c660fa72c" containerName="registry-server" containerID="cri-o://c641702b6f5e08849870f9d23d6eb1087f01bd061e248a625e2b0d75dd3865a4" gracePeriod=2 Feb 16 22:59:51 crc kubenswrapper[4865]: I0216 22:59:51.254331 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-ndhqn"] Feb 16 22:59:51 crc kubenswrapper[4865]: W0216 22:59:51.261645 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4adfbea3_c2d3_45a2_8858_8a1f867ebf5b.slice/crio-a4ba9264c9b05d59619cf2ecf89debc33d7b0336d8828c8040ed6cb6d4961b6a WatchSource:0}: Error finding container a4ba9264c9b05d59619cf2ecf89debc33d7b0336d8828c8040ed6cb6d4961b6a: Status 404 returned error can't find the container with id a4ba9264c9b05d59619cf2ecf89debc33d7b0336d8828c8040ed6cb6d4961b6a Feb 16 22:59:51 crc kubenswrapper[4865]: I0216 22:59:51.289085 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vtpmx" Feb 16 22:59:51 crc kubenswrapper[4865]: I0216 22:59:51.372768 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/027af059-294b-4cd4-bf8c-df5c660fa72c-utilities\") pod \"027af059-294b-4cd4-bf8c-df5c660fa72c\" (UID: \"027af059-294b-4cd4-bf8c-df5c660fa72c\") " Feb 16 22:59:51 crc kubenswrapper[4865]: I0216 22:59:51.373035 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqhrp\" (UniqueName: \"kubernetes.io/projected/027af059-294b-4cd4-bf8c-df5c660fa72c-kube-api-access-mqhrp\") pod \"027af059-294b-4cd4-bf8c-df5c660fa72c\" (UID: \"027af059-294b-4cd4-bf8c-df5c660fa72c\") " Feb 16 22:59:51 crc kubenswrapper[4865]: I0216 22:59:51.373114 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/027af059-294b-4cd4-bf8c-df5c660fa72c-catalog-content\") pod \"027af059-294b-4cd4-bf8c-df5c660fa72c\" (UID: \"027af059-294b-4cd4-bf8c-df5c660fa72c\") " Feb 16 22:59:51 crc kubenswrapper[4865]: I0216 22:59:51.373787 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/027af059-294b-4cd4-bf8c-df5c660fa72c-utilities" (OuterVolumeSpecName: "utilities") pod "027af059-294b-4cd4-bf8c-df5c660fa72c" (UID: "027af059-294b-4cd4-bf8c-df5c660fa72c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:59:51 crc kubenswrapper[4865]: I0216 22:59:51.381745 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/027af059-294b-4cd4-bf8c-df5c660fa72c-kube-api-access-mqhrp" (OuterVolumeSpecName: "kube-api-access-mqhrp") pod "027af059-294b-4cd4-bf8c-df5c660fa72c" (UID: "027af059-294b-4cd4-bf8c-df5c660fa72c"). InnerVolumeSpecName "kube-api-access-mqhrp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 22:59:51 crc kubenswrapper[4865]: I0216 22:59:51.397299 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/027af059-294b-4cd4-bf8c-df5c660fa72c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "027af059-294b-4cd4-bf8c-df5c660fa72c" (UID: "027af059-294b-4cd4-bf8c-df5c660fa72c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 22:59:51 crc kubenswrapper[4865]: I0216 22:59:51.476082 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/027af059-294b-4cd4-bf8c-df5c660fa72c-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 22:59:51 crc kubenswrapper[4865]: I0216 22:59:51.476143 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqhrp\" (UniqueName: \"kubernetes.io/projected/027af059-294b-4cd4-bf8c-df5c660fa72c-kube-api-access-mqhrp\") on node \"crc\" DevicePath \"\"" Feb 16 22:59:51 crc kubenswrapper[4865]: I0216 22:59:51.476168 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/027af059-294b-4cd4-bf8c-df5c660fa72c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 22:59:51 crc kubenswrapper[4865]: I0216 22:59:51.705607 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-ndhqn" event={"ID":"4adfbea3-c2d3-45a2-8858-8a1f867ebf5b","Type":"ContainerStarted","Data":"ff44cc75dc1c5aa04bf5c0940ebda5e7e577e6462f46b6527b7d45789df25538"} Feb 16 22:59:51 crc kubenswrapper[4865]: I0216 22:59:51.707643 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-ndhqn" event={"ID":"4adfbea3-c2d3-45a2-8858-8a1f867ebf5b","Type":"ContainerStarted","Data":"a4ba9264c9b05d59619cf2ecf89debc33d7b0336d8828c8040ed6cb6d4961b6a"} Feb 16 22:59:51 crc 
kubenswrapper[4865]: I0216 22:59:51.708330 4865 generic.go:334] "Generic (PLEG): container finished" podID="027af059-294b-4cd4-bf8c-df5c660fa72c" containerID="c641702b6f5e08849870f9d23d6eb1087f01bd061e248a625e2b0d75dd3865a4" exitCode=0 Feb 16 22:59:51 crc kubenswrapper[4865]: I0216 22:59:51.708463 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vtpmx" Feb 16 22:59:51 crc kubenswrapper[4865]: I0216 22:59:51.708376 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vtpmx" event={"ID":"027af059-294b-4cd4-bf8c-df5c660fa72c","Type":"ContainerDied","Data":"c641702b6f5e08849870f9d23d6eb1087f01bd061e248a625e2b0d75dd3865a4"} Feb 16 22:59:51 crc kubenswrapper[4865]: I0216 22:59:51.708615 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vtpmx" event={"ID":"027af059-294b-4cd4-bf8c-df5c660fa72c","Type":"ContainerDied","Data":"f5c65676d0c944854cd3fc4d1ce3d22d3fb3dab5f3c87ca2367e6e74225d5e49"} Feb 16 22:59:51 crc kubenswrapper[4865]: I0216 22:59:51.708639 4865 scope.go:117] "RemoveContainer" containerID="c641702b6f5e08849870f9d23d6eb1087f01bd061e248a625e2b0d75dd3865a4" Feb 16 22:59:51 crc kubenswrapper[4865]: I0216 22:59:51.738360 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-ndhqn" podStartSLOduration=1.6745511130000001 podStartE2EDuration="1.738341319s" podCreationTimestamp="2026-02-16 22:59:50 +0000 UTC" firstStartedPulling="2026-02-16 22:59:51.266675347 +0000 UTC m=+831.590382308" lastFinishedPulling="2026-02-16 22:59:51.330465543 +0000 UTC m=+831.654172514" observedRunningTime="2026-02-16 22:59:51.736840966 +0000 UTC m=+832.060547937" watchObservedRunningTime="2026-02-16 22:59:51.738341319 +0000 UTC m=+832.062048270" Feb 16 22:59:51 crc kubenswrapper[4865]: I0216 22:59:51.743051 4865 scope.go:117] "RemoveContainer" 
containerID="12dea7fea8ba690cd4b4368cb429c15cb596d411068bbee3a4a58b6c7e99179e" Feb 16 22:59:51 crc kubenswrapper[4865]: I0216 22:59:51.754493 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vtpmx"] Feb 16 22:59:51 crc kubenswrapper[4865]: I0216 22:59:51.760387 4865 scope.go:117] "RemoveContainer" containerID="56a2d601e8e8579eccd6187bf6c307943004f1da2dd9cd360778825668d8c023" Feb 16 22:59:51 crc kubenswrapper[4865]: I0216 22:59:51.766675 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vtpmx"] Feb 16 22:59:51 crc kubenswrapper[4865]: I0216 22:59:51.777569 4865 scope.go:117] "RemoveContainer" containerID="c641702b6f5e08849870f9d23d6eb1087f01bd061e248a625e2b0d75dd3865a4" Feb 16 22:59:51 crc kubenswrapper[4865]: E0216 22:59:51.778179 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c641702b6f5e08849870f9d23d6eb1087f01bd061e248a625e2b0d75dd3865a4\": container with ID starting with c641702b6f5e08849870f9d23d6eb1087f01bd061e248a625e2b0d75dd3865a4 not found: ID does not exist" containerID="c641702b6f5e08849870f9d23d6eb1087f01bd061e248a625e2b0d75dd3865a4" Feb 16 22:59:51 crc kubenswrapper[4865]: I0216 22:59:51.778227 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c641702b6f5e08849870f9d23d6eb1087f01bd061e248a625e2b0d75dd3865a4"} err="failed to get container status \"c641702b6f5e08849870f9d23d6eb1087f01bd061e248a625e2b0d75dd3865a4\": rpc error: code = NotFound desc = could not find container \"c641702b6f5e08849870f9d23d6eb1087f01bd061e248a625e2b0d75dd3865a4\": container with ID starting with c641702b6f5e08849870f9d23d6eb1087f01bd061e248a625e2b0d75dd3865a4 not found: ID does not exist" Feb 16 22:59:51 crc kubenswrapper[4865]: I0216 22:59:51.778259 4865 scope.go:117] "RemoveContainer" 
containerID="12dea7fea8ba690cd4b4368cb429c15cb596d411068bbee3a4a58b6c7e99179e" Feb 16 22:59:51 crc kubenswrapper[4865]: E0216 22:59:51.778665 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12dea7fea8ba690cd4b4368cb429c15cb596d411068bbee3a4a58b6c7e99179e\": container with ID starting with 12dea7fea8ba690cd4b4368cb429c15cb596d411068bbee3a4a58b6c7e99179e not found: ID does not exist" containerID="12dea7fea8ba690cd4b4368cb429c15cb596d411068bbee3a4a58b6c7e99179e" Feb 16 22:59:51 crc kubenswrapper[4865]: I0216 22:59:51.778778 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12dea7fea8ba690cd4b4368cb429c15cb596d411068bbee3a4a58b6c7e99179e"} err="failed to get container status \"12dea7fea8ba690cd4b4368cb429c15cb596d411068bbee3a4a58b6c7e99179e\": rpc error: code = NotFound desc = could not find container \"12dea7fea8ba690cd4b4368cb429c15cb596d411068bbee3a4a58b6c7e99179e\": container with ID starting with 12dea7fea8ba690cd4b4368cb429c15cb596d411068bbee3a4a58b6c7e99179e not found: ID does not exist" Feb 16 22:59:51 crc kubenswrapper[4865]: I0216 22:59:51.778874 4865 scope.go:117] "RemoveContainer" containerID="56a2d601e8e8579eccd6187bf6c307943004f1da2dd9cd360778825668d8c023" Feb 16 22:59:51 crc kubenswrapper[4865]: E0216 22:59:51.779301 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56a2d601e8e8579eccd6187bf6c307943004f1da2dd9cd360778825668d8c023\": container with ID starting with 56a2d601e8e8579eccd6187bf6c307943004f1da2dd9cd360778825668d8c023 not found: ID does not exist" containerID="56a2d601e8e8579eccd6187bf6c307943004f1da2dd9cd360778825668d8c023" Feb 16 22:59:51 crc kubenswrapper[4865]: I0216 22:59:51.779327 4865 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"56a2d601e8e8579eccd6187bf6c307943004f1da2dd9cd360778825668d8c023"} err="failed to get container status \"56a2d601e8e8579eccd6187bf6c307943004f1da2dd9cd360778825668d8c023\": rpc error: code = NotFound desc = could not find container \"56a2d601e8e8579eccd6187bf6c307943004f1da2dd9cd360778825668d8c023\": container with ID starting with 56a2d601e8e8579eccd6187bf6c307943004f1da2dd9cd360778825668d8c023 not found: ID does not exist" Feb 16 22:59:52 crc kubenswrapper[4865]: I0216 22:59:52.431803 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="027af059-294b-4cd4-bf8c-df5c660fa72c" path="/var/lib/kubelet/pods/027af059-294b-4cd4-bf8c-df5c660fa72c/volumes" Feb 16 22:59:52 crc kubenswrapper[4865]: I0216 22:59:52.433590 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd18ca76-886f-4f0d-af86-56b1fa64b897" path="/var/lib/kubelet/pods/cd18ca76-886f-4f0d-af86-56b1fa64b897/volumes" Feb 16 22:59:54 crc kubenswrapper[4865]: I0216 22:59:54.449097 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-dbc9s" Feb 16 23:00:00 crc kubenswrapper[4865]: I0216 23:00:00.169365 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521380-jlcrx"] Feb 16 23:00:00 crc kubenswrapper[4865]: E0216 23:00:00.170151 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="027af059-294b-4cd4-bf8c-df5c660fa72c" containerName="registry-server" Feb 16 23:00:00 crc kubenswrapper[4865]: I0216 23:00:00.170171 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="027af059-294b-4cd4-bf8c-df5c660fa72c" containerName="registry-server" Feb 16 23:00:00 crc kubenswrapper[4865]: E0216 23:00:00.170186 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="027af059-294b-4cd4-bf8c-df5c660fa72c" containerName="extract-content" Feb 16 23:00:00 crc kubenswrapper[4865]: I0216 23:00:00.170197 4865 
state_mem.go:107] "Deleted CPUSet assignment" podUID="027af059-294b-4cd4-bf8c-df5c660fa72c" containerName="extract-content" Feb 16 23:00:00 crc kubenswrapper[4865]: E0216 23:00:00.170223 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="027af059-294b-4cd4-bf8c-df5c660fa72c" containerName="extract-utilities" Feb 16 23:00:00 crc kubenswrapper[4865]: I0216 23:00:00.170232 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="027af059-294b-4cd4-bf8c-df5c660fa72c" containerName="extract-utilities" Feb 16 23:00:00 crc kubenswrapper[4865]: I0216 23:00:00.170420 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="027af059-294b-4cd4-bf8c-df5c660fa72c" containerName="registry-server" Feb 16 23:00:00 crc kubenswrapper[4865]: I0216 23:00:00.171020 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521380-jlcrx" Feb 16 23:00:00 crc kubenswrapper[4865]: I0216 23:00:00.173329 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 23:00:00 crc kubenswrapper[4865]: I0216 23:00:00.174561 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 23:00:00 crc kubenswrapper[4865]: I0216 23:00:00.189230 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521380-jlcrx"] Feb 16 23:00:00 crc kubenswrapper[4865]: I0216 23:00:00.232958 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bdc821bc-0cee-45eb-a017-044f3fe176d1-config-volume\") pod \"collect-profiles-29521380-jlcrx\" (UID: \"bdc821bc-0cee-45eb-a017-044f3fe176d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521380-jlcrx" Feb 16 23:00:00 crc 
kubenswrapper[4865]: I0216 23:00:00.233091 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bdc821bc-0cee-45eb-a017-044f3fe176d1-secret-volume\") pod \"collect-profiles-29521380-jlcrx\" (UID: \"bdc821bc-0cee-45eb-a017-044f3fe176d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521380-jlcrx" Feb 16 23:00:00 crc kubenswrapper[4865]: I0216 23:00:00.233208 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj64f\" (UniqueName: \"kubernetes.io/projected/bdc821bc-0cee-45eb-a017-044f3fe176d1-kube-api-access-bj64f\") pod \"collect-profiles-29521380-jlcrx\" (UID: \"bdc821bc-0cee-45eb-a017-044f3fe176d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521380-jlcrx" Feb 16 23:00:00 crc kubenswrapper[4865]: I0216 23:00:00.335289 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bj64f\" (UniqueName: \"kubernetes.io/projected/bdc821bc-0cee-45eb-a017-044f3fe176d1-kube-api-access-bj64f\") pod \"collect-profiles-29521380-jlcrx\" (UID: \"bdc821bc-0cee-45eb-a017-044f3fe176d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521380-jlcrx" Feb 16 23:00:00 crc kubenswrapper[4865]: I0216 23:00:00.336101 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bdc821bc-0cee-45eb-a017-044f3fe176d1-config-volume\") pod \"collect-profiles-29521380-jlcrx\" (UID: \"bdc821bc-0cee-45eb-a017-044f3fe176d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521380-jlcrx" Feb 16 23:00:00 crc kubenswrapper[4865]: I0216 23:00:00.336196 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bdc821bc-0cee-45eb-a017-044f3fe176d1-secret-volume\") pod 
\"collect-profiles-29521380-jlcrx\" (UID: \"bdc821bc-0cee-45eb-a017-044f3fe176d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521380-jlcrx" Feb 16 23:00:00 crc kubenswrapper[4865]: I0216 23:00:00.338086 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bdc821bc-0cee-45eb-a017-044f3fe176d1-config-volume\") pod \"collect-profiles-29521380-jlcrx\" (UID: \"bdc821bc-0cee-45eb-a017-044f3fe176d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521380-jlcrx" Feb 16 23:00:00 crc kubenswrapper[4865]: I0216 23:00:00.353635 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bdc821bc-0cee-45eb-a017-044f3fe176d1-secret-volume\") pod \"collect-profiles-29521380-jlcrx\" (UID: \"bdc821bc-0cee-45eb-a017-044f3fe176d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521380-jlcrx" Feb 16 23:00:00 crc kubenswrapper[4865]: I0216 23:00:00.365883 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bj64f\" (UniqueName: \"kubernetes.io/projected/bdc821bc-0cee-45eb-a017-044f3fe176d1-kube-api-access-bj64f\") pod \"collect-profiles-29521380-jlcrx\" (UID: \"bdc821bc-0cee-45eb-a017-044f3fe176d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521380-jlcrx" Feb 16 23:00:00 crc kubenswrapper[4865]: I0216 23:00:00.505337 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 23:00:00 crc kubenswrapper[4865]: I0216 23:00:00.512787 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521380-jlcrx" Feb 16 23:00:00 crc kubenswrapper[4865]: I0216 23:00:00.786961 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-ndhqn" Feb 16 23:00:00 crc kubenswrapper[4865]: I0216 23:00:00.787506 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-ndhqn" Feb 16 23:00:00 crc kubenswrapper[4865]: I0216 23:00:00.822679 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-ndhqn" Feb 16 23:00:00 crc kubenswrapper[4865]: I0216 23:00:00.852088 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-ndhqn" Feb 16 23:00:00 crc kubenswrapper[4865]: I0216 23:00:00.998866 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521380-jlcrx"] Feb 16 23:00:01 crc kubenswrapper[4865]: I0216 23:00:01.834934 4865 generic.go:334] "Generic (PLEG): container finished" podID="bdc821bc-0cee-45eb-a017-044f3fe176d1" containerID="eb0473048b046bcba6e2f1f7806ac711bfe726e17e7488ef5af3021ec9086241" exitCode=0 Feb 16 23:00:01 crc kubenswrapper[4865]: I0216 23:00:01.835015 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521380-jlcrx" event={"ID":"bdc821bc-0cee-45eb-a017-044f3fe176d1","Type":"ContainerDied","Data":"eb0473048b046bcba6e2f1f7806ac711bfe726e17e7488ef5af3021ec9086241"} Feb 16 23:00:01 crc kubenswrapper[4865]: I0216 23:00:01.835671 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521380-jlcrx" 
event={"ID":"bdc821bc-0cee-45eb-a017-044f3fe176d1","Type":"ContainerStarted","Data":"7e64d5f97c9580d168d8b210801a8ad0c62ab5d6a670fd9c930f4c5ee13eb943"} Feb 16 23:00:03 crc kubenswrapper[4865]: I0216 23:00:03.197816 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521380-jlcrx" Feb 16 23:00:03 crc kubenswrapper[4865]: I0216 23:00:03.285247 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bj64f\" (UniqueName: \"kubernetes.io/projected/bdc821bc-0cee-45eb-a017-044f3fe176d1-kube-api-access-bj64f\") pod \"bdc821bc-0cee-45eb-a017-044f3fe176d1\" (UID: \"bdc821bc-0cee-45eb-a017-044f3fe176d1\") " Feb 16 23:00:03 crc kubenswrapper[4865]: I0216 23:00:03.285871 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bdc821bc-0cee-45eb-a017-044f3fe176d1-secret-volume\") pod \"bdc821bc-0cee-45eb-a017-044f3fe176d1\" (UID: \"bdc821bc-0cee-45eb-a017-044f3fe176d1\") " Feb 16 23:00:03 crc kubenswrapper[4865]: I0216 23:00:03.286064 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bdc821bc-0cee-45eb-a017-044f3fe176d1-config-volume\") pod \"bdc821bc-0cee-45eb-a017-044f3fe176d1\" (UID: \"bdc821bc-0cee-45eb-a017-044f3fe176d1\") " Feb 16 23:00:03 crc kubenswrapper[4865]: I0216 23:00:03.286989 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdc821bc-0cee-45eb-a017-044f3fe176d1-config-volume" (OuterVolumeSpecName: "config-volume") pod "bdc821bc-0cee-45eb-a017-044f3fe176d1" (UID: "bdc821bc-0cee-45eb-a017-044f3fe176d1"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:00:03 crc kubenswrapper[4865]: I0216 23:00:03.294163 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdc821bc-0cee-45eb-a017-044f3fe176d1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "bdc821bc-0cee-45eb-a017-044f3fe176d1" (UID: "bdc821bc-0cee-45eb-a017-044f3fe176d1"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:00:03 crc kubenswrapper[4865]: I0216 23:00:03.294469 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdc821bc-0cee-45eb-a017-044f3fe176d1-kube-api-access-bj64f" (OuterVolumeSpecName: "kube-api-access-bj64f") pod "bdc821bc-0cee-45eb-a017-044f3fe176d1" (UID: "bdc821bc-0cee-45eb-a017-044f3fe176d1"). InnerVolumeSpecName "kube-api-access-bj64f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:00:03 crc kubenswrapper[4865]: I0216 23:00:03.388400 4865 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bdc821bc-0cee-45eb-a017-044f3fe176d1-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 23:00:03 crc kubenswrapper[4865]: I0216 23:00:03.388439 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bj64f\" (UniqueName: \"kubernetes.io/projected/bdc821bc-0cee-45eb-a017-044f3fe176d1-kube-api-access-bj64f\") on node \"crc\" DevicePath \"\"" Feb 16 23:00:03 crc kubenswrapper[4865]: I0216 23:00:03.388458 4865 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bdc821bc-0cee-45eb-a017-044f3fe176d1-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 23:00:03 crc kubenswrapper[4865]: I0216 23:00:03.854489 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521380-jlcrx" 
event={"ID":"bdc821bc-0cee-45eb-a017-044f3fe176d1","Type":"ContainerDied","Data":"7e64d5f97c9580d168d8b210801a8ad0c62ab5d6a670fd9c930f4c5ee13eb943"} Feb 16 23:00:03 crc kubenswrapper[4865]: I0216 23:00:03.854561 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e64d5f97c9580d168d8b210801a8ad0c62ab5d6a670fd9c930f4c5ee13eb943" Feb 16 23:00:03 crc kubenswrapper[4865]: I0216 23:00:03.854560 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521380-jlcrx" Feb 16 23:00:15 crc kubenswrapper[4865]: I0216 23:00:15.707346 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4ckkmg"] Feb 16 23:00:15 crc kubenswrapper[4865]: E0216 23:00:15.708256 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdc821bc-0cee-45eb-a017-044f3fe176d1" containerName="collect-profiles" Feb 16 23:00:15 crc kubenswrapper[4865]: I0216 23:00:15.708292 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdc821bc-0cee-45eb-a017-044f3fe176d1" containerName="collect-profiles" Feb 16 23:00:15 crc kubenswrapper[4865]: I0216 23:00:15.708431 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdc821bc-0cee-45eb-a017-044f3fe176d1" containerName="collect-profiles" Feb 16 23:00:15 crc kubenswrapper[4865]: I0216 23:00:15.709694 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4ckkmg" Feb 16 23:00:15 crc kubenswrapper[4865]: I0216 23:00:15.712392 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-42s8w" Feb 16 23:00:15 crc kubenswrapper[4865]: I0216 23:00:15.722629 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4ckkmg"] Feb 16 23:00:15 crc kubenswrapper[4865]: I0216 23:00:15.804137 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmdsx\" (UniqueName: \"kubernetes.io/projected/7cf141a0-2b74-4eb1-99e2-80774839ccd6-kube-api-access-vmdsx\") pod \"3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4ckkmg\" (UID: \"7cf141a0-2b74-4eb1-99e2-80774839ccd6\") " pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4ckkmg" Feb 16 23:00:15 crc kubenswrapper[4865]: I0216 23:00:15.804238 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7cf141a0-2b74-4eb1-99e2-80774839ccd6-bundle\") pod \"3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4ckkmg\" (UID: \"7cf141a0-2b74-4eb1-99e2-80774839ccd6\") " pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4ckkmg" Feb 16 23:00:15 crc kubenswrapper[4865]: I0216 23:00:15.804290 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7cf141a0-2b74-4eb1-99e2-80774839ccd6-util\") pod \"3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4ckkmg\" (UID: \"7cf141a0-2b74-4eb1-99e2-80774839ccd6\") " pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4ckkmg" Feb 16 23:00:15 crc kubenswrapper[4865]: I0216 
23:00:15.905500 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmdsx\" (UniqueName: \"kubernetes.io/projected/7cf141a0-2b74-4eb1-99e2-80774839ccd6-kube-api-access-vmdsx\") pod \"3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4ckkmg\" (UID: \"7cf141a0-2b74-4eb1-99e2-80774839ccd6\") " pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4ckkmg" Feb 16 23:00:15 crc kubenswrapper[4865]: I0216 23:00:15.905628 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7cf141a0-2b74-4eb1-99e2-80774839ccd6-bundle\") pod \"3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4ckkmg\" (UID: \"7cf141a0-2b74-4eb1-99e2-80774839ccd6\") " pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4ckkmg" Feb 16 23:00:15 crc kubenswrapper[4865]: I0216 23:00:15.905675 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7cf141a0-2b74-4eb1-99e2-80774839ccd6-util\") pod \"3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4ckkmg\" (UID: \"7cf141a0-2b74-4eb1-99e2-80774839ccd6\") " pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4ckkmg" Feb 16 23:00:15 crc kubenswrapper[4865]: I0216 23:00:15.907338 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7cf141a0-2b74-4eb1-99e2-80774839ccd6-util\") pod \"3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4ckkmg\" (UID: \"7cf141a0-2b74-4eb1-99e2-80774839ccd6\") " pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4ckkmg" Feb 16 23:00:15 crc kubenswrapper[4865]: I0216 23:00:15.908403 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/7cf141a0-2b74-4eb1-99e2-80774839ccd6-bundle\") pod \"3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4ckkmg\" (UID: \"7cf141a0-2b74-4eb1-99e2-80774839ccd6\") " pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4ckkmg" Feb 16 23:00:15 crc kubenswrapper[4865]: I0216 23:00:15.928048 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmdsx\" (UniqueName: \"kubernetes.io/projected/7cf141a0-2b74-4eb1-99e2-80774839ccd6-kube-api-access-vmdsx\") pod \"3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4ckkmg\" (UID: \"7cf141a0-2b74-4eb1-99e2-80774839ccd6\") " pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4ckkmg" Feb 16 23:00:16 crc kubenswrapper[4865]: I0216 23:00:16.033633 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4ckkmg" Feb 16 23:00:16 crc kubenswrapper[4865]: I0216 23:00:16.324690 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4ckkmg"] Feb 16 23:00:16 crc kubenswrapper[4865]: I0216 23:00:16.960150 4865 generic.go:334] "Generic (PLEG): container finished" podID="7cf141a0-2b74-4eb1-99e2-80774839ccd6" containerID="5cbf44374cc7ec41611f963ec41a83d546fe576b7b47ad347637609518494282" exitCode=0 Feb 16 23:00:16 crc kubenswrapper[4865]: I0216 23:00:16.960224 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4ckkmg" event={"ID":"7cf141a0-2b74-4eb1-99e2-80774839ccd6","Type":"ContainerDied","Data":"5cbf44374cc7ec41611f963ec41a83d546fe576b7b47ad347637609518494282"} Feb 16 23:00:16 crc kubenswrapper[4865]: I0216 23:00:16.960315 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4ckkmg" event={"ID":"7cf141a0-2b74-4eb1-99e2-80774839ccd6","Type":"ContainerStarted","Data":"29293ec0f0942543829898f3ab8424ffcef3ede0a176e69d6f364fb64f3079ec"} Feb 16 23:00:17 crc kubenswrapper[4865]: I0216 23:00:17.972261 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4ckkmg" event={"ID":"7cf141a0-2b74-4eb1-99e2-80774839ccd6","Type":"ContainerStarted","Data":"768a67bca788fcf3a9b3c651328f79a90e6d9c4747ccd2e6a009f463509d6d7a"} Feb 16 23:00:18 crc kubenswrapper[4865]: I0216 23:00:18.983906 4865 generic.go:334] "Generic (PLEG): container finished" podID="7cf141a0-2b74-4eb1-99e2-80774839ccd6" containerID="768a67bca788fcf3a9b3c651328f79a90e6d9c4747ccd2e6a009f463509d6d7a" exitCode=0 Feb 16 23:00:18 crc kubenswrapper[4865]: I0216 23:00:18.983993 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4ckkmg" event={"ID":"7cf141a0-2b74-4eb1-99e2-80774839ccd6","Type":"ContainerDied","Data":"768a67bca788fcf3a9b3c651328f79a90e6d9c4747ccd2e6a009f463509d6d7a"} Feb 16 23:00:19 crc kubenswrapper[4865]: I0216 23:00:19.996734 4865 generic.go:334] "Generic (PLEG): container finished" podID="7cf141a0-2b74-4eb1-99e2-80774839ccd6" containerID="cac9010bdf46eedc2e734ecefa8e8af63551b23286daab92bd7779ccf99f84d5" exitCode=0 Feb 16 23:00:19 crc kubenswrapper[4865]: I0216 23:00:19.996800 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4ckkmg" event={"ID":"7cf141a0-2b74-4eb1-99e2-80774839ccd6","Type":"ContainerDied","Data":"cac9010bdf46eedc2e734ecefa8e8af63551b23286daab92bd7779ccf99f84d5"} Feb 16 23:00:21 crc kubenswrapper[4865]: I0216 23:00:21.328751 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4ckkmg" Feb 16 23:00:21 crc kubenswrapper[4865]: I0216 23:00:21.420210 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7cf141a0-2b74-4eb1-99e2-80774839ccd6-util\") pod \"7cf141a0-2b74-4eb1-99e2-80774839ccd6\" (UID: \"7cf141a0-2b74-4eb1-99e2-80774839ccd6\") " Feb 16 23:00:21 crc kubenswrapper[4865]: I0216 23:00:21.420412 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmdsx\" (UniqueName: \"kubernetes.io/projected/7cf141a0-2b74-4eb1-99e2-80774839ccd6-kube-api-access-vmdsx\") pod \"7cf141a0-2b74-4eb1-99e2-80774839ccd6\" (UID: \"7cf141a0-2b74-4eb1-99e2-80774839ccd6\") " Feb 16 23:00:21 crc kubenswrapper[4865]: I0216 23:00:21.421512 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7cf141a0-2b74-4eb1-99e2-80774839ccd6-bundle\") pod \"7cf141a0-2b74-4eb1-99e2-80774839ccd6\" (UID: \"7cf141a0-2b74-4eb1-99e2-80774839ccd6\") " Feb 16 23:00:21 crc kubenswrapper[4865]: I0216 23:00:21.422133 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7cf141a0-2b74-4eb1-99e2-80774839ccd6-bundle" (OuterVolumeSpecName: "bundle") pod "7cf141a0-2b74-4eb1-99e2-80774839ccd6" (UID: "7cf141a0-2b74-4eb1-99e2-80774839ccd6"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:00:21 crc kubenswrapper[4865]: I0216 23:00:21.433671 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cf141a0-2b74-4eb1-99e2-80774839ccd6-kube-api-access-vmdsx" (OuterVolumeSpecName: "kube-api-access-vmdsx") pod "7cf141a0-2b74-4eb1-99e2-80774839ccd6" (UID: "7cf141a0-2b74-4eb1-99e2-80774839ccd6"). InnerVolumeSpecName "kube-api-access-vmdsx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:00:21 crc kubenswrapper[4865]: I0216 23:00:21.455002 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7cf141a0-2b74-4eb1-99e2-80774839ccd6-util" (OuterVolumeSpecName: "util") pod "7cf141a0-2b74-4eb1-99e2-80774839ccd6" (UID: "7cf141a0-2b74-4eb1-99e2-80774839ccd6"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:00:21 crc kubenswrapper[4865]: I0216 23:00:21.524254 4865 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7cf141a0-2b74-4eb1-99e2-80774839ccd6-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 23:00:21 crc kubenswrapper[4865]: I0216 23:00:21.524323 4865 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7cf141a0-2b74-4eb1-99e2-80774839ccd6-util\") on node \"crc\" DevicePath \"\"" Feb 16 23:00:21 crc kubenswrapper[4865]: I0216 23:00:21.524335 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmdsx\" (UniqueName: \"kubernetes.io/projected/7cf141a0-2b74-4eb1-99e2-80774839ccd6-kube-api-access-vmdsx\") on node \"crc\" DevicePath \"\"" Feb 16 23:00:22 crc kubenswrapper[4865]: I0216 23:00:22.023134 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4ckkmg" event={"ID":"7cf141a0-2b74-4eb1-99e2-80774839ccd6","Type":"ContainerDied","Data":"29293ec0f0942543829898f3ab8424ffcef3ede0a176e69d6f364fb64f3079ec"} Feb 16 23:00:22 crc kubenswrapper[4865]: I0216 23:00:22.023202 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29293ec0f0942543829898f3ab8424ffcef3ede0a176e69d6f364fb64f3079ec" Feb 16 23:00:22 crc kubenswrapper[4865]: I0216 23:00:22.023243 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4ckkmg" Feb 16 23:00:27 crc kubenswrapper[4865]: I0216 23:00:27.097558 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-7d7c89f976-vxpzk"] Feb 16 23:00:27 crc kubenswrapper[4865]: E0216 23:00:27.098709 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cf141a0-2b74-4eb1-99e2-80774839ccd6" containerName="pull" Feb 16 23:00:27 crc kubenswrapper[4865]: I0216 23:00:27.098725 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cf141a0-2b74-4eb1-99e2-80774839ccd6" containerName="pull" Feb 16 23:00:27 crc kubenswrapper[4865]: E0216 23:00:27.098749 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cf141a0-2b74-4eb1-99e2-80774839ccd6" containerName="util" Feb 16 23:00:27 crc kubenswrapper[4865]: I0216 23:00:27.098759 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cf141a0-2b74-4eb1-99e2-80774839ccd6" containerName="util" Feb 16 23:00:27 crc kubenswrapper[4865]: E0216 23:00:27.098774 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cf141a0-2b74-4eb1-99e2-80774839ccd6" containerName="extract" Feb 16 23:00:27 crc kubenswrapper[4865]: I0216 23:00:27.098783 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cf141a0-2b74-4eb1-99e2-80774839ccd6" containerName="extract" Feb 16 23:00:27 crc kubenswrapper[4865]: I0216 23:00:27.098935 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cf141a0-2b74-4eb1-99e2-80774839ccd6" containerName="extract" Feb 16 23:00:27 crc kubenswrapper[4865]: I0216 23:00:27.099459 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-7d7c89f976-vxpzk" Feb 16 23:00:27 crc kubenswrapper[4865]: I0216 23:00:27.103386 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-7tcr9" Feb 16 23:00:27 crc kubenswrapper[4865]: I0216 23:00:27.125831 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-7d7c89f976-vxpzk"] Feb 16 23:00:27 crc kubenswrapper[4865]: I0216 23:00:27.224607 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dn6b\" (UniqueName: \"kubernetes.io/projected/5c43d211-62d0-403c-90d5-00c0bfcfa692-kube-api-access-6dn6b\") pod \"openstack-operator-controller-init-7d7c89f976-vxpzk\" (UID: \"5c43d211-62d0-403c-90d5-00c0bfcfa692\") " pod="openstack-operators/openstack-operator-controller-init-7d7c89f976-vxpzk" Feb 16 23:00:27 crc kubenswrapper[4865]: I0216 23:00:27.326545 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dn6b\" (UniqueName: \"kubernetes.io/projected/5c43d211-62d0-403c-90d5-00c0bfcfa692-kube-api-access-6dn6b\") pod \"openstack-operator-controller-init-7d7c89f976-vxpzk\" (UID: \"5c43d211-62d0-403c-90d5-00c0bfcfa692\") " pod="openstack-operators/openstack-operator-controller-init-7d7c89f976-vxpzk" Feb 16 23:00:27 crc kubenswrapper[4865]: I0216 23:00:27.353296 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dn6b\" (UniqueName: \"kubernetes.io/projected/5c43d211-62d0-403c-90d5-00c0bfcfa692-kube-api-access-6dn6b\") pod \"openstack-operator-controller-init-7d7c89f976-vxpzk\" (UID: \"5c43d211-62d0-403c-90d5-00c0bfcfa692\") " pod="openstack-operators/openstack-operator-controller-init-7d7c89f976-vxpzk" Feb 16 23:00:27 crc kubenswrapper[4865]: I0216 23:00:27.418784 4865 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-7d7c89f976-vxpzk" Feb 16 23:00:27 crc kubenswrapper[4865]: I0216 23:00:27.732547 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-7d7c89f976-vxpzk"] Feb 16 23:00:28 crc kubenswrapper[4865]: I0216 23:00:28.075557 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7d7c89f976-vxpzk" event={"ID":"5c43d211-62d0-403c-90d5-00c0bfcfa692","Type":"ContainerStarted","Data":"c56e621a56e9837a32b9c4bd96a08c120eed41d1802549f6a8a57cd186811d91"} Feb 16 23:00:33 crc kubenswrapper[4865]: I0216 23:00:33.125721 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7d7c89f976-vxpzk" event={"ID":"5c43d211-62d0-403c-90d5-00c0bfcfa692","Type":"ContainerStarted","Data":"8e260b8a9f78fdcb80f4b738a8f3b333f840b220e78ba73d127542d9faf59b2f"} Feb 16 23:00:33 crc kubenswrapper[4865]: I0216 23:00:33.126849 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-7d7c89f976-vxpzk" Feb 16 23:00:33 crc kubenswrapper[4865]: I0216 23:00:33.175910 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-7d7c89f976-vxpzk" podStartSLOduration=1.73829948 podStartE2EDuration="6.175882732s" podCreationTimestamp="2026-02-16 23:00:27 +0000 UTC" firstStartedPulling="2026-02-16 23:00:27.748577156 +0000 UTC m=+868.072284117" lastFinishedPulling="2026-02-16 23:00:32.186160408 +0000 UTC m=+872.509867369" observedRunningTime="2026-02-16 23:00:33.171645872 +0000 UTC m=+873.495352873" watchObservedRunningTime="2026-02-16 23:00:33.175882732 +0000 UTC m=+873.499589723" Feb 16 23:00:37 crc kubenswrapper[4865]: I0216 23:00:37.424393 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack-operators/openstack-operator-controller-init-7d7c89f976-vxpzk" Feb 16 23:01:01 crc kubenswrapper[4865]: I0216 23:01:01.833563 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-7nt98"] Feb 16 23:01:01 crc kubenswrapper[4865]: I0216 23:01:01.835124 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-7nt98" Feb 16 23:01:01 crc kubenswrapper[4865]: I0216 23:01:01.840200 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-kjcgh" Feb 16 23:01:01 crc kubenswrapper[4865]: I0216 23:01:01.851143 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-7nt98"] Feb 16 23:01:01 crc kubenswrapper[4865]: I0216 23:01:01.859191 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-msxb8"] Feb 16 23:01:01 crc kubenswrapper[4865]: I0216 23:01:01.860815 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-msxb8" Feb 16 23:01:01 crc kubenswrapper[4865]: I0216 23:01:01.863964 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-95ccc" Feb 16 23:01:01 crc kubenswrapper[4865]: I0216 23:01:01.875219 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-msxb8"] Feb 16 23:01:01 crc kubenswrapper[4865]: I0216 23:01:01.895490 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-5ntcr"] Feb 16 23:01:01 crc kubenswrapper[4865]: I0216 23:01:01.896822 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-5ntcr" Feb 16 23:01:01 crc kubenswrapper[4865]: I0216 23:01:01.899766 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-5ntcr"] Feb 16 23:01:01 crc kubenswrapper[4865]: I0216 23:01:01.900322 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-httbz" Feb 16 23:01:01 crc kubenswrapper[4865]: I0216 23:01:01.901239 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xvmn\" (UniqueName: \"kubernetes.io/projected/a5025501-39c8-43ae-8b94-3a555517b1f7-kube-api-access-5xvmn\") pod \"barbican-operator-controller-manager-868647ff47-7nt98\" (UID: \"a5025501-39c8-43ae-8b94-3a555517b1f7\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-7nt98" Feb 16 23:01:01 crc kubenswrapper[4865]: I0216 23:01:01.927708 4865 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/glance-operator-controller-manager-77987464f4-kvmr9"] Feb 16 23:01:01 crc kubenswrapper[4865]: I0216 23:01:01.928664 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-kvmr9" Feb 16 23:01:01 crc kubenswrapper[4865]: I0216 23:01:01.931011 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-tjbt4" Feb 16 23:01:01 crc kubenswrapper[4865]: I0216 23:01:01.952859 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-jmmxl"] Feb 16 23:01:01 crc kubenswrapper[4865]: I0216 23:01:01.959967 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-jmmxl" Feb 16 23:01:01 crc kubenswrapper[4865]: I0216 23:01:01.962381 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-kvmr9"] Feb 16 23:01:01 crc kubenswrapper[4865]: I0216 23:01:01.963784 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-xjf6w" Feb 16 23:01:01 crc kubenswrapper[4865]: I0216 23:01:01.998444 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-jmmxl"] Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.002938 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr8zn\" (UniqueName: \"kubernetes.io/projected/829ee3ed-5827-46ee-8399-f0b82ffa4d1d-kube-api-access-vr8zn\") pod \"designate-operator-controller-manager-6d8bf5c495-5ntcr\" (UID: \"829ee3ed-5827-46ee-8399-f0b82ffa4d1d\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-5ntcr" Feb 16 
23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.003008 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xvmn\" (UniqueName: \"kubernetes.io/projected/a5025501-39c8-43ae-8b94-3a555517b1f7-kube-api-access-5xvmn\") pod \"barbican-operator-controller-manager-868647ff47-7nt98\" (UID: \"a5025501-39c8-43ae-8b94-3a555517b1f7\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-7nt98" Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.003045 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4bmp\" (UniqueName: \"kubernetes.io/projected/3a4e76a0-aa8b-4ee8-b7a8-dc43a376c4ec-kube-api-access-c4bmp\") pod \"cinder-operator-controller-manager-5d946d989d-msxb8\" (UID: \"3a4e76a0-aa8b-4ee8-b7a8-dc43a376c4ec\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-msxb8" Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.003114 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4vgs\" (UniqueName: \"kubernetes.io/projected/61ac3013-99f7-4aef-b85d-8675044accc6-kube-api-access-j4vgs\") pod \"glance-operator-controller-manager-77987464f4-kvmr9\" (UID: \"61ac3013-99f7-4aef-b85d-8675044accc6\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-kvmr9" Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.006886 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-vc4dc"] Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.007959 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-vc4dc" Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.020755 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-rgm65" Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.030391 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xvmn\" (UniqueName: \"kubernetes.io/projected/a5025501-39c8-43ae-8b94-3a555517b1f7-kube-api-access-5xvmn\") pod \"barbican-operator-controller-manager-868647ff47-7nt98\" (UID: \"a5025501-39c8-43ae-8b94-3a555517b1f7\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-7nt98" Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.039355 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-mt6fh"] Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.040234 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-vc4dc"] Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.040257 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-mt6fh"] Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.040360 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-mt6fh" Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.045849 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-5nvr2" Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.046180 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.064687 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-wl4zd"] Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.065846 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-wl4zd" Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.068662 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-snfvr" Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.072331 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-stf5r"] Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.073300 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-stf5r" Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.083753 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-ccggv" Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.085551 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-wl4zd"] Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.092509 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-stf5r"] Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.110828 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/812a9f63-a231-495c-9474-0c60929fabff-cert\") pod \"infra-operator-controller-manager-79d975b745-mt6fh\" (UID: \"812a9f63-a231-495c-9474-0c60929fabff\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-mt6fh" Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.110890 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4vgs\" (UniqueName: \"kubernetes.io/projected/61ac3013-99f7-4aef-b85d-8675044accc6-kube-api-access-j4vgs\") pod \"glance-operator-controller-manager-77987464f4-kvmr9\" (UID: \"61ac3013-99f7-4aef-b85d-8675044accc6\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-kvmr9" Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.110930 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdb2z\" (UniqueName: \"kubernetes.io/projected/812a9f63-a231-495c-9474-0c60929fabff-kube-api-access-wdb2z\") pod \"infra-operator-controller-manager-79d975b745-mt6fh\" (UID: 
\"812a9f63-a231-495c-9474-0c60929fabff\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-mt6fh" Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.110980 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgbh7\" (UniqueName: \"kubernetes.io/projected/73e02b9a-66d8-4fb4-bc3d-13610563b6e4-kube-api-access-hgbh7\") pod \"horizon-operator-controller-manager-5b9b8895d5-vc4dc\" (UID: \"73e02b9a-66d8-4fb4-bc3d-13610563b6e4\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-vc4dc" Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.111014 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vr8zn\" (UniqueName: \"kubernetes.io/projected/829ee3ed-5827-46ee-8399-f0b82ffa4d1d-kube-api-access-vr8zn\") pod \"designate-operator-controller-manager-6d8bf5c495-5ntcr\" (UID: \"829ee3ed-5827-46ee-8399-f0b82ffa4d1d\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-5ntcr" Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.111056 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4bmp\" (UniqueName: \"kubernetes.io/projected/3a4e76a0-aa8b-4ee8-b7a8-dc43a376c4ec-kube-api-access-c4bmp\") pod \"cinder-operator-controller-manager-5d946d989d-msxb8\" (UID: \"3a4e76a0-aa8b-4ee8-b7a8-dc43a376c4ec\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-msxb8" Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.111125 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lr7d\" (UniqueName: \"kubernetes.io/projected/f1b2a884-8e78-47ac-9c45-7861a81e02d4-kube-api-access-6lr7d\") pod \"heat-operator-controller-manager-69f49c598c-jmmxl\" (UID: \"f1b2a884-8e78-47ac-9c45-7861a81e02d4\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-jmmxl" 
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.112372 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-phdd6"]
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.115838 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-phdd6"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.118693 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-8vs8r"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.147729 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4vgs\" (UniqueName: \"kubernetes.io/projected/61ac3013-99f7-4aef-b85d-8675044accc6-kube-api-access-j4vgs\") pod \"glance-operator-controller-manager-77987464f4-kvmr9\" (UID: \"61ac3013-99f7-4aef-b85d-8675044accc6\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-kvmr9"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.149837 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vr8zn\" (UniqueName: \"kubernetes.io/projected/829ee3ed-5827-46ee-8399-f0b82ffa4d1d-kube-api-access-vr8zn\") pod \"designate-operator-controller-manager-6d8bf5c495-5ntcr\" (UID: \"829ee3ed-5827-46ee-8399-f0b82ffa4d1d\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-5ntcr"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.149896 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-hw4fs"]
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.149965 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4bmp\" (UniqueName: \"kubernetes.io/projected/3a4e76a0-aa8b-4ee8-b7a8-dc43a376c4ec-kube-api-access-c4bmp\") pod \"cinder-operator-controller-manager-5d946d989d-msxb8\" (UID: \"3a4e76a0-aa8b-4ee8-b7a8-dc43a376c4ec\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-msxb8"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.163155 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-hw4fs"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.164741 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-7nt98"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.172874 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-8w58g"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.193164 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-msxb8"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.216892 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lr7d\" (UniqueName: \"kubernetes.io/projected/f1b2a884-8e78-47ac-9c45-7861a81e02d4-kube-api-access-6lr7d\") pod \"heat-operator-controller-manager-69f49c598c-jmmxl\" (UID: \"f1b2a884-8e78-47ac-9c45-7861a81e02d4\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-jmmxl"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.216973 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9hbc\" (UniqueName: \"kubernetes.io/projected/dded450f-3a37-48b0-84fc-1de3c64c1954-kube-api-access-d9hbc\") pod \"manila-operator-controller-manager-54f6768c69-phdd6\" (UID: \"dded450f-3a37-48b0-84fc-1de3c64c1954\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-phdd6"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.217019 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrd49\" (UniqueName: \"kubernetes.io/projected/68b414dd-a0c6-488a-b253-1a3f477cb7a8-kube-api-access-vrd49\") pod \"keystone-operator-controller-manager-b4d948c87-stf5r\" (UID: \"68b414dd-a0c6-488a-b253-1a3f477cb7a8\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-stf5r"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.217046 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/812a9f63-a231-495c-9474-0c60929fabff-cert\") pod \"infra-operator-controller-manager-79d975b745-mt6fh\" (UID: \"812a9f63-a231-495c-9474-0c60929fabff\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-mt6fh"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.217078 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdb2z\" (UniqueName: \"kubernetes.io/projected/812a9f63-a231-495c-9474-0c60929fabff-kube-api-access-wdb2z\") pod \"infra-operator-controller-manager-79d975b745-mt6fh\" (UID: \"812a9f63-a231-495c-9474-0c60929fabff\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-mt6fh"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.217108 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdfdl\" (UniqueName: \"kubernetes.io/projected/dc7842ab-52e5-4223-8b2a-ab09641bf297-kube-api-access-jdfdl\") pod \"ironic-operator-controller-manager-554564d7fc-wl4zd\" (UID: \"dc7842ab-52e5-4223-8b2a-ab09641bf297\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-wl4zd"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.217145 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgbh7\" (UniqueName: \"kubernetes.io/projected/73e02b9a-66d8-4fb4-bc3d-13610563b6e4-kube-api-access-hgbh7\") pod \"horizon-operator-controller-manager-5b9b8895d5-vc4dc\" (UID: \"73e02b9a-66d8-4fb4-bc3d-13610563b6e4\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-vc4dc"
Feb 16 23:01:02 crc kubenswrapper[4865]: E0216 23:01:02.218377 4865 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 16 23:01:02 crc kubenswrapper[4865]: E0216 23:01:02.218455 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/812a9f63-a231-495c-9474-0c60929fabff-cert podName:812a9f63-a231-495c-9474-0c60929fabff nodeName:}" failed. No retries permitted until 2026-02-16 23:01:02.718421198 +0000 UTC m=+903.042128159 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/812a9f63-a231-495c-9474-0c60929fabff-cert") pod "infra-operator-controller-manager-79d975b745-mt6fh" (UID: "812a9f63-a231-495c-9474-0c60929fabff") : secret "infra-operator-webhook-server-cert" not found
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.245447 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-5ntcr"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.252702 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-kvmr9"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.253695 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lr7d\" (UniqueName: \"kubernetes.io/projected/f1b2a884-8e78-47ac-9c45-7861a81e02d4-kube-api-access-6lr7d\") pod \"heat-operator-controller-manager-69f49c598c-jmmxl\" (UID: \"f1b2a884-8e78-47ac-9c45-7861a81e02d4\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-jmmxl"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.253778 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-phdd6"]
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.262900 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-wk47p"]
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.264150 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-wk47p"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.268083 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdb2z\" (UniqueName: \"kubernetes.io/projected/812a9f63-a231-495c-9474-0c60929fabff-kube-api-access-wdb2z\") pod \"infra-operator-controller-manager-79d975b745-mt6fh\" (UID: \"812a9f63-a231-495c-9474-0c60929fabff\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-mt6fh"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.270607 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgbh7\" (UniqueName: \"kubernetes.io/projected/73e02b9a-66d8-4fb4-bc3d-13610563b6e4-kube-api-access-hgbh7\") pod \"horizon-operator-controller-manager-5b9b8895d5-vc4dc\" (UID: \"73e02b9a-66d8-4fb4-bc3d-13610563b6e4\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-vc4dc"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.271011 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-4v8wz"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.271233 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-xzpc9"]
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.280060 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-jmmxl"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.280820 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-wk47p"]
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.280937 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-xzpc9"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.284681 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-fclkh"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.290680 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-hw4fs"]
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.302144 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-xzpc9"]
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.307942 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-m8rmj"]
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.309042 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-m8rmj"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.311943 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-m8rmj"]
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.316003 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-bd5m5"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.318327 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9hbc\" (UniqueName: \"kubernetes.io/projected/dded450f-3a37-48b0-84fc-1de3c64c1954-kube-api-access-d9hbc\") pod \"manila-operator-controller-manager-54f6768c69-phdd6\" (UID: \"dded450f-3a37-48b0-84fc-1de3c64c1954\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-phdd6"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.318368 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrd49\" (UniqueName: \"kubernetes.io/projected/68b414dd-a0c6-488a-b253-1a3f477cb7a8-kube-api-access-vrd49\") pod \"keystone-operator-controller-manager-b4d948c87-stf5r\" (UID: \"68b414dd-a0c6-488a-b253-1a3f477cb7a8\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-stf5r"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.318413 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xc9tn\" (UniqueName: \"kubernetes.io/projected/a1759f72-1644-42e2-9b67-01478800870b-kube-api-access-xc9tn\") pod \"mariadb-operator-controller-manager-6994f66f48-hw4fs\" (UID: \"a1759f72-1644-42e2-9b67-01478800870b\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-hw4fs"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.318443 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdfdl\" (UniqueName: \"kubernetes.io/projected/dc7842ab-52e5-4223-8b2a-ab09641bf297-kube-api-access-jdfdl\") pod \"ironic-operator-controller-manager-554564d7fc-wl4zd\" (UID: \"dc7842ab-52e5-4223-8b2a-ab09641bf297\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-wl4zd"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.325615 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cts9lz"]
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.335390 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-946kc"]
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.336163 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-946kc"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.336761 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cts9lz"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.341873 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.342758 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-plx65"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.343086 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-tpxwt"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.351905 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrd49\" (UniqueName: \"kubernetes.io/projected/68b414dd-a0c6-488a-b253-1a3f477cb7a8-kube-api-access-vrd49\") pod \"keystone-operator-controller-manager-b4d948c87-stf5r\" (UID: \"68b414dd-a0c6-488a-b253-1a3f477cb7a8\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-stf5r"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.354796 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9hbc\" (UniqueName: \"kubernetes.io/projected/dded450f-3a37-48b0-84fc-1de3c64c1954-kube-api-access-d9hbc\") pod \"manila-operator-controller-manager-54f6768c69-phdd6\" (UID: \"dded450f-3a37-48b0-84fc-1de3c64c1954\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-phdd6"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.361105 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-vc4dc"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.361923 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdfdl\" (UniqueName: \"kubernetes.io/projected/dc7842ab-52e5-4223-8b2a-ab09641bf297-kube-api-access-jdfdl\") pod \"ironic-operator-controller-manager-554564d7fc-wl4zd\" (UID: \"dc7842ab-52e5-4223-8b2a-ab09641bf297\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-wl4zd"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.373852 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cts9lz"]
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.386714 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-s9vk2"]
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.387684 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-s9vk2"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.391149 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-w7ln5"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.413643 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-wl4zd"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.424956 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xc9tn\" (UniqueName: \"kubernetes.io/projected/a1759f72-1644-42e2-9b67-01478800870b-kube-api-access-xc9tn\") pod \"mariadb-operator-controller-manager-6994f66f48-hw4fs\" (UID: \"a1759f72-1644-42e2-9b67-01478800870b\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-hw4fs"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.425019 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9q2l\" (UniqueName: \"kubernetes.io/projected/60e0dd0a-0055-45ec-8a4c-f0c23cd214b6-kube-api-access-d9q2l\") pod \"ovn-operator-controller-manager-d44cf6b75-946kc\" (UID: \"60e0dd0a-0055-45ec-8a4c-f0c23cd214b6\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-946kc"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.425105 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a9614d13-aca5-4ffa-9cc1-dd8767e11ac4-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cts9lz\" (UID: \"a9614d13-aca5-4ffa-9cc1-dd8767e11ac4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cts9lz"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.425133 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j66h6\" (UniqueName: \"kubernetes.io/projected/3be11752-93fd-4edc-b100-0bfd29f599e8-kube-api-access-j66h6\") pod \"nova-operator-controller-manager-567668f5cf-xzpc9\" (UID: \"3be11752-93fd-4edc-b100-0bfd29f599e8\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-xzpc9"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.425154 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmppf\" (UniqueName: \"kubernetes.io/projected/2b4f33b1-b5a3-4935-8036-deb97cfedfe7-kube-api-access-fmppf\") pod \"octavia-operator-controller-manager-69f8888797-m8rmj\" (UID: \"2b4f33b1-b5a3-4935-8036-deb97cfedfe7\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-m8rmj"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.425185 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l57bd\" (UniqueName: \"kubernetes.io/projected/f2e7b18d-0e13-4ef4-a4e2-d10b5f55763b-kube-api-access-l57bd\") pod \"neutron-operator-controller-manager-64ddbf8bb-wk47p\" (UID: \"f2e7b18d-0e13-4ef4-a4e2-d10b5f55763b\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-wk47p"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.425208 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvvvb\" (UniqueName: \"kubernetes.io/projected/a9614d13-aca5-4ffa-9cc1-dd8767e11ac4-kube-api-access-qvvvb\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cts9lz\" (UID: \"a9614d13-aca5-4ffa-9cc1-dd8767e11ac4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cts9lz"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.443920 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-stf5r"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.445749 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xc9tn\" (UniqueName: \"kubernetes.io/projected/a1759f72-1644-42e2-9b67-01478800870b-kube-api-access-xc9tn\") pod \"mariadb-operator-controller-manager-6994f66f48-hw4fs\" (UID: \"a1759f72-1644-42e2-9b67-01478800870b\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-hw4fs"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.451092 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-946kc"]
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.451142 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-9tnrt"]
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.454262 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-9tnrt"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.457676 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-dwhlz"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.458777 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-s9vk2"]
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.495723 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-phdd6"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.499550 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-9tnrt"]
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.519116 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-dz8t2"]
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.520192 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-dz8t2"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.522178 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-gm62b"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.531744 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-dz8t2"]
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.533705 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9q2l\" (UniqueName: \"kubernetes.io/projected/60e0dd0a-0055-45ec-8a4c-f0c23cd214b6-kube-api-access-d9q2l\") pod \"ovn-operator-controller-manager-d44cf6b75-946kc\" (UID: \"60e0dd0a-0055-45ec-8a4c-f0c23cd214b6\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-946kc"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.533831 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r26f\" (UniqueName: \"kubernetes.io/projected/196fc76c-2c5d-45ec-8106-4d0a3382d16e-kube-api-access-5r26f\") pod \"placement-operator-controller-manager-8497b45c89-9tnrt\" (UID: \"196fc76c-2c5d-45ec-8106-4d0a3382d16e\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-9tnrt"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.533941 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a9614d13-aca5-4ffa-9cc1-dd8767e11ac4-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cts9lz\" (UID: \"a9614d13-aca5-4ffa-9cc1-dd8767e11ac4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cts9lz"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.534030 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j66h6\" (UniqueName: \"kubernetes.io/projected/3be11752-93fd-4edc-b100-0bfd29f599e8-kube-api-access-j66h6\") pod \"nova-operator-controller-manager-567668f5cf-xzpc9\" (UID: \"3be11752-93fd-4edc-b100-0bfd29f599e8\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-xzpc9"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.534144 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmppf\" (UniqueName: \"kubernetes.io/projected/2b4f33b1-b5a3-4935-8036-deb97cfedfe7-kube-api-access-fmppf\") pod \"octavia-operator-controller-manager-69f8888797-m8rmj\" (UID: \"2b4f33b1-b5a3-4935-8036-deb97cfedfe7\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-m8rmj"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.534260 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l57bd\" (UniqueName: \"kubernetes.io/projected/f2e7b18d-0e13-4ef4-a4e2-d10b5f55763b-kube-api-access-l57bd\") pod \"neutron-operator-controller-manager-64ddbf8bb-wk47p\" (UID: \"f2e7b18d-0e13-4ef4-a4e2-d10b5f55763b\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-wk47p"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.534397 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvvvb\" (UniqueName: \"kubernetes.io/projected/a9614d13-aca5-4ffa-9cc1-dd8767e11ac4-kube-api-access-qvvvb\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cts9lz\" (UID: \"a9614d13-aca5-4ffa-9cc1-dd8767e11ac4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cts9lz"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.534562 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swj2k\" (UniqueName: \"kubernetes.io/projected/5d77ae74-7238-4c9f-8ae1-33064d8824c2-kube-api-access-swj2k\") pod \"swift-operator-controller-manager-68f46476f-s9vk2\" (UID: \"5d77ae74-7238-4c9f-8ae1-33064d8824c2\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-s9vk2"
Feb 16 23:01:02 crc kubenswrapper[4865]: E0216 23:01:02.536726 4865 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 16 23:01:02 crc kubenswrapper[4865]: E0216 23:01:02.536778 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a9614d13-aca5-4ffa-9cc1-dd8767e11ac4-cert podName:a9614d13-aca5-4ffa-9cc1-dd8767e11ac4 nodeName:}" failed. No retries permitted until 2026-02-16 23:01:03.03676063 +0000 UTC m=+903.360467591 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a9614d13-aca5-4ffa-9cc1-dd8767e11ac4-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cts9lz" (UID: "a9614d13-aca5-4ffa-9cc1-dd8767e11ac4") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.567946 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l57bd\" (UniqueName: \"kubernetes.io/projected/f2e7b18d-0e13-4ef4-a4e2-d10b5f55763b-kube-api-access-l57bd\") pod \"neutron-operator-controller-manager-64ddbf8bb-wk47p\" (UID: \"f2e7b18d-0e13-4ef4-a4e2-d10b5f55763b\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-wk47p"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.577997 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmppf\" (UniqueName: \"kubernetes.io/projected/2b4f33b1-b5a3-4935-8036-deb97cfedfe7-kube-api-access-fmppf\") pod \"octavia-operator-controller-manager-69f8888797-m8rmj\" (UID: \"2b4f33b1-b5a3-4935-8036-deb97cfedfe7\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-m8rmj"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.582900 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9q2l\" (UniqueName: \"kubernetes.io/projected/60e0dd0a-0055-45ec-8a4c-f0c23cd214b6-kube-api-access-d9q2l\") pod \"ovn-operator-controller-manager-d44cf6b75-946kc\" (UID: \"60e0dd0a-0055-45ec-8a4c-f0c23cd214b6\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-946kc"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.584822 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-29v4v"]
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.586243 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j66h6\" (UniqueName: \"kubernetes.io/projected/3be11752-93fd-4edc-b100-0bfd29f599e8-kube-api-access-j66h6\") pod \"nova-operator-controller-manager-567668f5cf-xzpc9\" (UID: \"3be11752-93fd-4edc-b100-0bfd29f599e8\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-xzpc9"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.587256 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-29v4v"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.589230 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvvvb\" (UniqueName: \"kubernetes.io/projected/a9614d13-aca5-4ffa-9cc1-dd8767e11ac4-kube-api-access-qvvvb\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cts9lz\" (UID: \"a9614d13-aca5-4ffa-9cc1-dd8767e11ac4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cts9lz"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.589387 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-wz7s6"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.592962 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-hw4fs"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.602575 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-wk47p"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.615560 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-xzpc9"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.618856 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-29v4v"]
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.636772 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5r26f\" (UniqueName: \"kubernetes.io/projected/196fc76c-2c5d-45ec-8106-4d0a3382d16e-kube-api-access-5r26f\") pod \"placement-operator-controller-manager-8497b45c89-9tnrt\" (UID: \"196fc76c-2c5d-45ec-8106-4d0a3382d16e\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-9tnrt"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.637013 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7pfx\" (UniqueName: \"kubernetes.io/projected/da795bac-53b5-415b-9297-26e5502fceb8-kube-api-access-h7pfx\") pod \"telemetry-operator-controller-manager-7f45b4ff68-dz8t2\" (UID: \"da795bac-53b5-415b-9297-26e5502fceb8\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-dz8t2"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.637188 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swj2k\" (UniqueName: \"kubernetes.io/projected/5d77ae74-7238-4c9f-8ae1-33064d8824c2-kube-api-access-swj2k\") pod \"swift-operator-controller-manager-68f46476f-s9vk2\" (UID: \"5d77ae74-7238-4c9f-8ae1-33064d8824c2\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-s9vk2"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.650796 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-m8rmj"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.659805 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swj2k\" (UniqueName: \"kubernetes.io/projected/5d77ae74-7238-4c9f-8ae1-33064d8824c2-kube-api-access-swj2k\") pod \"swift-operator-controller-manager-68f46476f-s9vk2\" (UID: \"5d77ae74-7238-4c9f-8ae1-33064d8824c2\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-s9vk2"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.667869 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5r26f\" (UniqueName: \"kubernetes.io/projected/196fc76c-2c5d-45ec-8106-4d0a3382d16e-kube-api-access-5r26f\") pod \"placement-operator-controller-manager-8497b45c89-9tnrt\" (UID: \"196fc76c-2c5d-45ec-8106-4d0a3382d16e\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-9tnrt"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.688043 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-946kc"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.739586 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7pfx\" (UniqueName: \"kubernetes.io/projected/da795bac-53b5-415b-9297-26e5502fceb8-kube-api-access-h7pfx\") pod \"telemetry-operator-controller-manager-7f45b4ff68-dz8t2\" (UID: \"da795bac-53b5-415b-9297-26e5502fceb8\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-dz8t2"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.739654 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2v6j5\" (UniqueName: \"kubernetes.io/projected/21f8cf30-0215-4501-af0f-ff1220d4252b-kube-api-access-2v6j5\") pod \"test-operator-controller-manager-7866795846-29v4v\" (UID: \"21f8cf30-0215-4501-af0f-ff1220d4252b\") " pod="openstack-operators/test-operator-controller-manager-7866795846-29v4v"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.739700 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/812a9f63-a231-495c-9474-0c60929fabff-cert\") pod \"infra-operator-controller-manager-79d975b745-mt6fh\" (UID: \"812a9f63-a231-495c-9474-0c60929fabff\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-mt6fh"
Feb 16 23:01:02 crc kubenswrapper[4865]: E0216 23:01:02.739923 4865 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 16 23:01:02 crc kubenswrapper[4865]: E0216 23:01:02.739989 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/812a9f63-a231-495c-9474-0c60929fabff-cert podName:812a9f63-a231-495c-9474-0c60929fabff nodeName:}" failed. No retries permitted until 2026-02-16 23:01:03.739974447 +0000 UTC m=+904.063681408 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/812a9f63-a231-495c-9474-0c60929fabff-cert") pod "infra-operator-controller-manager-79d975b745-mt6fh" (UID: "812a9f63-a231-495c-9474-0c60929fabff") : secret "infra-operator-webhook-server-cert" not found
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.743559 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-m62lh"]
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.744601 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-m62lh"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.748265 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-s9vk2"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.752083 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-6mvxh"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.769782 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7pfx\" (UniqueName: \"kubernetes.io/projected/da795bac-53b5-415b-9297-26e5502fceb8-kube-api-access-h7pfx\") pod \"telemetry-operator-controller-manager-7f45b4ff68-dz8t2\" (UID: \"da795bac-53b5-415b-9297-26e5502fceb8\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-dz8t2"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.770809 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-m62lh"]
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.787777 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-9tnrt"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.802103 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-85988dbd5c-sb7sh"]
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.803609 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-85988dbd5c-sb7sh"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.812704 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.813192 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.813652 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-l7hxr"
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.822115 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-85988dbd5c-sb7sh"]
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.839957 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zxt7q"]
Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.841299 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zxt7q" Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.841624 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmk7z\" (UniqueName: \"kubernetes.io/projected/1cfcc69c-1d21-4b1e-894d-d3ae72c39513-kube-api-access-zmk7z\") pod \"watcher-operator-controller-manager-5db88f68c-m62lh\" (UID: \"1cfcc69c-1d21-4b1e-894d-d3ae72c39513\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-m62lh" Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.841696 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2v6j5\" (UniqueName: \"kubernetes.io/projected/21f8cf30-0215-4501-af0f-ff1220d4252b-kube-api-access-2v6j5\") pod \"test-operator-controller-manager-7866795846-29v4v\" (UID: \"21f8cf30-0215-4501-af0f-ff1220d4252b\") " pod="openstack-operators/test-operator-controller-manager-7866795846-29v4v" Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.844704 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-nxbxh" Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.851681 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zxt7q"] Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.868496 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2v6j5\" (UniqueName: \"kubernetes.io/projected/21f8cf30-0215-4501-af0f-ff1220d4252b-kube-api-access-2v6j5\") pod \"test-operator-controller-manager-7866795846-29v4v\" (UID: \"21f8cf30-0215-4501-af0f-ff1220d4252b\") " pod="openstack-operators/test-operator-controller-manager-7866795846-29v4v" Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.877960 4865 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-7nt98"] Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.922558 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-dz8t2" Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.944519 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/24704625-9cce-4f47-847c-ab4d95d3adb1-metrics-certs\") pod \"openstack-operator-controller-manager-85988dbd5c-sb7sh\" (UID: \"24704625-9cce-4f47-847c-ab4d95d3adb1\") " pod="openstack-operators/openstack-operator-controller-manager-85988dbd5c-sb7sh" Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.947062 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/24704625-9cce-4f47-847c-ab4d95d3adb1-webhook-certs\") pod \"openstack-operator-controller-manager-85988dbd5c-sb7sh\" (UID: \"24704625-9cce-4f47-847c-ab4d95d3adb1\") " pod="openstack-operators/openstack-operator-controller-manager-85988dbd5c-sb7sh" Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.947111 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vd6wn\" (UniqueName: \"kubernetes.io/projected/f0d444ee-7bd9-40ed-ab3a-766aa716336c-kube-api-access-vd6wn\") pod \"rabbitmq-cluster-operator-manager-668c99d594-zxt7q\" (UID: \"f0d444ee-7bd9-40ed-ab3a-766aa716336c\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zxt7q" Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.947146 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmk7z\" (UniqueName: \"kubernetes.io/projected/1cfcc69c-1d21-4b1e-894d-d3ae72c39513-kube-api-access-zmk7z\") pod 
\"watcher-operator-controller-manager-5db88f68c-m62lh\" (UID: \"1cfcc69c-1d21-4b1e-894d-d3ae72c39513\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-m62lh" Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.947203 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5grp\" (UniqueName: \"kubernetes.io/projected/24704625-9cce-4f47-847c-ab4d95d3adb1-kube-api-access-r5grp\") pod \"openstack-operator-controller-manager-85988dbd5c-sb7sh\" (UID: \"24704625-9cce-4f47-847c-ab4d95d3adb1\") " pod="openstack-operators/openstack-operator-controller-manager-85988dbd5c-sb7sh" Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.960803 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-29v4v" Feb 16 23:01:02 crc kubenswrapper[4865]: I0216 23:01:02.978859 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmk7z\" (UniqueName: \"kubernetes.io/projected/1cfcc69c-1d21-4b1e-894d-d3ae72c39513-kube-api-access-zmk7z\") pod \"watcher-operator-controller-manager-5db88f68c-m62lh\" (UID: \"1cfcc69c-1d21-4b1e-894d-d3ae72c39513\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-m62lh" Feb 16 23:01:03 crc kubenswrapper[4865]: I0216 23:01:03.049634 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/24704625-9cce-4f47-847c-ab4d95d3adb1-metrics-certs\") pod \"openstack-operator-controller-manager-85988dbd5c-sb7sh\" (UID: \"24704625-9cce-4f47-847c-ab4d95d3adb1\") " pod="openstack-operators/openstack-operator-controller-manager-85988dbd5c-sb7sh" Feb 16 23:01:03 crc kubenswrapper[4865]: I0216 23:01:03.049696 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/a9614d13-aca5-4ffa-9cc1-dd8767e11ac4-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cts9lz\" (UID: \"a9614d13-aca5-4ffa-9cc1-dd8767e11ac4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cts9lz" Feb 16 23:01:03 crc kubenswrapper[4865]: I0216 23:01:03.049725 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/24704625-9cce-4f47-847c-ab4d95d3adb1-webhook-certs\") pod \"openstack-operator-controller-manager-85988dbd5c-sb7sh\" (UID: \"24704625-9cce-4f47-847c-ab4d95d3adb1\") " pod="openstack-operators/openstack-operator-controller-manager-85988dbd5c-sb7sh" Feb 16 23:01:03 crc kubenswrapper[4865]: I0216 23:01:03.049748 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vd6wn\" (UniqueName: \"kubernetes.io/projected/f0d444ee-7bd9-40ed-ab3a-766aa716336c-kube-api-access-vd6wn\") pod \"rabbitmq-cluster-operator-manager-668c99d594-zxt7q\" (UID: \"f0d444ee-7bd9-40ed-ab3a-766aa716336c\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zxt7q" Feb 16 23:01:03 crc kubenswrapper[4865]: I0216 23:01:03.049776 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5grp\" (UniqueName: \"kubernetes.io/projected/24704625-9cce-4f47-847c-ab4d95d3adb1-kube-api-access-r5grp\") pod \"openstack-operator-controller-manager-85988dbd5c-sb7sh\" (UID: \"24704625-9cce-4f47-847c-ab4d95d3adb1\") " pod="openstack-operators/openstack-operator-controller-manager-85988dbd5c-sb7sh" Feb 16 23:01:03 crc kubenswrapper[4865]: E0216 23:01:03.049824 4865 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 23:01:03 crc kubenswrapper[4865]: E0216 23:01:03.049921 4865 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/24704625-9cce-4f47-847c-ab4d95d3adb1-metrics-certs podName:24704625-9cce-4f47-847c-ab4d95d3adb1 nodeName:}" failed. No retries permitted until 2026-02-16 23:01:03.549900311 +0000 UTC m=+903.873607272 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/24704625-9cce-4f47-847c-ab4d95d3adb1-metrics-certs") pod "openstack-operator-controller-manager-85988dbd5c-sb7sh" (UID: "24704625-9cce-4f47-847c-ab4d95d3adb1") : secret "metrics-server-cert" not found Feb 16 23:01:03 crc kubenswrapper[4865]: E0216 23:01:03.050192 4865 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 23:01:03 crc kubenswrapper[4865]: E0216 23:01:03.050225 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24704625-9cce-4f47-847c-ab4d95d3adb1-webhook-certs podName:24704625-9cce-4f47-847c-ab4d95d3adb1 nodeName:}" failed. No retries permitted until 2026-02-16 23:01:03.55021795 +0000 UTC m=+903.873924911 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/24704625-9cce-4f47-847c-ab4d95d3adb1-webhook-certs") pod "openstack-operator-controller-manager-85988dbd5c-sb7sh" (UID: "24704625-9cce-4f47-847c-ab4d95d3adb1") : secret "webhook-server-cert" not found Feb 16 23:01:03 crc kubenswrapper[4865]: E0216 23:01:03.050271 4865 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 23:01:03 crc kubenswrapper[4865]: E0216 23:01:03.050313 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a9614d13-aca5-4ffa-9cc1-dd8767e11ac4-cert podName:a9614d13-aca5-4ffa-9cc1-dd8767e11ac4 nodeName:}" failed. 
No retries permitted until 2026-02-16 23:01:04.050307293 +0000 UTC m=+904.374014254 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a9614d13-aca5-4ffa-9cc1-dd8767e11ac4-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cts9lz" (UID: "a9614d13-aca5-4ffa-9cc1-dd8767e11ac4") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 23:01:03 crc kubenswrapper[4865]: I0216 23:01:03.060381 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-5ntcr"] Feb 16 23:01:03 crc kubenswrapper[4865]: I0216 23:01:03.070117 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5grp\" (UniqueName: \"kubernetes.io/projected/24704625-9cce-4f47-847c-ab4d95d3adb1-kube-api-access-r5grp\") pod \"openstack-operator-controller-manager-85988dbd5c-sb7sh\" (UID: \"24704625-9cce-4f47-847c-ab4d95d3adb1\") " pod="openstack-operators/openstack-operator-controller-manager-85988dbd5c-sb7sh" Feb 16 23:01:03 crc kubenswrapper[4865]: I0216 23:01:03.070612 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vd6wn\" (UniqueName: \"kubernetes.io/projected/f0d444ee-7bd9-40ed-ab3a-766aa716336c-kube-api-access-vd6wn\") pod \"rabbitmq-cluster-operator-manager-668c99d594-zxt7q\" (UID: \"f0d444ee-7bd9-40ed-ab3a-766aa716336c\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zxt7q" Feb 16 23:01:03 crc kubenswrapper[4865]: I0216 23:01:03.071852 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-msxb8"] Feb 16 23:01:03 crc kubenswrapper[4865]: I0216 23:01:03.079061 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-kvmr9"] Feb 16 23:01:03 crc kubenswrapper[4865]: I0216 23:01:03.099743 4865 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-m62lh" Feb 16 23:01:03 crc kubenswrapper[4865]: W0216 23:01:03.108306 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod61ac3013_99f7_4aef_b85d_8675044accc6.slice/crio-e5e60aa01a761fbfd4ec689dd160bad89d1481dd6ec5192c4e7e7c184e32b4dd WatchSource:0}: Error finding container e5e60aa01a761fbfd4ec689dd160bad89d1481dd6ec5192c4e7e7c184e32b4dd: Status 404 returned error can't find the container with id e5e60aa01a761fbfd4ec689dd160bad89d1481dd6ec5192c4e7e7c184e32b4dd Feb 16 23:01:03 crc kubenswrapper[4865]: W0216 23:01:03.111910 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod829ee3ed_5827_46ee_8399_f0b82ffa4d1d.slice/crio-7f66f67e5e80f95ec76b5fb612c3218dd095e61c73bd7eb9e817e466abb89536 WatchSource:0}: Error finding container 7f66f67e5e80f95ec76b5fb612c3218dd095e61c73bd7eb9e817e466abb89536: Status 404 returned error can't find the container with id 7f66f67e5e80f95ec76b5fb612c3218dd095e61c73bd7eb9e817e466abb89536 Feb 16 23:01:03 crc kubenswrapper[4865]: W0216 23:01:03.112571 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a4e76a0_aa8b_4ee8_b7a8_dc43a376c4ec.slice/crio-5940482038aa2150a4bb526b720a7ea23ea09997badd4f96f5ebb61a679a524d WatchSource:0}: Error finding container 5940482038aa2150a4bb526b720a7ea23ea09997badd4f96f5ebb61a679a524d: Status 404 returned error can't find the container with id 5940482038aa2150a4bb526b720a7ea23ea09997badd4f96f5ebb61a679a524d Feb 16 23:01:03 crc kubenswrapper[4865]: I0216 23:01:03.202778 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zxt7q" Feb 16 23:01:03 crc kubenswrapper[4865]: I0216 23:01:03.399613 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-5ntcr" event={"ID":"829ee3ed-5827-46ee-8399-f0b82ffa4d1d","Type":"ContainerStarted","Data":"7f66f67e5e80f95ec76b5fb612c3218dd095e61c73bd7eb9e817e466abb89536"} Feb 16 23:01:03 crc kubenswrapper[4865]: I0216 23:01:03.403528 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-msxb8" event={"ID":"3a4e76a0-aa8b-4ee8-b7a8-dc43a376c4ec","Type":"ContainerStarted","Data":"5940482038aa2150a4bb526b720a7ea23ea09997badd4f96f5ebb61a679a524d"} Feb 16 23:01:03 crc kubenswrapper[4865]: I0216 23:01:03.408521 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-7nt98" event={"ID":"a5025501-39c8-43ae-8b94-3a555517b1f7","Type":"ContainerStarted","Data":"617cacff2b8985d906b143e2332ad6b3dab6d52680155995eb573c1c7c1c5d90"} Feb 16 23:01:03 crc kubenswrapper[4865]: I0216 23:01:03.411755 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-kvmr9" event={"ID":"61ac3013-99f7-4aef-b85d-8675044accc6","Type":"ContainerStarted","Data":"e5e60aa01a761fbfd4ec689dd160bad89d1481dd6ec5192c4e7e7c184e32b4dd"} Feb 16 23:01:03 crc kubenswrapper[4865]: I0216 23:01:03.458260 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-jmmxl"] Feb 16 23:01:03 crc kubenswrapper[4865]: W0216 23:01:03.463657 4865 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf1b2a884_8e78_47ac_9c45_7861a81e02d4.slice/crio-e585d6276083be25ad642ece30b122084e82de670dabb8949dddee0f2785de64 WatchSource:0}: Error finding container e585d6276083be25ad642ece30b122084e82de670dabb8949dddee0f2785de64: Status 404 returned error can't find the container with id e585d6276083be25ad642ece30b122084e82de670dabb8949dddee0f2785de64 Feb 16 23:01:03 crc kubenswrapper[4865]: I0216 23:01:03.463872 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-vc4dc"] Feb 16 23:01:03 crc kubenswrapper[4865]: W0216 23:01:03.466740 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73e02b9a_66d8_4fb4_bc3d_13610563b6e4.slice/crio-ae59fda1147ea35d0860e8010c16ebee997e145771d8cd5aa6ad8edfc78846ab WatchSource:0}: Error finding container ae59fda1147ea35d0860e8010c16ebee997e145771d8cd5aa6ad8edfc78846ab: Status 404 returned error can't find the container with id ae59fda1147ea35d0860e8010c16ebee997e145771d8cd5aa6ad8edfc78846ab Feb 16 23:01:03 crc kubenswrapper[4865]: I0216 23:01:03.560950 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-xzpc9"] Feb 16 23:01:03 crc kubenswrapper[4865]: W0216 23:01:03.561065 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3be11752_93fd_4edc_b100_0bfd29f599e8.slice/crio-dab4d2403751a4c3f5a07704f74fc70ebf114eb32aeab1834375bab7cc56b69d WatchSource:0}: Error finding container dab4d2403751a4c3f5a07704f74fc70ebf114eb32aeab1834375bab7cc56b69d: Status 404 returned error can't find the container with id dab4d2403751a4c3f5a07704f74fc70ebf114eb32aeab1834375bab7cc56b69d Feb 16 23:01:03 crc kubenswrapper[4865]: I0216 23:01:03.561297 4865 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/24704625-9cce-4f47-847c-ab4d95d3adb1-metrics-certs\") pod \"openstack-operator-controller-manager-85988dbd5c-sb7sh\" (UID: \"24704625-9cce-4f47-847c-ab4d95d3adb1\") " pod="openstack-operators/openstack-operator-controller-manager-85988dbd5c-sb7sh" Feb 16 23:01:03 crc kubenswrapper[4865]: I0216 23:01:03.561386 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/24704625-9cce-4f47-847c-ab4d95d3adb1-webhook-certs\") pod \"openstack-operator-controller-manager-85988dbd5c-sb7sh\" (UID: \"24704625-9cce-4f47-847c-ab4d95d3adb1\") " pod="openstack-operators/openstack-operator-controller-manager-85988dbd5c-sb7sh" Feb 16 23:01:03 crc kubenswrapper[4865]: E0216 23:01:03.561568 4865 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 23:01:03 crc kubenswrapper[4865]: E0216 23:01:03.561642 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24704625-9cce-4f47-847c-ab4d95d3adb1-webhook-certs podName:24704625-9cce-4f47-847c-ab4d95d3adb1 nodeName:}" failed. No retries permitted until 2026-02-16 23:01:04.561623303 +0000 UTC m=+904.885330264 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/24704625-9cce-4f47-847c-ab4d95d3adb1-webhook-certs") pod "openstack-operator-controller-manager-85988dbd5c-sb7sh" (UID: "24704625-9cce-4f47-847c-ab4d95d3adb1") : secret "webhook-server-cert" not found Feb 16 23:01:03 crc kubenswrapper[4865]: E0216 23:01:03.561962 4865 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 23:01:03 crc kubenswrapper[4865]: E0216 23:01:03.562051 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24704625-9cce-4f47-847c-ab4d95d3adb1-metrics-certs podName:24704625-9cce-4f47-847c-ab4d95d3adb1 nodeName:}" failed. No retries permitted until 2026-02-16 23:01:04.562025935 +0000 UTC m=+904.885732896 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/24704625-9cce-4f47-847c-ab4d95d3adb1-metrics-certs") pod "openstack-operator-controller-manager-85988dbd5c-sb7sh" (UID: "24704625-9cce-4f47-847c-ab4d95d3adb1") : secret "metrics-server-cert" not found Feb 16 23:01:03 crc kubenswrapper[4865]: W0216 23:01:03.567651 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod68b414dd_a0c6_488a_b253_1a3f477cb7a8.slice/crio-1ee21bbfa74ac1f0df35e3dc55802bbbb003bb5c83556873719059c74033c927 WatchSource:0}: Error finding container 1ee21bbfa74ac1f0df35e3dc55802bbbb003bb5c83556873719059c74033c927: Status 404 returned error can't find the container with id 1ee21bbfa74ac1f0df35e3dc55802bbbb003bb5c83556873719059c74033c927 Feb 16 23:01:03 crc kubenswrapper[4865]: I0216 23:01:03.576203 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-stf5r"] Feb 16 23:01:03 crc kubenswrapper[4865]: I0216 23:01:03.665491 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-phdd6"] Feb 16 23:01:03 crc kubenswrapper[4865]: I0216 23:01:03.684071 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-wl4zd"] Feb 16 23:01:03 crc kubenswrapper[4865]: W0216 23:01:03.692111 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b4f33b1_b5a3_4935_8036_deb97cfedfe7.slice/crio-aee314f742f6d138af7eb8347a15592c050fafa99f1b0eed88c7d02ea9d47811 WatchSource:0}: Error finding container aee314f742f6d138af7eb8347a15592c050fafa99f1b0eed88c7d02ea9d47811: Status 404 returned error can't find the container with id aee314f742f6d138af7eb8347a15592c050fafa99f1b0eed88c7d02ea9d47811 Feb 16 23:01:03 crc kubenswrapper[4865]: I0216 23:01:03.692625 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-m8rmj"] Feb 16 23:01:03 crc kubenswrapper[4865]: W0216 23:01:03.696583 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddded450f_3a37_48b0_84fc_1de3c64c1954.slice/crio-396986665f9a2ae8625f299303e2ee80e1af988b61dce6638064c99821e4c84f WatchSource:0}: Error finding container 396986665f9a2ae8625f299303e2ee80e1af988b61dce6638064c99821e4c84f: Status 404 returned error can't find the container with id 396986665f9a2ae8625f299303e2ee80e1af988b61dce6638064c99821e4c84f Feb 16 23:01:03 crc kubenswrapper[4865]: I0216 23:01:03.699111 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-wk47p"] Feb 16 23:01:03 crc kubenswrapper[4865]: W0216 23:01:03.704045 4865 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2e7b18d_0e13_4ef4_a4e2_d10b5f55763b.slice/crio-b30dc542b71dca0b7cbf1969447e73effbecb9083282091bde2f5e77ba94c401 WatchSource:0}: Error finding container b30dc542b71dca0b7cbf1969447e73effbecb9083282091bde2f5e77ba94c401: Status 404 returned error can't find the container with id b30dc542b71dca0b7cbf1969447e73effbecb9083282091bde2f5e77ba94c401 Feb 16 23:01:03 crc kubenswrapper[4865]: I0216 23:01:03.705269 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-946kc"] Feb 16 23:01:03 crc kubenswrapper[4865]: W0216 23:01:03.718665 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1759f72_1644_42e2_9b67_01478800870b.slice/crio-b1bffabb1de991dc74d9c481698cb89e161b30ab15064a72c7d11c22dc1d81ea WatchSource:0}: Error finding container b1bffabb1de991dc74d9c481698cb89e161b30ab15064a72c7d11c22dc1d81ea: Status 404 returned error can't find the container with id b1bffabb1de991dc74d9c481698cb89e161b30ab15064a72c7d11c22dc1d81ea Feb 16 23:01:03 crc kubenswrapper[4865]: I0216 23:01:03.721542 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-hw4fs"] Feb 16 23:01:03 crc kubenswrapper[4865]: I0216 23:01:03.764549 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/812a9f63-a231-495c-9474-0c60929fabff-cert\") pod \"infra-operator-controller-manager-79d975b745-mt6fh\" (UID: \"812a9f63-a231-495c-9474-0c60929fabff\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-mt6fh" Feb 16 23:01:03 crc kubenswrapper[4865]: E0216 23:01:03.764789 4865 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 23:01:03 crc 
kubenswrapper[4865]: E0216 23:01:03.764910 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/812a9f63-a231-495c-9474-0c60929fabff-cert podName:812a9f63-a231-495c-9474-0c60929fabff nodeName:}" failed. No retries permitted until 2026-02-16 23:01:05.764880872 +0000 UTC m=+906.088587833 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/812a9f63-a231-495c-9474-0c60929fabff-cert") pod "infra-operator-controller-manager-79d975b745-mt6fh" (UID: "812a9f63-a231-495c-9474-0c60929fabff") : secret "infra-operator-webhook-server-cert" not found Feb 16 23:01:03 crc kubenswrapper[4865]: I0216 23:01:03.876394 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-m62lh"] Feb 16 23:01:03 crc kubenswrapper[4865]: W0216 23:01:03.886343 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d77ae74_7238_4c9f_8ae1_33064d8824c2.slice/crio-201566c2bcd88a7c90dfcc91304afd53a330532976faad5b5c2b7595a8e0f163 WatchSource:0}: Error finding container 201566c2bcd88a7c90dfcc91304afd53a330532976faad5b5c2b7595a8e0f163: Status 404 returned error can't find the container with id 201566c2bcd88a7c90dfcc91304afd53a330532976faad5b5c2b7595a8e0f163 Feb 16 23:01:03 crc kubenswrapper[4865]: E0216 23:01:03.886996 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zmk7z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-5db88f68c-m62lh_openstack-operators(1cfcc69c-1d21-4b1e-894d-d3ae72c39513): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 16 23:01:03 crc kubenswrapper[4865]: I0216 23:01:03.887230 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-29v4v"] Feb 16 23:01:03 crc kubenswrapper[4865]: E0216 23:01:03.888443 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-m62lh" podUID="1cfcc69c-1d21-4b1e-894d-d3ae72c39513" Feb 16 23:01:03 crc kubenswrapper[4865]: W0216 23:01:03.889722 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda795bac_53b5_415b_9297_26e5502fceb8.slice/crio-ffe44375e58a62c5337f4003704914f8229801aa919c61ca8a2756bc51a8876e WatchSource:0}: Error finding container ffe44375e58a62c5337f4003704914f8229801aa919c61ca8a2756bc51a8876e: Status 404 returned error can't find the container with id 
ffe44375e58a62c5337f4003704914f8229801aa919c61ca8a2756bc51a8876e Feb 16 23:01:03 crc kubenswrapper[4865]: I0216 23:01:03.893592 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-dz8t2"] Feb 16 23:01:03 crc kubenswrapper[4865]: W0216 23:01:03.894567 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod196fc76c_2c5d_45ec_8106_4d0a3382d16e.slice/crio-39ac175bf21354c6f256dbbf81fe73b7eb35f6f1daaa7d50b0cc79621af4a010 WatchSource:0}: Error finding container 39ac175bf21354c6f256dbbf81fe73b7eb35f6f1daaa7d50b0cc79621af4a010: Status 404 returned error can't find the container with id 39ac175bf21354c6f256dbbf81fe73b7eb35f6f1daaa7d50b0cc79621af4a010 Feb 16 23:01:03 crc kubenswrapper[4865]: E0216 23:01:03.894668 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:66a4b9322ebb573313178ea88e31026d4532f461592b9fae2dff71efd9256d99,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h7pfx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-7f45b4ff68-dz8t2_openstack-operators(da795bac-53b5-415b-9297-26e5502fceb8): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 16 23:01:03 crc kubenswrapper[4865]: E0216 23:01:03.894788 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2v6j5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-7866795846-29v4v_openstack-operators(21f8cf30-0215-4501-af0f-ff1220d4252b): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 16 23:01:03 crc kubenswrapper[4865]: E0216 23:01:03.894899 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-swj2k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68f46476f-s9vk2_openstack-operators(5d77ae74-7238-4c9f-8ae1-33064d8824c2): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 16 23:01:03 crc kubenswrapper[4865]: E0216 23:01:03.895749 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-dz8t2" podUID="da795bac-53b5-415b-9297-26e5502fceb8" Feb 16 23:01:03 crc 
kubenswrapper[4865]: E0216 23:01:03.896329 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-s9vk2" podUID="5d77ae74-7238-4c9f-8ae1-33064d8824c2" Feb 16 23:01:03 crc kubenswrapper[4865]: E0216 23:01:03.896392 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-7866795846-29v4v" podUID="21f8cf30-0215-4501-af0f-ff1220d4252b" Feb 16 23:01:03 crc kubenswrapper[4865]: E0216 23:01:03.897304 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5r26f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-8497b45c89-9tnrt_openstack-operators(196fc76c-2c5d-45ec-8106-4d0a3382d16e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 16 23:01:03 crc kubenswrapper[4865]: W0216 23:01:03.898375 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf0d444ee_7bd9_40ed_ab3a_766aa716336c.slice/crio-31b9d363d12e4da1ebb36f6f79336b3c1c3824b52fdd00dd47d4d73d580575fc WatchSource:0}: Error finding container 31b9d363d12e4da1ebb36f6f79336b3c1c3824b52fdd00dd47d4d73d580575fc: Status 404 returned error can't find the container with id 31b9d363d12e4da1ebb36f6f79336b3c1c3824b52fdd00dd47d4d73d580575fc Feb 16 23:01:03 crc kubenswrapper[4865]: E0216 23:01:03.898421 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" 
with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-9tnrt" podUID="196fc76c-2c5d-45ec-8106-4d0a3382d16e" Feb 16 23:01:03 crc kubenswrapper[4865]: E0216 23:01:03.901505 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vd6wn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-zxt7q_openstack-operators(f0d444ee-7bd9-40ed-ab3a-766aa716336c): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 16 23:01:03 crc kubenswrapper[4865]: I0216 23:01:03.901927 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zxt7q"] Feb 16 23:01:03 crc kubenswrapper[4865]: E0216 23:01:03.902769 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zxt7q" podUID="f0d444ee-7bd9-40ed-ab3a-766aa716336c" Feb 16 23:01:03 crc kubenswrapper[4865]: I0216 23:01:03.910705 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-s9vk2"] Feb 16 23:01:03 crc kubenswrapper[4865]: I0216 23:01:03.919168 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-9tnrt"] Feb 16 
23:01:04 crc kubenswrapper[4865]: I0216 23:01:04.073069 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a9614d13-aca5-4ffa-9cc1-dd8767e11ac4-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cts9lz\" (UID: \"a9614d13-aca5-4ffa-9cc1-dd8767e11ac4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cts9lz" Feb 16 23:01:04 crc kubenswrapper[4865]: E0216 23:01:04.073331 4865 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 23:01:04 crc kubenswrapper[4865]: E0216 23:01:04.073451 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a9614d13-aca5-4ffa-9cc1-dd8767e11ac4-cert podName:a9614d13-aca5-4ffa-9cc1-dd8767e11ac4 nodeName:}" failed. No retries permitted until 2026-02-16 23:01:06.073420626 +0000 UTC m=+906.397127587 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a9614d13-aca5-4ffa-9cc1-dd8767e11ac4-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cts9lz" (UID: "a9614d13-aca5-4ffa-9cc1-dd8767e11ac4") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 23:01:04 crc kubenswrapper[4865]: I0216 23:01:04.426160 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-9tnrt" event={"ID":"196fc76c-2c5d-45ec-8106-4d0a3382d16e","Type":"ContainerStarted","Data":"39ac175bf21354c6f256dbbf81fe73b7eb35f6f1daaa7d50b0cc79621af4a010"} Feb 16 23:01:04 crc kubenswrapper[4865]: I0216 23:01:04.428268 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-wl4zd" event={"ID":"dc7842ab-52e5-4223-8b2a-ab09641bf297","Type":"ContainerStarted","Data":"68cd6d92b0c96e69f97d96edaa9bfc4396beac4c72548a69abd1231543e4db79"} Feb 16 23:01:04 crc kubenswrapper[4865]: E0216 23:01:04.428652 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-9tnrt" podUID="196fc76c-2c5d-45ec-8106-4d0a3382d16e" Feb 16 23:01:04 crc kubenswrapper[4865]: I0216 23:01:04.432229 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-s9vk2" event={"ID":"5d77ae74-7238-4c9f-8ae1-33064d8824c2","Type":"ContainerStarted","Data":"201566c2bcd88a7c90dfcc91304afd53a330532976faad5b5c2b7595a8e0f163"} Feb 16 23:01:04 crc kubenswrapper[4865]: E0216 23:01:04.433757 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-s9vk2" podUID="5d77ae74-7238-4c9f-8ae1-33064d8824c2" Feb 16 23:01:04 crc kubenswrapper[4865]: I0216 23:01:04.435159 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-946kc" event={"ID":"60e0dd0a-0055-45ec-8a4c-f0c23cd214b6","Type":"ContainerStarted","Data":"41e99a805f16473f176ff36c3f90dd05d9466bb6fa01459b199afc0d52b09589"} Feb 16 23:01:04 crc kubenswrapper[4865]: I0216 23:01:04.438343 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-jmmxl" event={"ID":"f1b2a884-8e78-47ac-9c45-7861a81e02d4","Type":"ContainerStarted","Data":"e585d6276083be25ad642ece30b122084e82de670dabb8949dddee0f2785de64"} Feb 16 23:01:04 crc kubenswrapper[4865]: I0216 23:01:04.462438 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-xzpc9" event={"ID":"3be11752-93fd-4edc-b100-0bfd29f599e8","Type":"ContainerStarted","Data":"dab4d2403751a4c3f5a07704f74fc70ebf114eb32aeab1834375bab7cc56b69d"} Feb 16 23:01:04 crc kubenswrapper[4865]: I0216 23:01:04.466635 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-stf5r" event={"ID":"68b414dd-a0c6-488a-b253-1a3f477cb7a8","Type":"ContainerStarted","Data":"1ee21bbfa74ac1f0df35e3dc55802bbbb003bb5c83556873719059c74033c927"} Feb 16 23:01:04 crc kubenswrapper[4865]: I0216 23:01:04.470464 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-vc4dc" 
event={"ID":"73e02b9a-66d8-4fb4-bc3d-13610563b6e4","Type":"ContainerStarted","Data":"ae59fda1147ea35d0860e8010c16ebee997e145771d8cd5aa6ad8edfc78846ab"} Feb 16 23:01:04 crc kubenswrapper[4865]: I0216 23:01:04.472112 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zxt7q" event={"ID":"f0d444ee-7bd9-40ed-ab3a-766aa716336c","Type":"ContainerStarted","Data":"31b9d363d12e4da1ebb36f6f79336b3c1c3824b52fdd00dd47d4d73d580575fc"} Feb 16 23:01:04 crc kubenswrapper[4865]: E0216 23:01:04.474551 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zxt7q" podUID="f0d444ee-7bd9-40ed-ab3a-766aa716336c" Feb 16 23:01:04 crc kubenswrapper[4865]: I0216 23:01:04.474584 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-wk47p" event={"ID":"f2e7b18d-0e13-4ef4-a4e2-d10b5f55763b","Type":"ContainerStarted","Data":"b30dc542b71dca0b7cbf1969447e73effbecb9083282091bde2f5e77ba94c401"} Feb 16 23:01:04 crc kubenswrapper[4865]: I0216 23:01:04.476151 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-dz8t2" event={"ID":"da795bac-53b5-415b-9297-26e5502fceb8","Type":"ContainerStarted","Data":"ffe44375e58a62c5337f4003704914f8229801aa919c61ca8a2756bc51a8876e"} Feb 16 23:01:04 crc kubenswrapper[4865]: E0216 23:01:04.477334 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:66a4b9322ebb573313178ea88e31026d4532f461592b9fae2dff71efd9256d99\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-dz8t2" podUID="da795bac-53b5-415b-9297-26e5502fceb8" Feb 16 23:01:04 crc kubenswrapper[4865]: I0216 23:01:04.477894 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-29v4v" event={"ID":"21f8cf30-0215-4501-af0f-ff1220d4252b","Type":"ContainerStarted","Data":"7bec00c9d4759e6277300a2808e656f3dc91e3256005bf03626ef1bb78e15532"} Feb 16 23:01:04 crc kubenswrapper[4865]: E0216 23:01:04.483191 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6\\\"\"" pod="openstack-operators/test-operator-controller-manager-7866795846-29v4v" podUID="21f8cf30-0215-4501-af0f-ff1220d4252b" Feb 16 23:01:04 crc kubenswrapper[4865]: I0216 23:01:04.496968 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-m8rmj" event={"ID":"2b4f33b1-b5a3-4935-8036-deb97cfedfe7","Type":"ContainerStarted","Data":"aee314f742f6d138af7eb8347a15592c050fafa99f1b0eed88c7d02ea9d47811"} Feb 16 23:01:04 crc kubenswrapper[4865]: I0216 23:01:04.500548 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-phdd6" event={"ID":"dded450f-3a37-48b0-84fc-1de3c64c1954","Type":"ContainerStarted","Data":"396986665f9a2ae8625f299303e2ee80e1af988b61dce6638064c99821e4c84f"} Feb 16 23:01:04 crc kubenswrapper[4865]: I0216 23:01:04.507541 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-hw4fs" 
event={"ID":"a1759f72-1644-42e2-9b67-01478800870b","Type":"ContainerStarted","Data":"b1bffabb1de991dc74d9c481698cb89e161b30ab15064a72c7d11c22dc1d81ea"} Feb 16 23:01:04 crc kubenswrapper[4865]: I0216 23:01:04.509540 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-m62lh" event={"ID":"1cfcc69c-1d21-4b1e-894d-d3ae72c39513","Type":"ContainerStarted","Data":"01b748ee8e620003a2d498ec7b6c0e68f0d5473c2320099851062711252240f9"} Feb 16 23:01:04 crc kubenswrapper[4865]: E0216 23:01:04.511257 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-m62lh" podUID="1cfcc69c-1d21-4b1e-894d-d3ae72c39513" Feb 16 23:01:04 crc kubenswrapper[4865]: I0216 23:01:04.582504 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/24704625-9cce-4f47-847c-ab4d95d3adb1-metrics-certs\") pod \"openstack-operator-controller-manager-85988dbd5c-sb7sh\" (UID: \"24704625-9cce-4f47-847c-ab4d95d3adb1\") " pod="openstack-operators/openstack-operator-controller-manager-85988dbd5c-sb7sh" Feb 16 23:01:04 crc kubenswrapper[4865]: I0216 23:01:04.582582 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/24704625-9cce-4f47-847c-ab4d95d3adb1-webhook-certs\") pod \"openstack-operator-controller-manager-85988dbd5c-sb7sh\" (UID: \"24704625-9cce-4f47-847c-ab4d95d3adb1\") " pod="openstack-operators/openstack-operator-controller-manager-85988dbd5c-sb7sh" Feb 16 23:01:04 crc kubenswrapper[4865]: E0216 23:01:04.582764 4865 secret.go:188] Couldn't get secret 
openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 23:01:04 crc kubenswrapper[4865]: E0216 23:01:04.582786 4865 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 23:01:04 crc kubenswrapper[4865]: E0216 23:01:04.582851 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24704625-9cce-4f47-847c-ab4d95d3adb1-webhook-certs podName:24704625-9cce-4f47-847c-ab4d95d3adb1 nodeName:}" failed. No retries permitted until 2026-02-16 23:01:06.582831183 +0000 UTC m=+906.906538144 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/24704625-9cce-4f47-847c-ab4d95d3adb1-webhook-certs") pod "openstack-operator-controller-manager-85988dbd5c-sb7sh" (UID: "24704625-9cce-4f47-847c-ab4d95d3adb1") : secret "webhook-server-cert" not found Feb 16 23:01:04 crc kubenswrapper[4865]: E0216 23:01:04.582942 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24704625-9cce-4f47-847c-ab4d95d3adb1-metrics-certs podName:24704625-9cce-4f47-847c-ab4d95d3adb1 nodeName:}" failed. No retries permitted until 2026-02-16 23:01:06.582921756 +0000 UTC m=+906.906628717 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/24704625-9cce-4f47-847c-ab4d95d3adb1-metrics-certs") pod "openstack-operator-controller-manager-85988dbd5c-sb7sh" (UID: "24704625-9cce-4f47-847c-ab4d95d3adb1") : secret "metrics-server-cert" not found Feb 16 23:01:05 crc kubenswrapper[4865]: E0216 23:01:05.545702 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-s9vk2" podUID="5d77ae74-7238-4c9f-8ae1-33064d8824c2" Feb 16 23:01:05 crc kubenswrapper[4865]: E0216 23:01:05.546063 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6\\\"\"" pod="openstack-operators/test-operator-controller-manager-7866795846-29v4v" podUID="21f8cf30-0215-4501-af0f-ff1220d4252b" Feb 16 23:01:05 crc kubenswrapper[4865]: E0216 23:01:05.545732 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-9tnrt" podUID="196fc76c-2c5d-45ec-8106-4d0a3382d16e" Feb 16 23:01:05 crc kubenswrapper[4865]: E0216 23:01:05.545730 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zxt7q" podUID="f0d444ee-7bd9-40ed-ab3a-766aa716336c" Feb 16 23:01:05 crc kubenswrapper[4865]: E0216 23:01:05.547651 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-m62lh" podUID="1cfcc69c-1d21-4b1e-894d-d3ae72c39513" Feb 16 23:01:05 crc kubenswrapper[4865]: E0216 23:01:05.548895 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:66a4b9322ebb573313178ea88e31026d4532f461592b9fae2dff71efd9256d99\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-dz8t2" podUID="da795bac-53b5-415b-9297-26e5502fceb8" Feb 16 23:01:05 crc kubenswrapper[4865]: I0216 23:01:05.812420 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/812a9f63-a231-495c-9474-0c60929fabff-cert\") pod \"infra-operator-controller-manager-79d975b745-mt6fh\" (UID: \"812a9f63-a231-495c-9474-0c60929fabff\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-mt6fh" Feb 16 23:01:05 crc kubenswrapper[4865]: E0216 23:01:05.812619 4865 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 23:01:05 crc kubenswrapper[4865]: E0216 23:01:05.812726 4865 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/812a9f63-a231-495c-9474-0c60929fabff-cert podName:812a9f63-a231-495c-9474-0c60929fabff nodeName:}" failed. No retries permitted until 2026-02-16 23:01:09.812692014 +0000 UTC m=+910.136398975 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/812a9f63-a231-495c-9474-0c60929fabff-cert") pod "infra-operator-controller-manager-79d975b745-mt6fh" (UID: "812a9f63-a231-495c-9474-0c60929fabff") : secret "infra-operator-webhook-server-cert" not found Feb 16 23:01:06 crc kubenswrapper[4865]: I0216 23:01:06.119857 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a9614d13-aca5-4ffa-9cc1-dd8767e11ac4-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cts9lz\" (UID: \"a9614d13-aca5-4ffa-9cc1-dd8767e11ac4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cts9lz" Feb 16 23:01:06 crc kubenswrapper[4865]: E0216 23:01:06.120093 4865 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 23:01:06 crc kubenswrapper[4865]: E0216 23:01:06.120240 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a9614d13-aca5-4ffa-9cc1-dd8767e11ac4-cert podName:a9614d13-aca5-4ffa-9cc1-dd8767e11ac4 nodeName:}" failed. No retries permitted until 2026-02-16 23:01:10.120214424 +0000 UTC m=+910.443921385 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a9614d13-aca5-4ffa-9cc1-dd8767e11ac4-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cts9lz" (UID: "a9614d13-aca5-4ffa-9cc1-dd8767e11ac4") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 23:01:06 crc kubenswrapper[4865]: I0216 23:01:06.632021 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/24704625-9cce-4f47-847c-ab4d95d3adb1-metrics-certs\") pod \"openstack-operator-controller-manager-85988dbd5c-sb7sh\" (UID: \"24704625-9cce-4f47-847c-ab4d95d3adb1\") " pod="openstack-operators/openstack-operator-controller-manager-85988dbd5c-sb7sh" Feb 16 23:01:06 crc kubenswrapper[4865]: I0216 23:01:06.632102 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/24704625-9cce-4f47-847c-ab4d95d3adb1-webhook-certs\") pod \"openstack-operator-controller-manager-85988dbd5c-sb7sh\" (UID: \"24704625-9cce-4f47-847c-ab4d95d3adb1\") " pod="openstack-operators/openstack-operator-controller-manager-85988dbd5c-sb7sh" Feb 16 23:01:06 crc kubenswrapper[4865]: E0216 23:01:06.632245 4865 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 23:01:06 crc kubenswrapper[4865]: E0216 23:01:06.632347 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24704625-9cce-4f47-847c-ab4d95d3adb1-metrics-certs podName:24704625-9cce-4f47-847c-ab4d95d3adb1 nodeName:}" failed. No retries permitted until 2026-02-16 23:01:10.632327059 +0000 UTC m=+910.956034020 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/24704625-9cce-4f47-847c-ab4d95d3adb1-metrics-certs") pod "openstack-operator-controller-manager-85988dbd5c-sb7sh" (UID: "24704625-9cce-4f47-847c-ab4d95d3adb1") : secret "metrics-server-cert" not found Feb 16 23:01:06 crc kubenswrapper[4865]: E0216 23:01:06.632364 4865 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 23:01:06 crc kubenswrapper[4865]: E0216 23:01:06.632461 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24704625-9cce-4f47-847c-ab4d95d3adb1-webhook-certs podName:24704625-9cce-4f47-847c-ab4d95d3adb1 nodeName:}" failed. No retries permitted until 2026-02-16 23:01:10.632434582 +0000 UTC m=+910.956141543 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/24704625-9cce-4f47-847c-ab4d95d3adb1-webhook-certs") pod "openstack-operator-controller-manager-85988dbd5c-sb7sh" (UID: "24704625-9cce-4f47-847c-ab4d95d3adb1") : secret "webhook-server-cert" not found Feb 16 23:01:09 crc kubenswrapper[4865]: I0216 23:01:09.912473 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/812a9f63-a231-495c-9474-0c60929fabff-cert\") pod \"infra-operator-controller-manager-79d975b745-mt6fh\" (UID: \"812a9f63-a231-495c-9474-0c60929fabff\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-mt6fh" Feb 16 23:01:09 crc kubenswrapper[4865]: E0216 23:01:09.913489 4865 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 23:01:09 crc kubenswrapper[4865]: E0216 23:01:09.913611 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/812a9f63-a231-495c-9474-0c60929fabff-cert 
podName:812a9f63-a231-495c-9474-0c60929fabff nodeName:}" failed. No retries permitted until 2026-02-16 23:01:17.913574464 +0000 UTC m=+918.237281455 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/812a9f63-a231-495c-9474-0c60929fabff-cert") pod "infra-operator-controller-manager-79d975b745-mt6fh" (UID: "812a9f63-a231-495c-9474-0c60929fabff") : secret "infra-operator-webhook-server-cert" not found Feb 16 23:01:10 crc kubenswrapper[4865]: I0216 23:01:10.220053 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a9614d13-aca5-4ffa-9cc1-dd8767e11ac4-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cts9lz\" (UID: \"a9614d13-aca5-4ffa-9cc1-dd8767e11ac4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cts9lz" Feb 16 23:01:10 crc kubenswrapper[4865]: E0216 23:01:10.220316 4865 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 23:01:10 crc kubenswrapper[4865]: E0216 23:01:10.220440 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a9614d13-aca5-4ffa-9cc1-dd8767e11ac4-cert podName:a9614d13-aca5-4ffa-9cc1-dd8767e11ac4 nodeName:}" failed. No retries permitted until 2026-02-16 23:01:18.220413822 +0000 UTC m=+918.544120793 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a9614d13-aca5-4ffa-9cc1-dd8767e11ac4-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cts9lz" (UID: "a9614d13-aca5-4ffa-9cc1-dd8767e11ac4") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 23:01:10 crc kubenswrapper[4865]: I0216 23:01:10.735442 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/24704625-9cce-4f47-847c-ab4d95d3adb1-webhook-certs\") pod \"openstack-operator-controller-manager-85988dbd5c-sb7sh\" (UID: \"24704625-9cce-4f47-847c-ab4d95d3adb1\") " pod="openstack-operators/openstack-operator-controller-manager-85988dbd5c-sb7sh" Feb 16 23:01:10 crc kubenswrapper[4865]: I0216 23:01:10.735637 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/24704625-9cce-4f47-847c-ab4d95d3adb1-metrics-certs\") pod \"openstack-operator-controller-manager-85988dbd5c-sb7sh\" (UID: \"24704625-9cce-4f47-847c-ab4d95d3adb1\") " pod="openstack-operators/openstack-operator-controller-manager-85988dbd5c-sb7sh" Feb 16 23:01:10 crc kubenswrapper[4865]: E0216 23:01:10.735847 4865 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 23:01:10 crc kubenswrapper[4865]: E0216 23:01:10.735942 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24704625-9cce-4f47-847c-ab4d95d3adb1-metrics-certs podName:24704625-9cce-4f47-847c-ab4d95d3adb1 nodeName:}" failed. No retries permitted until 2026-02-16 23:01:18.735918023 +0000 UTC m=+919.059624984 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/24704625-9cce-4f47-847c-ab4d95d3adb1-metrics-certs") pod "openstack-operator-controller-manager-85988dbd5c-sb7sh" (UID: "24704625-9cce-4f47-847c-ab4d95d3adb1") : secret "metrics-server-cert" not found Feb 16 23:01:10 crc kubenswrapper[4865]: E0216 23:01:10.736459 4865 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 23:01:10 crc kubenswrapper[4865]: E0216 23:01:10.736511 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24704625-9cce-4f47-847c-ab4d95d3adb1-webhook-certs podName:24704625-9cce-4f47-847c-ab4d95d3adb1 nodeName:}" failed. No retries permitted until 2026-02-16 23:01:18.736501489 +0000 UTC m=+919.060208450 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/24704625-9cce-4f47-847c-ab4d95d3adb1-webhook-certs") pod "openstack-operator-controller-manager-85988dbd5c-sb7sh" (UID: "24704625-9cce-4f47-847c-ab4d95d3adb1") : secret "webhook-server-cert" not found Feb 16 23:01:15 crc kubenswrapper[4865]: I0216 23:01:15.664136 4865 patch_prober.go:28] interesting pod/machine-config-daemon-7sl6f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 23:01:15 crc kubenswrapper[4865]: I0216 23:01:15.665324 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 23:01:16 crc kubenswrapper[4865]: E0216 23:01:16.435439 4865 log.go:32] "PullImage from image 
service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1" Feb 16 23:01:16 crc kubenswrapper[4865]: E0216 23:01:16.435726 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vrd49,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b4d948c87-stf5r_openstack-operators(68b414dd-a0c6-488a-b253-1a3f477cb7a8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 23:01:16 crc kubenswrapper[4865]: E0216 23:01:16.437104 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-stf5r" podUID="68b414dd-a0c6-488a-b253-1a3f477cb7a8" Feb 16 23:01:16 crc kubenswrapper[4865]: E0216 23:01:16.652334 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-stf5r" podUID="68b414dd-a0c6-488a-b253-1a3f477cb7a8" Feb 16 23:01:17 crc kubenswrapper[4865]: I0216 23:01:17.658784 4865 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-jmmxl" event={"ID":"f1b2a884-8e78-47ac-9c45-7861a81e02d4","Type":"ContainerStarted","Data":"8c7c3c746d40ccdf05aaeaaad134d374495c4f5c65c687a2aff2ba90245c2d12"} Feb 16 23:01:17 crc kubenswrapper[4865]: I0216 23:01:17.659705 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-jmmxl" Feb 16 23:01:17 crc kubenswrapper[4865]: I0216 23:01:17.665844 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-7nt98" event={"ID":"a5025501-39c8-43ae-8b94-3a555517b1f7","Type":"ContainerStarted","Data":"678c42a9ae5e265e0a49a5f01548464fb53e21e6be8f147d5e6ce3e93f174b03"} Feb 16 23:01:17 crc kubenswrapper[4865]: I0216 23:01:17.666239 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-7nt98" Feb 16 23:01:17 crc kubenswrapper[4865]: I0216 23:01:17.673460 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-msxb8" event={"ID":"3a4e76a0-aa8b-4ee8-b7a8-dc43a376c4ec","Type":"ContainerStarted","Data":"d54c5fcb495640c2b5c128a14618b2ea9059cc0fd8260b5fb6a4623426355e16"} Feb 16 23:01:17 crc kubenswrapper[4865]: I0216 23:01:17.673864 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-msxb8" Feb 16 23:01:17 crc kubenswrapper[4865]: I0216 23:01:17.679624 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-wl4zd" event={"ID":"dc7842ab-52e5-4223-8b2a-ab09641bf297","Type":"ContainerStarted","Data":"a819ddb81482596697357dac209b51bd9c86962cf224e8fd416760ef4e6eb8ee"} Feb 16 23:01:17 crc kubenswrapper[4865]: I0216 23:01:17.679974 4865 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-wl4zd" Feb 16 23:01:17 crc kubenswrapper[4865]: I0216 23:01:17.690004 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-946kc" event={"ID":"60e0dd0a-0055-45ec-8a4c-f0c23cd214b6","Type":"ContainerStarted","Data":"ce9a3d34ae8cff551bd25845eacfbf009cb7507dc12c2380208cdd3e90331ab5"} Feb 16 23:01:17 crc kubenswrapper[4865]: I0216 23:01:17.690197 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-946kc" Feb 16 23:01:17 crc kubenswrapper[4865]: I0216 23:01:17.695987 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-jmmxl" podStartSLOduration=3.679285042 podStartE2EDuration="16.695965654s" podCreationTimestamp="2026-02-16 23:01:01 +0000 UTC" firstStartedPulling="2026-02-16 23:01:03.468499129 +0000 UTC m=+903.792206090" lastFinishedPulling="2026-02-16 23:01:16.485179741 +0000 UTC m=+916.808886702" observedRunningTime="2026-02-16 23:01:17.689308806 +0000 UTC m=+918.013015767" watchObservedRunningTime="2026-02-16 23:01:17.695965654 +0000 UTC m=+918.019672615" Feb 16 23:01:17 crc kubenswrapper[4865]: I0216 23:01:17.698055 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-5ntcr" event={"ID":"829ee3ed-5827-46ee-8399-f0b82ffa4d1d","Type":"ContainerStarted","Data":"8682e9ec979e22629fdd091e3bcdf76c06da387906e1ca3ab398e973927386be"} Feb 16 23:01:17 crc kubenswrapper[4865]: I0216 23:01:17.698214 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-5ntcr" Feb 16 23:01:17 crc kubenswrapper[4865]: I0216 23:01:17.703212 4865 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-hw4fs" event={"ID":"a1759f72-1644-42e2-9b67-01478800870b","Type":"ContainerStarted","Data":"5f5a17c94a039cc6c827ec66100d4d7c08e07e6ac12fccc2bb898abe67382b0f"} Feb 16 23:01:17 crc kubenswrapper[4865]: I0216 23:01:17.704165 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-hw4fs" Feb 16 23:01:17 crc kubenswrapper[4865]: I0216 23:01:17.706657 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-xzpc9" event={"ID":"3be11752-93fd-4edc-b100-0bfd29f599e8","Type":"ContainerStarted","Data":"73f76484a7ff82772a2c98b648c63c910265da73611f91c362d1f0aab726cbf7"} Feb 16 23:01:17 crc kubenswrapper[4865]: I0216 23:01:17.706823 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-xzpc9" Feb 16 23:01:17 crc kubenswrapper[4865]: I0216 23:01:17.717373 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-kvmr9" event={"ID":"61ac3013-99f7-4aef-b85d-8675044accc6","Type":"ContainerStarted","Data":"3115e07d71c331db0d035e64a205e88e55d4fc917c920b21a78e4769334d35ae"} Feb 16 23:01:17 crc kubenswrapper[4865]: I0216 23:01:17.718293 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-77987464f4-kvmr9" Feb 16 23:01:17 crc kubenswrapper[4865]: I0216 23:01:17.724085 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-wk47p" event={"ID":"f2e7b18d-0e13-4ef4-a4e2-d10b5f55763b","Type":"ContainerStarted","Data":"ad878e9de107d66812e1b3a139f4eb232498fc484e7dcfc4a28378d3dec8d597"} Feb 16 23:01:17 crc kubenswrapper[4865]: I0216 23:01:17.724546 4865 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-wk47p" Feb 16 23:01:17 crc kubenswrapper[4865]: I0216 23:01:17.732629 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-vc4dc" event={"ID":"73e02b9a-66d8-4fb4-bc3d-13610563b6e4","Type":"ContainerStarted","Data":"2356bd881e37449e479bdaaa7ddf28b7d8237e9f872dfa7ef9f9de8ff934b48a"} Feb 16 23:01:17 crc kubenswrapper[4865]: I0216 23:01:17.732806 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-vc4dc" Feb 16 23:01:17 crc kubenswrapper[4865]: I0216 23:01:17.740640 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-m8rmj" event={"ID":"2b4f33b1-b5a3-4935-8036-deb97cfedfe7","Type":"ContainerStarted","Data":"50b711ec16f01c51d0b2ab5d3c01d01c21544a89473c406919799d20d05f1050"} Feb 16 23:01:17 crc kubenswrapper[4865]: I0216 23:01:17.741245 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-m8rmj" Feb 16 23:01:17 crc kubenswrapper[4865]: I0216 23:01:17.746298 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-wl4zd" podStartSLOduration=3.956489133 podStartE2EDuration="16.746259259s" podCreationTimestamp="2026-02-16 23:01:01 +0000 UTC" firstStartedPulling="2026-02-16 23:01:03.696569897 +0000 UTC m=+904.020276858" lastFinishedPulling="2026-02-16 23:01:16.486339783 +0000 UTC m=+916.810046984" observedRunningTime="2026-02-16 23:01:17.741952428 +0000 UTC m=+918.065659389" watchObservedRunningTime="2026-02-16 23:01:17.746259259 +0000 UTC m=+918.069966220" Feb 16 23:01:17 crc kubenswrapper[4865]: I0216 23:01:17.747218 4865 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-7nt98" podStartSLOduration=3.179503815 podStartE2EDuration="16.747212646s" podCreationTimestamp="2026-02-16 23:01:01 +0000 UTC" firstStartedPulling="2026-02-16 23:01:02.917020777 +0000 UTC m=+903.240727738" lastFinishedPulling="2026-02-16 23:01:16.484729618 +0000 UTC m=+916.808436569" observedRunningTime="2026-02-16 23:01:17.723623212 +0000 UTC m=+918.047330173" watchObservedRunningTime="2026-02-16 23:01:17.747212646 +0000 UTC m=+918.070919607" Feb 16 23:01:17 crc kubenswrapper[4865]: I0216 23:01:17.753200 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-phdd6" event={"ID":"dded450f-3a37-48b0-84fc-1de3c64c1954","Type":"ContainerStarted","Data":"ac023f67397e06c73acce7c4e1175d0d7da0a9a6feba7eb68d4e30722090c3d1"} Feb 16 23:01:17 crc kubenswrapper[4865]: I0216 23:01:17.754122 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-phdd6" Feb 16 23:01:17 crc kubenswrapper[4865]: I0216 23:01:17.777330 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-msxb8" podStartSLOduration=3.4176173260000002 podStartE2EDuration="16.777308964s" podCreationTimestamp="2026-02-16 23:01:01 +0000 UTC" firstStartedPulling="2026-02-16 23:01:03.126726017 +0000 UTC m=+903.450432978" lastFinishedPulling="2026-02-16 23:01:16.486417655 +0000 UTC m=+916.810124616" observedRunningTime="2026-02-16 23:01:17.775841952 +0000 UTC m=+918.099548913" watchObservedRunningTime="2026-02-16 23:01:17.777308964 +0000 UTC m=+918.101015925" Feb 16 23:01:17 crc kubenswrapper[4865]: I0216 23:01:17.839481 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-hw4fs" podStartSLOduration=3.076911174 podStartE2EDuration="15.839461173s" podCreationTimestamp="2026-02-16 23:01:02 +0000 UTC" firstStartedPulling="2026-02-16 23:01:03.724038531 +0000 UTC m=+904.047745492" lastFinishedPulling="2026-02-16 23:01:16.48658852 +0000 UTC m=+916.810295491" observedRunningTime="2026-02-16 23:01:17.835472751 +0000 UTC m=+918.159179712" watchObservedRunningTime="2026-02-16 23:01:17.839461173 +0000 UTC m=+918.163168124" Feb 16 23:01:17 crc kubenswrapper[4865]: I0216 23:01:17.843088 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-77987464f4-kvmr9" podStartSLOduration=3.464125814 podStartE2EDuration="16.843080435s" podCreationTimestamp="2026-02-16 23:01:01 +0000 UTC" firstStartedPulling="2026-02-16 23:01:03.123376102 +0000 UTC m=+903.447083063" lastFinishedPulling="2026-02-16 23:01:16.502330703 +0000 UTC m=+916.826037684" observedRunningTime="2026-02-16 23:01:17.80843987 +0000 UTC m=+918.132146831" watchObservedRunningTime="2026-02-16 23:01:17.843080435 +0000 UTC m=+918.166787396" Feb 16 23:01:17 crc kubenswrapper[4865]: I0216 23:01:17.857541 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-phdd6" podStartSLOduration=3.079174267 podStartE2EDuration="15.857533802s" podCreationTimestamp="2026-02-16 23:01:02 +0000 UTC" firstStartedPulling="2026-02-16 23:01:03.7083983 +0000 UTC m=+904.032105261" lastFinishedPulling="2026-02-16 23:01:16.486757835 +0000 UTC m=+916.810464796" observedRunningTime="2026-02-16 23:01:17.857299355 +0000 UTC m=+918.181006326" watchObservedRunningTime="2026-02-16 23:01:17.857533802 +0000 UTC m=+918.181240753" Feb 16 23:01:17 crc kubenswrapper[4865]: I0216 23:01:17.889677 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/nova-operator-controller-manager-567668f5cf-xzpc9" podStartSLOduration=2.968497287 podStartE2EDuration="15.889657106s" podCreationTimestamp="2026-02-16 23:01:02 +0000 UTC" firstStartedPulling="2026-02-16 23:01:03.565886734 +0000 UTC m=+903.889593695" lastFinishedPulling="2026-02-16 23:01:16.487046513 +0000 UTC m=+916.810753514" observedRunningTime="2026-02-16 23:01:17.888034051 +0000 UTC m=+918.211741012" watchObservedRunningTime="2026-02-16 23:01:17.889657106 +0000 UTC m=+918.213364067" Feb 16 23:01:17 crc kubenswrapper[4865]: I0216 23:01:17.922269 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-m8rmj" podStartSLOduration=3.115123719 podStartE2EDuration="15.922244174s" podCreationTimestamp="2026-02-16 23:01:02 +0000 UTC" firstStartedPulling="2026-02-16 23:01:03.697173164 +0000 UTC m=+904.020880125" lastFinishedPulling="2026-02-16 23:01:16.504293609 +0000 UTC m=+916.828000580" observedRunningTime="2026-02-16 23:01:17.916215434 +0000 UTC m=+918.239922395" watchObservedRunningTime="2026-02-16 23:01:17.922244174 +0000 UTC m=+918.245951145" Feb 16 23:01:17 crc kubenswrapper[4865]: I0216 23:01:17.966835 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-946kc" podStartSLOduration=3.1925430280000002 podStartE2EDuration="15.966801838s" podCreationTimestamp="2026-02-16 23:01:02 +0000 UTC" firstStartedPulling="2026-02-16 23:01:03.710575831 +0000 UTC m=+904.034282792" lastFinishedPulling="2026-02-16 23:01:16.484834611 +0000 UTC m=+916.808541602" observedRunningTime="2026-02-16 23:01:17.94556402 +0000 UTC m=+918.269270981" watchObservedRunningTime="2026-02-16 23:01:17.966801838 +0000 UTC m=+918.290508799" Feb 16 23:01:17 crc kubenswrapper[4865]: I0216 23:01:17.983595 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/812a9f63-a231-495c-9474-0c60929fabff-cert\") pod \"infra-operator-controller-manager-79d975b745-mt6fh\" (UID: \"812a9f63-a231-495c-9474-0c60929fabff\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-mt6fh" Feb 16 23:01:17 crc kubenswrapper[4865]: E0216 23:01:17.983963 4865 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 23:01:17 crc kubenswrapper[4865]: E0216 23:01:17.984051 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/812a9f63-a231-495c-9474-0c60929fabff-cert podName:812a9f63-a231-495c-9474-0c60929fabff nodeName:}" failed. No retries permitted until 2026-02-16 23:01:33.984023513 +0000 UTC m=+934.307730474 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/812a9f63-a231-495c-9474-0c60929fabff-cert") pod "infra-operator-controller-manager-79d975b745-mt6fh" (UID: "812a9f63-a231-495c-9474-0c60929fabff") : secret "infra-operator-webhook-server-cert" not found Feb 16 23:01:18 crc kubenswrapper[4865]: I0216 23:01:18.020600 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-wk47p" podStartSLOduration=3.244154411 podStartE2EDuration="16.020569851s" podCreationTimestamp="2026-02-16 23:01:02 +0000 UTC" firstStartedPulling="2026-02-16 23:01:03.71052123 +0000 UTC m=+904.034228191" lastFinishedPulling="2026-02-16 23:01:16.48693667 +0000 UTC m=+916.810643631" observedRunningTime="2026-02-16 23:01:18.002998457 +0000 UTC m=+918.326705418" watchObservedRunningTime="2026-02-16 23:01:18.020569851 +0000 UTC m=+918.344276812" Feb 16 23:01:18 crc kubenswrapper[4865]: I0216 23:01:18.047130 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-5ntcr" 
podStartSLOduration=3.685367572 podStartE2EDuration="17.047096768s" podCreationTimestamp="2026-02-16 23:01:01 +0000 UTC" firstStartedPulling="2026-02-16 23:01:03.122997402 +0000 UTC m=+903.446704363" lastFinishedPulling="2026-02-16 23:01:16.484726598 +0000 UTC m=+916.808433559" observedRunningTime="2026-02-16 23:01:18.032372094 +0000 UTC m=+918.356079055" watchObservedRunningTime="2026-02-16 23:01:18.047096768 +0000 UTC m=+918.370803729" Feb 16 23:01:18 crc kubenswrapper[4865]: I0216 23:01:18.063395 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-vc4dc" podStartSLOduration=4.047298371 podStartE2EDuration="17.063378606s" podCreationTimestamp="2026-02-16 23:01:01 +0000 UTC" firstStartedPulling="2026-02-16 23:01:03.469366793 +0000 UTC m=+903.793073754" lastFinishedPulling="2026-02-16 23:01:16.485447018 +0000 UTC m=+916.809153989" observedRunningTime="2026-02-16 23:01:18.061948576 +0000 UTC m=+918.385655537" watchObservedRunningTime="2026-02-16 23:01:18.063378606 +0000 UTC m=+918.387085557" Feb 16 23:01:18 crc kubenswrapper[4865]: I0216 23:01:18.288713 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a9614d13-aca5-4ffa-9cc1-dd8767e11ac4-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cts9lz\" (UID: \"a9614d13-aca5-4ffa-9cc1-dd8767e11ac4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cts9lz" Feb 16 23:01:18 crc kubenswrapper[4865]: I0216 23:01:18.317185 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a9614d13-aca5-4ffa-9cc1-dd8767e11ac4-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cts9lz\" (UID: \"a9614d13-aca5-4ffa-9cc1-dd8767e11ac4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cts9lz" Feb 16 23:01:18 
crc kubenswrapper[4865]: I0216 23:01:18.594843 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cts9lz" Feb 16 23:01:19 crc kubenswrapper[4865]: I0216 23:01:19.340862 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/24704625-9cce-4f47-847c-ab4d95d3adb1-webhook-certs\") pod \"openstack-operator-controller-manager-85988dbd5c-sb7sh\" (UID: \"24704625-9cce-4f47-847c-ab4d95d3adb1\") " pod="openstack-operators/openstack-operator-controller-manager-85988dbd5c-sb7sh" Feb 16 23:01:19 crc kubenswrapper[4865]: I0216 23:01:19.341113 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/24704625-9cce-4f47-847c-ab4d95d3adb1-metrics-certs\") pod \"openstack-operator-controller-manager-85988dbd5c-sb7sh\" (UID: \"24704625-9cce-4f47-847c-ab4d95d3adb1\") " pod="openstack-operators/openstack-operator-controller-manager-85988dbd5c-sb7sh" Feb 16 23:01:19 crc kubenswrapper[4865]: E0216 23:01:19.343257 4865 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 23:01:19 crc kubenswrapper[4865]: E0216 23:01:19.343453 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24704625-9cce-4f47-847c-ab4d95d3adb1-webhook-certs podName:24704625-9cce-4f47-847c-ab4d95d3adb1 nodeName:}" failed. No retries permitted until 2026-02-16 23:01:35.343433629 +0000 UTC m=+935.667140590 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/24704625-9cce-4f47-847c-ab4d95d3adb1-webhook-certs") pod "openstack-operator-controller-manager-85988dbd5c-sb7sh" (UID: "24704625-9cce-4f47-847c-ab4d95d3adb1") : secret "webhook-server-cert" not found Feb 16 23:01:19 crc kubenswrapper[4865]: I0216 23:01:19.373234 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/24704625-9cce-4f47-847c-ab4d95d3adb1-metrics-certs\") pod \"openstack-operator-controller-manager-85988dbd5c-sb7sh\" (UID: \"24704625-9cce-4f47-847c-ab4d95d3adb1\") " pod="openstack-operators/openstack-operator-controller-manager-85988dbd5c-sb7sh" Feb 16 23:01:19 crc kubenswrapper[4865]: I0216 23:01:19.823432 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cts9lz"] Feb 16 23:01:20 crc kubenswrapper[4865]: W0216 23:01:20.407133 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9614d13_aca5_4ffa_9cc1_dd8767e11ac4.slice/crio-b3afc5388579d14b4846ed0fdb4ea4a2004613f04e71a82719630a03c75dc56f WatchSource:0}: Error finding container b3afc5388579d14b4846ed0fdb4ea4a2004613f04e71a82719630a03c75dc56f: Status 404 returned error can't find the container with id b3afc5388579d14b4846ed0fdb4ea4a2004613f04e71a82719630a03c75dc56f Feb 16 23:01:20 crc kubenswrapper[4865]: I0216 23:01:20.776498 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cts9lz" event={"ID":"a9614d13-aca5-4ffa-9cc1-dd8767e11ac4","Type":"ContainerStarted","Data":"b3afc5388579d14b4846ed0fdb4ea4a2004613f04e71a82719630a03c75dc56f"} Feb 16 23:01:21 crc kubenswrapper[4865]: I0216 23:01:21.512178 4865 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/certified-operators-6k58q"] Feb 16 23:01:21 crc kubenswrapper[4865]: I0216 23:01:21.514403 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6k58q" Feb 16 23:01:21 crc kubenswrapper[4865]: I0216 23:01:21.528373 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6k58q"] Feb 16 23:01:21 crc kubenswrapper[4865]: I0216 23:01:21.584553 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5bb6955-e6c9-4983-9c6c-9600c1e015fc-catalog-content\") pod \"certified-operators-6k58q\" (UID: \"e5bb6955-e6c9-4983-9c6c-9600c1e015fc\") " pod="openshift-marketplace/certified-operators-6k58q" Feb 16 23:01:21 crc kubenswrapper[4865]: I0216 23:01:21.584643 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7j6mc\" (UniqueName: \"kubernetes.io/projected/e5bb6955-e6c9-4983-9c6c-9600c1e015fc-kube-api-access-7j6mc\") pod \"certified-operators-6k58q\" (UID: \"e5bb6955-e6c9-4983-9c6c-9600c1e015fc\") " pod="openshift-marketplace/certified-operators-6k58q" Feb 16 23:01:21 crc kubenswrapper[4865]: I0216 23:01:21.584688 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5bb6955-e6c9-4983-9c6c-9600c1e015fc-utilities\") pod \"certified-operators-6k58q\" (UID: \"e5bb6955-e6c9-4983-9c6c-9600c1e015fc\") " pod="openshift-marketplace/certified-operators-6k58q" Feb 16 23:01:21 crc kubenswrapper[4865]: I0216 23:01:21.686206 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5bb6955-e6c9-4983-9c6c-9600c1e015fc-catalog-content\") pod \"certified-operators-6k58q\" (UID: 
\"e5bb6955-e6c9-4983-9c6c-9600c1e015fc\") " pod="openshift-marketplace/certified-operators-6k58q" Feb 16 23:01:21 crc kubenswrapper[4865]: I0216 23:01:21.686326 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7j6mc\" (UniqueName: \"kubernetes.io/projected/e5bb6955-e6c9-4983-9c6c-9600c1e015fc-kube-api-access-7j6mc\") pod \"certified-operators-6k58q\" (UID: \"e5bb6955-e6c9-4983-9c6c-9600c1e015fc\") " pod="openshift-marketplace/certified-operators-6k58q" Feb 16 23:01:21 crc kubenswrapper[4865]: I0216 23:01:21.686364 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5bb6955-e6c9-4983-9c6c-9600c1e015fc-utilities\") pod \"certified-operators-6k58q\" (UID: \"e5bb6955-e6c9-4983-9c6c-9600c1e015fc\") " pod="openshift-marketplace/certified-operators-6k58q" Feb 16 23:01:21 crc kubenswrapper[4865]: I0216 23:01:21.687142 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5bb6955-e6c9-4983-9c6c-9600c1e015fc-utilities\") pod \"certified-operators-6k58q\" (UID: \"e5bb6955-e6c9-4983-9c6c-9600c1e015fc\") " pod="openshift-marketplace/certified-operators-6k58q" Feb 16 23:01:21 crc kubenswrapper[4865]: I0216 23:01:21.687472 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5bb6955-e6c9-4983-9c6c-9600c1e015fc-catalog-content\") pod \"certified-operators-6k58q\" (UID: \"e5bb6955-e6c9-4983-9c6c-9600c1e015fc\") " pod="openshift-marketplace/certified-operators-6k58q" Feb 16 23:01:21 crc kubenswrapper[4865]: I0216 23:01:21.715461 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7j6mc\" (UniqueName: \"kubernetes.io/projected/e5bb6955-e6c9-4983-9c6c-9600c1e015fc-kube-api-access-7j6mc\") pod \"certified-operators-6k58q\" (UID: 
\"e5bb6955-e6c9-4983-9c6c-9600c1e015fc\") " pod="openshift-marketplace/certified-operators-6k58q" Feb 16 23:01:21 crc kubenswrapper[4865]: I0216 23:01:21.840506 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6k58q" Feb 16 23:01:22 crc kubenswrapper[4865]: I0216 23:01:22.169184 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-7nt98" Feb 16 23:01:22 crc kubenswrapper[4865]: I0216 23:01:22.199053 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-msxb8" Feb 16 23:01:22 crc kubenswrapper[4865]: I0216 23:01:22.250053 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-5ntcr" Feb 16 23:01:22 crc kubenswrapper[4865]: I0216 23:01:22.259499 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987464f4-kvmr9" Feb 16 23:01:22 crc kubenswrapper[4865]: I0216 23:01:22.291956 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-jmmxl" Feb 16 23:01:22 crc kubenswrapper[4865]: I0216 23:01:22.368538 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-vc4dc" Feb 16 23:01:22 crc kubenswrapper[4865]: I0216 23:01:22.441191 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-wl4zd" Feb 16 23:01:22 crc kubenswrapper[4865]: I0216 23:01:22.503686 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/manila-operator-controller-manager-54f6768c69-phdd6" Feb 16 23:01:22 crc kubenswrapper[4865]: I0216 23:01:22.599633 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-hw4fs" Feb 16 23:01:22 crc kubenswrapper[4865]: I0216 23:01:22.613842 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-wk47p" Feb 16 23:01:22 crc kubenswrapper[4865]: I0216 23:01:22.631728 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-xzpc9" Feb 16 23:01:22 crc kubenswrapper[4865]: I0216 23:01:22.660940 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-m8rmj" Feb 16 23:01:22 crc kubenswrapper[4865]: I0216 23:01:22.700698 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-946kc" Feb 16 23:01:23 crc kubenswrapper[4865]: I0216 23:01:23.255950 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6k58q"] Feb 16 23:01:23 crc kubenswrapper[4865]: W0216 23:01:23.267902 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode5bb6955_e6c9_4983_9c6c_9600c1e015fc.slice/crio-9d48369e4bb65d516126ab579d5c256cf003e5f6c6fb8419ef21b7788d725f3e WatchSource:0}: Error finding container 9d48369e4bb65d516126ab579d5c256cf003e5f6c6fb8419ef21b7788d725f3e: Status 404 returned error can't find the container with id 9d48369e4bb65d516126ab579d5c256cf003e5f6c6fb8419ef21b7788d725f3e Feb 16 23:01:23 crc kubenswrapper[4865]: I0216 23:01:23.810847 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-6k58q" event={"ID":"e5bb6955-e6c9-4983-9c6c-9600c1e015fc","Type":"ContainerStarted","Data":"9d48369e4bb65d516126ab579d5c256cf003e5f6c6fb8419ef21b7788d725f3e"} Feb 16 23:01:24 crc kubenswrapper[4865]: I0216 23:01:24.820615 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-29v4v" event={"ID":"21f8cf30-0215-4501-af0f-ff1220d4252b","Type":"ContainerStarted","Data":"d1e2a679b994ccb33916dd8b20d22d58ce2ed483fda6e51b396c55fca0a5b676"} Feb 16 23:01:24 crc kubenswrapper[4865]: I0216 23:01:24.823531 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-s9vk2" event={"ID":"5d77ae74-7238-4c9f-8ae1-33064d8824c2","Type":"ContainerStarted","Data":"18ce7078bdd80bf0deaff22d11e851909873face3bee6c75220ffa647756c0dd"} Feb 16 23:01:24 crc kubenswrapper[4865]: I0216 23:01:24.826335 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-m62lh" event={"ID":"1cfcc69c-1d21-4b1e-894d-d3ae72c39513","Type":"ContainerStarted","Data":"aa6deb11b8cb05559c22693dab2ed226da1e0e9c9f054858acaf6f488c5b78b1"} Feb 16 23:01:25 crc kubenswrapper[4865]: I0216 23:01:25.836115 4865 generic.go:334] "Generic (PLEG): container finished" podID="e5bb6955-e6c9-4983-9c6c-9600c1e015fc" containerID="c1c9f6d1b60bb4e96def5020792383c0be56e4d073a6a1c1133e166e394224e8" exitCode=0 Feb 16 23:01:25 crc kubenswrapper[4865]: I0216 23:01:25.837013 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6k58q" event={"ID":"e5bb6955-e6c9-4983-9c6c-9600c1e015fc","Type":"ContainerDied","Data":"c1c9f6d1b60bb4e96def5020792383c0be56e4d073a6a1c1133e166e394224e8"} Feb 16 23:01:25 crc kubenswrapper[4865]: I0216 23:01:25.838375 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/test-operator-controller-manager-7866795846-29v4v" Feb 16 23:01:25 crc kubenswrapper[4865]: I0216 23:01:25.882339 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-7866795846-29v4v" podStartSLOduration=4.824535663 podStartE2EDuration="23.882321534s" podCreationTimestamp="2026-02-16 23:01:02 +0000 UTC" firstStartedPulling="2026-02-16 23:01:03.894585677 +0000 UTC m=+904.218292638" lastFinishedPulling="2026-02-16 23:01:22.952371548 +0000 UTC m=+923.276078509" observedRunningTime="2026-02-16 23:01:25.880189994 +0000 UTC m=+926.203896965" watchObservedRunningTime="2026-02-16 23:01:25.882321534 +0000 UTC m=+926.206028495" Feb 16 23:01:25 crc kubenswrapper[4865]: I0216 23:01:25.915668 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68f46476f-s9vk2" podStartSLOduration=4.761437657 podStartE2EDuration="23.915641522s" podCreationTimestamp="2026-02-16 23:01:02 +0000 UTC" firstStartedPulling="2026-02-16 23:01:03.894700691 +0000 UTC m=+904.218407652" lastFinishedPulling="2026-02-16 23:01:23.048904556 +0000 UTC m=+923.372611517" observedRunningTime="2026-02-16 23:01:25.906200256 +0000 UTC m=+926.229907227" watchObservedRunningTime="2026-02-16 23:01:25.915641522 +0000 UTC m=+926.239348503" Feb 16 23:01:25 crc kubenswrapper[4865]: I0216 23:01:25.929580 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-m62lh" podStartSLOduration=4.875653442 podStartE2EDuration="23.929548384s" podCreationTimestamp="2026-02-16 23:01:02 +0000 UTC" firstStartedPulling="2026-02-16 23:01:03.886816458 +0000 UTC m=+904.210523419" lastFinishedPulling="2026-02-16 23:01:22.9407114 +0000 UTC m=+923.264418361" observedRunningTime="2026-02-16 23:01:25.92231774 +0000 UTC m=+926.246024701" watchObservedRunningTime="2026-02-16 
23:01:25.929548384 +0000 UTC m=+926.253255345" Feb 16 23:01:32 crc kubenswrapper[4865]: I0216 23:01:32.749385 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68f46476f-s9vk2" Feb 16 23:01:32 crc kubenswrapper[4865]: I0216 23:01:32.752523 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68f46476f-s9vk2" Feb 16 23:01:32 crc kubenswrapper[4865]: I0216 23:01:32.964917 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7866795846-29v4v" Feb 16 23:01:33 crc kubenswrapper[4865]: I0216 23:01:33.100784 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-m62lh" Feb 16 23:01:33 crc kubenswrapper[4865]: I0216 23:01:33.104511 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-m62lh" Feb 16 23:01:34 crc kubenswrapper[4865]: I0216 23:01:34.044960 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/812a9f63-a231-495c-9474-0c60929fabff-cert\") pod \"infra-operator-controller-manager-79d975b745-mt6fh\" (UID: \"812a9f63-a231-495c-9474-0c60929fabff\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-mt6fh" Feb 16 23:01:34 crc kubenswrapper[4865]: I0216 23:01:34.053866 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/812a9f63-a231-495c-9474-0c60929fabff-cert\") pod \"infra-operator-controller-manager-79d975b745-mt6fh\" (UID: \"812a9f63-a231-495c-9474-0c60929fabff\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-mt6fh" Feb 16 23:01:34 crc kubenswrapper[4865]: I0216 
23:01:34.199651 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-mt6fh" Feb 16 23:01:35 crc kubenswrapper[4865]: I0216 23:01:35.370520 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/24704625-9cce-4f47-847c-ab4d95d3adb1-webhook-certs\") pod \"openstack-operator-controller-manager-85988dbd5c-sb7sh\" (UID: \"24704625-9cce-4f47-847c-ab4d95d3adb1\") " pod="openstack-operators/openstack-operator-controller-manager-85988dbd5c-sb7sh" Feb 16 23:01:35 crc kubenswrapper[4865]: I0216 23:01:35.381001 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/24704625-9cce-4f47-847c-ab4d95d3adb1-webhook-certs\") pod \"openstack-operator-controller-manager-85988dbd5c-sb7sh\" (UID: \"24704625-9cce-4f47-847c-ab4d95d3adb1\") " pod="openstack-operators/openstack-operator-controller-manager-85988dbd5c-sb7sh" Feb 16 23:01:35 crc kubenswrapper[4865]: W0216 23:01:35.565125 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod812a9f63_a231_495c_9474_0c60929fabff.slice/crio-5ae145965fd3f800085f657288e9d62c52801929d52a2a2f0d264fd04d0b59e3 WatchSource:0}: Error finding container 5ae145965fd3f800085f657288e9d62c52801929d52a2a2f0d264fd04d0b59e3: Status 404 returned error can't find the container with id 5ae145965fd3f800085f657288e9d62c52801929d52a2a2f0d264fd04d0b59e3 Feb 16 23:01:35 crc kubenswrapper[4865]: I0216 23:01:35.565845 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-mt6fh"] Feb 16 23:01:35 crc kubenswrapper[4865]: I0216 23:01:35.590527 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-85988dbd5c-sb7sh" Feb 16 23:01:35 crc kubenswrapper[4865]: E0216 23:01:35.781963 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Feb 16 23:01:35 crc kubenswrapper[4865]: E0216 23:01:35.782223 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vd6wn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-zxt7q_openstack-operators(f0d444ee-7bd9-40ed-ab3a-766aa716336c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 23:01:35 crc kubenswrapper[4865]: E0216 23:01:35.783901 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zxt7q" podUID="f0d444ee-7bd9-40ed-ab3a-766aa716336c" Feb 16 23:01:35 crc kubenswrapper[4865]: I0216 23:01:35.844720 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-85988dbd5c-sb7sh"] Feb 16 23:01:35 crc kubenswrapper[4865]: I0216 23:01:35.934908 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-85988dbd5c-sb7sh" 
event={"ID":"24704625-9cce-4f47-847c-ab4d95d3adb1","Type":"ContainerStarted","Data":"bffd4595066473617182dcb43f3a543550f90d4d9ff82e0501f05607098bc338"} Feb 16 23:01:35 crc kubenswrapper[4865]: I0216 23:01:35.936869 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cts9lz" event={"ID":"a9614d13-aca5-4ffa-9cc1-dd8767e11ac4","Type":"ContainerStarted","Data":"3856260762fa040c42b88359f09866f764025d5dd5c271bb474ac07337757435"} Feb 16 23:01:35 crc kubenswrapper[4865]: I0216 23:01:35.938412 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-9tnrt" event={"ID":"196fc76c-2c5d-45ec-8106-4d0a3382d16e","Type":"ContainerStarted","Data":"84ee128dcb60a06bf37587c178a124ff3afc12783181348d584ca1a5f2924e18"} Feb 16 23:01:35 crc kubenswrapper[4865]: I0216 23:01:35.939581 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-mt6fh" event={"ID":"812a9f63-a231-495c-9474-0c60929fabff","Type":"ContainerStarted","Data":"5ae145965fd3f800085f657288e9d62c52801929d52a2a2f0d264fd04d0b59e3"} Feb 16 23:01:35 crc kubenswrapper[4865]: I0216 23:01:35.941193 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-dz8t2" event={"ID":"da795bac-53b5-415b-9297-26e5502fceb8","Type":"ContainerStarted","Data":"c8be250b1782ad375575118e246c2773c549cd42877ea6edb4bcd6cf1a2dbf30"} Feb 16 23:01:36 crc kubenswrapper[4865]: I0216 23:01:36.949116 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6k58q" event={"ID":"e5bb6955-e6c9-4983-9c6c-9600c1e015fc","Type":"ContainerStarted","Data":"1b2c828583713c78f544fd9a865580e72c2b069e61456f9dcb3c04ab345e5cd7"} Feb 16 23:01:36 crc kubenswrapper[4865]: I0216 23:01:36.951333 4865 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-stf5r" event={"ID":"68b414dd-a0c6-488a-b253-1a3f477cb7a8","Type":"ContainerStarted","Data":"d9a1926ac7d8fc266b07fd6b2aec46719547b92563fb659b3d337639f2415828"} Feb 16 23:01:36 crc kubenswrapper[4865]: I0216 23:01:36.951540 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-stf5r" Feb 16 23:01:36 crc kubenswrapper[4865]: I0216 23:01:36.953323 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-85988dbd5c-sb7sh" event={"ID":"24704625-9cce-4f47-847c-ab4d95d3adb1","Type":"ContainerStarted","Data":"d971696f6471a37c67fe6a1718cea6b44ca14ae7069d6cafa9f40ac28bace559"} Feb 16 23:01:36 crc kubenswrapper[4865]: I0216 23:01:36.953355 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cts9lz" Feb 16 23:01:36 crc kubenswrapper[4865]: I0216 23:01:36.953369 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-85988dbd5c-sb7sh" Feb 16 23:01:36 crc kubenswrapper[4865]: I0216 23:01:36.953754 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-9tnrt" Feb 16 23:01:36 crc kubenswrapper[4865]: I0216 23:01:36.954193 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-dz8t2" Feb 16 23:01:37 crc kubenswrapper[4865]: I0216 23:01:37.017497 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-dz8t2" podStartSLOduration=3.606604169 podStartE2EDuration="35.017472682s" podCreationTimestamp="2026-02-16 23:01:02 +0000 UTC" 
firstStartedPulling="2026-02-16 23:01:03.894560857 +0000 UTC m=+904.218267818" lastFinishedPulling="2026-02-16 23:01:35.30542933 +0000 UTC m=+935.629136331" observedRunningTime="2026-02-16 23:01:37.011139624 +0000 UTC m=+937.334846585" watchObservedRunningTime="2026-02-16 23:01:37.017472682 +0000 UTC m=+937.341179643" Feb 16 23:01:37 crc kubenswrapper[4865]: I0216 23:01:37.035519 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-9tnrt" podStartSLOduration=3.650893096 podStartE2EDuration="35.035487539s" podCreationTimestamp="2026-02-16 23:01:02 +0000 UTC" firstStartedPulling="2026-02-16 23:01:03.89717338 +0000 UTC m=+904.220880341" lastFinishedPulling="2026-02-16 23:01:35.281767813 +0000 UTC m=+935.605474784" observedRunningTime="2026-02-16 23:01:37.032995459 +0000 UTC m=+937.356702430" watchObservedRunningTime="2026-02-16 23:01:37.035487539 +0000 UTC m=+937.359194500" Feb 16 23:01:37 crc kubenswrapper[4865]: I0216 23:01:37.057132 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-stf5r" podStartSLOduration=2.984150718 podStartE2EDuration="36.057108628s" podCreationTimestamp="2026-02-16 23:01:01 +0000 UTC" firstStartedPulling="2026-02-16 23:01:03.575107813 +0000 UTC m=+903.898814784" lastFinishedPulling="2026-02-16 23:01:36.648065693 +0000 UTC m=+936.971772694" observedRunningTime="2026-02-16 23:01:37.049799942 +0000 UTC m=+937.373506903" watchObservedRunningTime="2026-02-16 23:01:37.057108628 +0000 UTC m=+937.380815589" Feb 16 23:01:37 crc kubenswrapper[4865]: I0216 23:01:37.093262 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-85988dbd5c-sb7sh" podStartSLOduration=35.093221924 podStartE2EDuration="35.093221924s" podCreationTimestamp="2026-02-16 23:01:02 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 23:01:37.08310818 +0000 UTC m=+937.406815141" watchObservedRunningTime="2026-02-16 23:01:37.093221924 +0000 UTC m=+937.416928885" Feb 16 23:01:37 crc kubenswrapper[4865]: I0216 23:01:37.122821 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cts9lz" podStartSLOduration=20.253148505 podStartE2EDuration="35.122798157s" podCreationTimestamp="2026-02-16 23:01:02 +0000 UTC" firstStartedPulling="2026-02-16 23:01:20.410882137 +0000 UTC m=+920.734589098" lastFinishedPulling="2026-02-16 23:01:35.280531759 +0000 UTC m=+935.604238750" observedRunningTime="2026-02-16 23:01:37.1175707 +0000 UTC m=+937.441277671" watchObservedRunningTime="2026-02-16 23:01:37.122798157 +0000 UTC m=+937.446505118" Feb 16 23:01:37 crc kubenswrapper[4865]: I0216 23:01:37.963613 4865 generic.go:334] "Generic (PLEG): container finished" podID="e5bb6955-e6c9-4983-9c6c-9600c1e015fc" containerID="1b2c828583713c78f544fd9a865580e72c2b069e61456f9dcb3c04ab345e5cd7" exitCode=0 Feb 16 23:01:37 crc kubenswrapper[4865]: I0216 23:01:37.963751 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6k58q" event={"ID":"e5bb6955-e6c9-4983-9c6c-9600c1e015fc","Type":"ContainerDied","Data":"1b2c828583713c78f544fd9a865580e72c2b069e61456f9dcb3c04ab345e5cd7"} Feb 16 23:01:38 crc kubenswrapper[4865]: I0216 23:01:38.974986 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6k58q" event={"ID":"e5bb6955-e6c9-4983-9c6c-9600c1e015fc","Type":"ContainerStarted","Data":"7adb975b284984b2eb202b20c5a425939d2cb1062720098bd9ceb9d5151a0424"} Feb 16 23:01:38 crc kubenswrapper[4865]: I0216 23:01:38.977782 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/infra-operator-controller-manager-79d975b745-mt6fh" event={"ID":"812a9f63-a231-495c-9474-0c60929fabff","Type":"ContainerStarted","Data":"a44b406c42694139bea6f189bc4e08bd4fc0a34b7bfee12f755bd0a28c9bd9a7"} Feb 16 23:01:38 crc kubenswrapper[4865]: I0216 23:01:38.978085 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79d975b745-mt6fh" Feb 16 23:01:39 crc kubenswrapper[4865]: I0216 23:01:39.000300 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6k58q" podStartSLOduration=5.818654392 podStartE2EDuration="18.000257047s" podCreationTimestamp="2026-02-16 23:01:21 +0000 UTC" firstStartedPulling="2026-02-16 23:01:26.481739078 +0000 UTC m=+926.805446039" lastFinishedPulling="2026-02-16 23:01:38.663341723 +0000 UTC m=+938.987048694" observedRunningTime="2026-02-16 23:01:38.995314768 +0000 UTC m=+939.319021739" watchObservedRunningTime="2026-02-16 23:01:39.000257047 +0000 UTC m=+939.323964038" Feb 16 23:01:39 crc kubenswrapper[4865]: I0216 23:01:39.014149 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79d975b745-mt6fh" podStartSLOduration=35.235897052 podStartE2EDuration="38.014117647s" podCreationTimestamp="2026-02-16 23:01:01 +0000 UTC" firstStartedPulling="2026-02-16 23:01:35.569078461 +0000 UTC m=+935.892785422" lastFinishedPulling="2026-02-16 23:01:38.347299056 +0000 UTC m=+938.671006017" observedRunningTime="2026-02-16 23:01:39.012069579 +0000 UTC m=+939.335776560" watchObservedRunningTime="2026-02-16 23:01:39.014117647 +0000 UTC m=+939.337824608" Feb 16 23:01:41 crc kubenswrapper[4865]: I0216 23:01:41.841748 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6k58q" Feb 16 23:01:41 crc kubenswrapper[4865]: I0216 23:01:41.842637 4865 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6k58q" Feb 16 23:01:41 crc kubenswrapper[4865]: I0216 23:01:41.921009 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6k58q" Feb 16 23:01:42 crc kubenswrapper[4865]: I0216 23:01:42.448548 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-stf5r" Feb 16 23:01:42 crc kubenswrapper[4865]: I0216 23:01:42.793630 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-9tnrt" Feb 16 23:01:42 crc kubenswrapper[4865]: I0216 23:01:42.937140 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-dz8t2" Feb 16 23:01:43 crc kubenswrapper[4865]: I0216 23:01:43.075524 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6k58q" Feb 16 23:01:43 crc kubenswrapper[4865]: I0216 23:01:43.126601 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6k58q"] Feb 16 23:01:44 crc kubenswrapper[4865]: I0216 23:01:44.213535 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79d975b745-mt6fh" Feb 16 23:01:45 crc kubenswrapper[4865]: I0216 23:01:45.037516 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6k58q" podUID="e5bb6955-e6c9-4983-9c6c-9600c1e015fc" containerName="registry-server" containerID="cri-o://7adb975b284984b2eb202b20c5a425939d2cb1062720098bd9ceb9d5151a0424" gracePeriod=2 Feb 16 23:01:45 crc kubenswrapper[4865]: I0216 23:01:45.534006 4865 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6k58q" Feb 16 23:01:45 crc kubenswrapper[4865]: I0216 23:01:45.591562 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7j6mc\" (UniqueName: \"kubernetes.io/projected/e5bb6955-e6c9-4983-9c6c-9600c1e015fc-kube-api-access-7j6mc\") pod \"e5bb6955-e6c9-4983-9c6c-9600c1e015fc\" (UID: \"e5bb6955-e6c9-4983-9c6c-9600c1e015fc\") " Feb 16 23:01:45 crc kubenswrapper[4865]: I0216 23:01:45.591686 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5bb6955-e6c9-4983-9c6c-9600c1e015fc-catalog-content\") pod \"e5bb6955-e6c9-4983-9c6c-9600c1e015fc\" (UID: \"e5bb6955-e6c9-4983-9c6c-9600c1e015fc\") " Feb 16 23:01:45 crc kubenswrapper[4865]: I0216 23:01:45.591776 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5bb6955-e6c9-4983-9c6c-9600c1e015fc-utilities\") pod \"e5bb6955-e6c9-4983-9c6c-9600c1e015fc\" (UID: \"e5bb6955-e6c9-4983-9c6c-9600c1e015fc\") " Feb 16 23:01:45 crc kubenswrapper[4865]: I0216 23:01:45.603398 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-85988dbd5c-sb7sh" Feb 16 23:01:45 crc kubenswrapper[4865]: I0216 23:01:45.604039 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5bb6955-e6c9-4983-9c6c-9600c1e015fc-kube-api-access-7j6mc" (OuterVolumeSpecName: "kube-api-access-7j6mc") pod "e5bb6955-e6c9-4983-9c6c-9600c1e015fc" (UID: "e5bb6955-e6c9-4983-9c6c-9600c1e015fc"). InnerVolumeSpecName "kube-api-access-7j6mc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:01:45 crc kubenswrapper[4865]: I0216 23:01:45.606583 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5bb6955-e6c9-4983-9c6c-9600c1e015fc-utilities" (OuterVolumeSpecName: "utilities") pod "e5bb6955-e6c9-4983-9c6c-9600c1e015fc" (UID: "e5bb6955-e6c9-4983-9c6c-9600c1e015fc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:01:45 crc kubenswrapper[4865]: I0216 23:01:45.664713 4865 patch_prober.go:28] interesting pod/machine-config-daemon-7sl6f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 23:01:45 crc kubenswrapper[4865]: I0216 23:01:45.664796 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 23:01:45 crc kubenswrapper[4865]: I0216 23:01:45.689985 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5bb6955-e6c9-4983-9c6c-9600c1e015fc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e5bb6955-e6c9-4983-9c6c-9600c1e015fc" (UID: "e5bb6955-e6c9-4983-9c6c-9600c1e015fc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:01:45 crc kubenswrapper[4865]: I0216 23:01:45.695769 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7j6mc\" (UniqueName: \"kubernetes.io/projected/e5bb6955-e6c9-4983-9c6c-9600c1e015fc-kube-api-access-7j6mc\") on node \"crc\" DevicePath \"\"" Feb 16 23:01:45 crc kubenswrapper[4865]: I0216 23:01:45.695820 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5bb6955-e6c9-4983-9c6c-9600c1e015fc-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 23:01:45 crc kubenswrapper[4865]: I0216 23:01:45.695837 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5bb6955-e6c9-4983-9c6c-9600c1e015fc-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 23:01:46 crc kubenswrapper[4865]: I0216 23:01:46.050928 4865 generic.go:334] "Generic (PLEG): container finished" podID="e5bb6955-e6c9-4983-9c6c-9600c1e015fc" containerID="7adb975b284984b2eb202b20c5a425939d2cb1062720098bd9ceb9d5151a0424" exitCode=0 Feb 16 23:01:46 crc kubenswrapper[4865]: I0216 23:01:46.051004 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6k58q" event={"ID":"e5bb6955-e6c9-4983-9c6c-9600c1e015fc","Type":"ContainerDied","Data":"7adb975b284984b2eb202b20c5a425939d2cb1062720098bd9ceb9d5151a0424"} Feb 16 23:01:46 crc kubenswrapper[4865]: I0216 23:01:46.051082 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6k58q" Feb 16 23:01:46 crc kubenswrapper[4865]: I0216 23:01:46.051130 4865 scope.go:117] "RemoveContainer" containerID="7adb975b284984b2eb202b20c5a425939d2cb1062720098bd9ceb9d5151a0424" Feb 16 23:01:46 crc kubenswrapper[4865]: I0216 23:01:46.051105 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6k58q" event={"ID":"e5bb6955-e6c9-4983-9c6c-9600c1e015fc","Type":"ContainerDied","Data":"9d48369e4bb65d516126ab579d5c256cf003e5f6c6fb8419ef21b7788d725f3e"} Feb 16 23:01:46 crc kubenswrapper[4865]: I0216 23:01:46.086620 4865 scope.go:117] "RemoveContainer" containerID="1b2c828583713c78f544fd9a865580e72c2b069e61456f9dcb3c04ab345e5cd7" Feb 16 23:01:46 crc kubenswrapper[4865]: I0216 23:01:46.108009 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6k58q"] Feb 16 23:01:46 crc kubenswrapper[4865]: I0216 23:01:46.121558 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6k58q"] Feb 16 23:01:46 crc kubenswrapper[4865]: I0216 23:01:46.155973 4865 scope.go:117] "RemoveContainer" containerID="c1c9f6d1b60bb4e96def5020792383c0be56e4d073a6a1c1133e166e394224e8" Feb 16 23:01:46 crc kubenswrapper[4865]: I0216 23:01:46.179511 4865 scope.go:117] "RemoveContainer" containerID="7adb975b284984b2eb202b20c5a425939d2cb1062720098bd9ceb9d5151a0424" Feb 16 23:01:46 crc kubenswrapper[4865]: E0216 23:01:46.180048 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7adb975b284984b2eb202b20c5a425939d2cb1062720098bd9ceb9d5151a0424\": container with ID starting with 7adb975b284984b2eb202b20c5a425939d2cb1062720098bd9ceb9d5151a0424 not found: ID does not exist" containerID="7adb975b284984b2eb202b20c5a425939d2cb1062720098bd9ceb9d5151a0424" Feb 16 23:01:46 crc kubenswrapper[4865]: I0216 23:01:46.180144 4865 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7adb975b284984b2eb202b20c5a425939d2cb1062720098bd9ceb9d5151a0424"} err="failed to get container status \"7adb975b284984b2eb202b20c5a425939d2cb1062720098bd9ceb9d5151a0424\": rpc error: code = NotFound desc = could not find container \"7adb975b284984b2eb202b20c5a425939d2cb1062720098bd9ceb9d5151a0424\": container with ID starting with 7adb975b284984b2eb202b20c5a425939d2cb1062720098bd9ceb9d5151a0424 not found: ID does not exist" Feb 16 23:01:46 crc kubenswrapper[4865]: I0216 23:01:46.180212 4865 scope.go:117] "RemoveContainer" containerID="1b2c828583713c78f544fd9a865580e72c2b069e61456f9dcb3c04ab345e5cd7" Feb 16 23:01:46 crc kubenswrapper[4865]: E0216 23:01:46.180788 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b2c828583713c78f544fd9a865580e72c2b069e61456f9dcb3c04ab345e5cd7\": container with ID starting with 1b2c828583713c78f544fd9a865580e72c2b069e61456f9dcb3c04ab345e5cd7 not found: ID does not exist" containerID="1b2c828583713c78f544fd9a865580e72c2b069e61456f9dcb3c04ab345e5cd7" Feb 16 23:01:46 crc kubenswrapper[4865]: I0216 23:01:46.180843 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b2c828583713c78f544fd9a865580e72c2b069e61456f9dcb3c04ab345e5cd7"} err="failed to get container status \"1b2c828583713c78f544fd9a865580e72c2b069e61456f9dcb3c04ab345e5cd7\": rpc error: code = NotFound desc = could not find container \"1b2c828583713c78f544fd9a865580e72c2b069e61456f9dcb3c04ab345e5cd7\": container with ID starting with 1b2c828583713c78f544fd9a865580e72c2b069e61456f9dcb3c04ab345e5cd7 not found: ID does not exist" Feb 16 23:01:46 crc kubenswrapper[4865]: I0216 23:01:46.180890 4865 scope.go:117] "RemoveContainer" containerID="c1c9f6d1b60bb4e96def5020792383c0be56e4d073a6a1c1133e166e394224e8" Feb 16 23:01:46 crc kubenswrapper[4865]: E0216 
23:01:46.181397 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1c9f6d1b60bb4e96def5020792383c0be56e4d073a6a1c1133e166e394224e8\": container with ID starting with c1c9f6d1b60bb4e96def5020792383c0be56e4d073a6a1c1133e166e394224e8 not found: ID does not exist" containerID="c1c9f6d1b60bb4e96def5020792383c0be56e4d073a6a1c1133e166e394224e8" Feb 16 23:01:46 crc kubenswrapper[4865]: I0216 23:01:46.181446 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1c9f6d1b60bb4e96def5020792383c0be56e4d073a6a1c1133e166e394224e8"} err="failed to get container status \"c1c9f6d1b60bb4e96def5020792383c0be56e4d073a6a1c1133e166e394224e8\": rpc error: code = NotFound desc = could not find container \"c1c9f6d1b60bb4e96def5020792383c0be56e4d073a6a1c1133e166e394224e8\": container with ID starting with c1c9f6d1b60bb4e96def5020792383c0be56e4d073a6a1c1133e166e394224e8 not found: ID does not exist" Feb 16 23:01:46 crc kubenswrapper[4865]: I0216 23:01:46.429404 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5bb6955-e6c9-4983-9c6c-9600c1e015fc" path="/var/lib/kubelet/pods/e5bb6955-e6c9-4983-9c6c-9600c1e015fc/volumes" Feb 16 23:01:48 crc kubenswrapper[4865]: I0216 23:01:48.607659 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cts9lz" Feb 16 23:01:51 crc kubenswrapper[4865]: E0216 23:01:51.419162 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zxt7q" podUID="f0d444ee-7bd9-40ed-ab3a-766aa716336c" Feb 16 23:02:03 crc 
kubenswrapper[4865]: I0216 23:02:03.417460 4865 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 23:02:04 crc kubenswrapper[4865]: I0216 23:02:04.229363 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zxt7q" event={"ID":"f0d444ee-7bd9-40ed-ab3a-766aa716336c","Type":"ContainerStarted","Data":"662bcee2630b280ae0145e970b1057329ce228167578f4f96a44f44b54aabb8c"} Feb 16 23:02:04 crc kubenswrapper[4865]: I0216 23:02:04.250795 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zxt7q" podStartSLOduration=2.260496218 podStartE2EDuration="1m2.250771874s" podCreationTimestamp="2026-02-16 23:01:02 +0000 UTC" firstStartedPulling="2026-02-16 23:01:03.901310397 +0000 UTC m=+904.225017358" lastFinishedPulling="2026-02-16 23:02:03.891586053 +0000 UTC m=+964.215293014" observedRunningTime="2026-02-16 23:02:04.246118143 +0000 UTC m=+964.569825144" watchObservedRunningTime="2026-02-16 23:02:04.250771874 +0000 UTC m=+964.574478845" Feb 16 23:02:15 crc kubenswrapper[4865]: I0216 23:02:15.664521 4865 patch_prober.go:28] interesting pod/machine-config-daemon-7sl6f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 23:02:15 crc kubenswrapper[4865]: I0216 23:02:15.665355 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 23:02:15 crc kubenswrapper[4865]: I0216 23:02:15.665435 4865 kubelet.go:2542] "SyncLoop (probe)" 
probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" Feb 16 23:02:15 crc kubenswrapper[4865]: I0216 23:02:15.666442 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"01785d10a7bb373f66f6092d65fa6901ba6fc8e22f69baf647bf50d5be8dbeb3"} pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 23:02:15 crc kubenswrapper[4865]: I0216 23:02:15.666524 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" containerName="machine-config-daemon" containerID="cri-o://01785d10a7bb373f66f6092d65fa6901ba6fc8e22f69baf647bf50d5be8dbeb3" gracePeriod=600 Feb 16 23:02:16 crc kubenswrapper[4865]: I0216 23:02:16.355073 4865 generic.go:334] "Generic (PLEG): container finished" podID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" containerID="01785d10a7bb373f66f6092d65fa6901ba6fc8e22f69baf647bf50d5be8dbeb3" exitCode=0 Feb 16 23:02:16 crc kubenswrapper[4865]: I0216 23:02:16.355132 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" event={"ID":"af5ee041-5763-4a28-9d12-7ba21bbb9dbc","Type":"ContainerDied","Data":"01785d10a7bb373f66f6092d65fa6901ba6fc8e22f69baf647bf50d5be8dbeb3"} Feb 16 23:02:16 crc kubenswrapper[4865]: I0216 23:02:16.355747 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" event={"ID":"af5ee041-5763-4a28-9d12-7ba21bbb9dbc","Type":"ContainerStarted","Data":"235d0a0989c84c71f23d2f482cbde8cbac1989d3cd7dfef51dabc7d92db7c3f0"} Feb 16 23:02:16 crc kubenswrapper[4865]: I0216 23:02:16.355788 4865 scope.go:117] "RemoveContainer" 
containerID="d2cb7613b13b28970e25e6a68bc3fc59b2c15f74fd56553d326fa4f7962e6c46" Feb 16 23:02:21 crc kubenswrapper[4865]: I0216 23:02:21.614711 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-mgvsj"] Feb 16 23:02:21 crc kubenswrapper[4865]: E0216 23:02:21.615812 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5bb6955-e6c9-4983-9c6c-9600c1e015fc" containerName="extract-content" Feb 16 23:02:21 crc kubenswrapper[4865]: I0216 23:02:21.615825 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5bb6955-e6c9-4983-9c6c-9600c1e015fc" containerName="extract-content" Feb 16 23:02:21 crc kubenswrapper[4865]: E0216 23:02:21.615839 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5bb6955-e6c9-4983-9c6c-9600c1e015fc" containerName="extract-utilities" Feb 16 23:02:21 crc kubenswrapper[4865]: I0216 23:02:21.615845 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5bb6955-e6c9-4983-9c6c-9600c1e015fc" containerName="extract-utilities" Feb 16 23:02:21 crc kubenswrapper[4865]: E0216 23:02:21.615872 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5bb6955-e6c9-4983-9c6c-9600c1e015fc" containerName="registry-server" Feb 16 23:02:21 crc kubenswrapper[4865]: I0216 23:02:21.615879 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5bb6955-e6c9-4983-9c6c-9600c1e015fc" containerName="registry-server" Feb 16 23:02:21 crc kubenswrapper[4865]: I0216 23:02:21.616005 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5bb6955-e6c9-4983-9c6c-9600c1e015fc" containerName="registry-server" Feb 16 23:02:21 crc kubenswrapper[4865]: I0216 23:02:21.657168 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-mgvsj"] Feb 16 23:02:21 crc kubenswrapper[4865]: I0216 23:02:21.657375 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-mgvsj" Feb 16 23:02:21 crc kubenswrapper[4865]: I0216 23:02:21.670506 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 16 23:02:21 crc kubenswrapper[4865]: I0216 23:02:21.670665 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 16 23:02:21 crc kubenswrapper[4865]: I0216 23:02:21.672489 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 16 23:02:21 crc kubenswrapper[4865]: I0216 23:02:21.673230 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-wfk7f" Feb 16 23:02:21 crc kubenswrapper[4865]: I0216 23:02:21.693716 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-sj5b5"] Feb 16 23:02:21 crc kubenswrapper[4865]: I0216 23:02:21.696799 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-sj5b5" Feb 16 23:02:21 crc kubenswrapper[4865]: I0216 23:02:21.701682 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 16 23:02:21 crc kubenswrapper[4865]: I0216 23:02:21.729387 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-sj5b5"] Feb 16 23:02:21 crc kubenswrapper[4865]: I0216 23:02:21.810676 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46f8a691-42b2-47f6-9bf1-034f21f8e91c-config\") pod \"dnsmasq-dns-78dd6ddcc-sj5b5\" (UID: \"46f8a691-42b2-47f6-9bf1-034f21f8e91c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-sj5b5" Feb 16 23:02:21 crc kubenswrapper[4865]: I0216 23:02:21.810747 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sl4kp\" (UniqueName: \"kubernetes.io/projected/fba4661a-c5f3-4822-ad11-3b755cbdac28-kube-api-access-sl4kp\") pod \"dnsmasq-dns-675f4bcbfc-mgvsj\" (UID: \"fba4661a-c5f3-4822-ad11-3b755cbdac28\") " pod="openstack/dnsmasq-dns-675f4bcbfc-mgvsj" Feb 16 23:02:21 crc kubenswrapper[4865]: I0216 23:02:21.810995 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzd85\" (UniqueName: \"kubernetes.io/projected/46f8a691-42b2-47f6-9bf1-034f21f8e91c-kube-api-access-rzd85\") pod \"dnsmasq-dns-78dd6ddcc-sj5b5\" (UID: \"46f8a691-42b2-47f6-9bf1-034f21f8e91c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-sj5b5" Feb 16 23:02:21 crc kubenswrapper[4865]: I0216 23:02:21.811070 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fba4661a-c5f3-4822-ad11-3b755cbdac28-config\") pod \"dnsmasq-dns-675f4bcbfc-mgvsj\" (UID: \"fba4661a-c5f3-4822-ad11-3b755cbdac28\") " 
pod="openstack/dnsmasq-dns-675f4bcbfc-mgvsj" Feb 16 23:02:21 crc kubenswrapper[4865]: I0216 23:02:21.811388 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46f8a691-42b2-47f6-9bf1-034f21f8e91c-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-sj5b5\" (UID: \"46f8a691-42b2-47f6-9bf1-034f21f8e91c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-sj5b5" Feb 16 23:02:21 crc kubenswrapper[4865]: I0216 23:02:21.912715 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46f8a691-42b2-47f6-9bf1-034f21f8e91c-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-sj5b5\" (UID: \"46f8a691-42b2-47f6-9bf1-034f21f8e91c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-sj5b5" Feb 16 23:02:21 crc kubenswrapper[4865]: I0216 23:02:21.912807 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46f8a691-42b2-47f6-9bf1-034f21f8e91c-config\") pod \"dnsmasq-dns-78dd6ddcc-sj5b5\" (UID: \"46f8a691-42b2-47f6-9bf1-034f21f8e91c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-sj5b5" Feb 16 23:02:21 crc kubenswrapper[4865]: I0216 23:02:21.912841 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sl4kp\" (UniqueName: \"kubernetes.io/projected/fba4661a-c5f3-4822-ad11-3b755cbdac28-kube-api-access-sl4kp\") pod \"dnsmasq-dns-675f4bcbfc-mgvsj\" (UID: \"fba4661a-c5f3-4822-ad11-3b755cbdac28\") " pod="openstack/dnsmasq-dns-675f4bcbfc-mgvsj" Feb 16 23:02:21 crc kubenswrapper[4865]: I0216 23:02:21.912868 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzd85\" (UniqueName: \"kubernetes.io/projected/46f8a691-42b2-47f6-9bf1-034f21f8e91c-kube-api-access-rzd85\") pod \"dnsmasq-dns-78dd6ddcc-sj5b5\" (UID: \"46f8a691-42b2-47f6-9bf1-034f21f8e91c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-sj5b5" Feb 16 
23:02:21 crc kubenswrapper[4865]: I0216 23:02:21.912887 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fba4661a-c5f3-4822-ad11-3b755cbdac28-config\") pod \"dnsmasq-dns-675f4bcbfc-mgvsj\" (UID: \"fba4661a-c5f3-4822-ad11-3b755cbdac28\") " pod="openstack/dnsmasq-dns-675f4bcbfc-mgvsj" Feb 16 23:02:21 crc kubenswrapper[4865]: I0216 23:02:21.914008 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46f8a691-42b2-47f6-9bf1-034f21f8e91c-config\") pod \"dnsmasq-dns-78dd6ddcc-sj5b5\" (UID: \"46f8a691-42b2-47f6-9bf1-034f21f8e91c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-sj5b5" Feb 16 23:02:21 crc kubenswrapper[4865]: I0216 23:02:21.914017 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46f8a691-42b2-47f6-9bf1-034f21f8e91c-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-sj5b5\" (UID: \"46f8a691-42b2-47f6-9bf1-034f21f8e91c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-sj5b5" Feb 16 23:02:21 crc kubenswrapper[4865]: I0216 23:02:21.914669 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fba4661a-c5f3-4822-ad11-3b755cbdac28-config\") pod \"dnsmasq-dns-675f4bcbfc-mgvsj\" (UID: \"fba4661a-c5f3-4822-ad11-3b755cbdac28\") " pod="openstack/dnsmasq-dns-675f4bcbfc-mgvsj" Feb 16 23:02:21 crc kubenswrapper[4865]: I0216 23:02:21.934268 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzd85\" (UniqueName: \"kubernetes.io/projected/46f8a691-42b2-47f6-9bf1-034f21f8e91c-kube-api-access-rzd85\") pod \"dnsmasq-dns-78dd6ddcc-sj5b5\" (UID: \"46f8a691-42b2-47f6-9bf1-034f21f8e91c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-sj5b5" Feb 16 23:02:21 crc kubenswrapper[4865]: I0216 23:02:21.934947 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-sl4kp\" (UniqueName: \"kubernetes.io/projected/fba4661a-c5f3-4822-ad11-3b755cbdac28-kube-api-access-sl4kp\") pod \"dnsmasq-dns-675f4bcbfc-mgvsj\" (UID: \"fba4661a-c5f3-4822-ad11-3b755cbdac28\") " pod="openstack/dnsmasq-dns-675f4bcbfc-mgvsj" Feb 16 23:02:21 crc kubenswrapper[4865]: I0216 23:02:21.999842 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-mgvsj" Feb 16 23:02:22 crc kubenswrapper[4865]: I0216 23:02:22.016246 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-sj5b5" Feb 16 23:02:22 crc kubenswrapper[4865]: I0216 23:02:22.387505 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-mgvsj"] Feb 16 23:02:22 crc kubenswrapper[4865]: W0216 23:02:22.388400 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfba4661a_c5f3_4822_ad11_3b755cbdac28.slice/crio-a4852e91b0004bd24c60dabff7368c1c2fe57ebe36fa474305e42de5e45266f2 WatchSource:0}: Error finding container a4852e91b0004bd24c60dabff7368c1c2fe57ebe36fa474305e42de5e45266f2: Status 404 returned error can't find the container with id a4852e91b0004bd24c60dabff7368c1c2fe57ebe36fa474305e42de5e45266f2 Feb 16 23:02:22 crc kubenswrapper[4865]: I0216 23:02:22.424572 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-mgvsj" event={"ID":"fba4661a-c5f3-4822-ad11-3b755cbdac28","Type":"ContainerStarted","Data":"a4852e91b0004bd24c60dabff7368c1c2fe57ebe36fa474305e42de5e45266f2"} Feb 16 23:02:22 crc kubenswrapper[4865]: I0216 23:02:22.563707 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-sj5b5"] Feb 16 23:02:22 crc kubenswrapper[4865]: W0216 23:02:22.565973 4865 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod46f8a691_42b2_47f6_9bf1_034f21f8e91c.slice/crio-1c00a0039abd34dbcdabb6ac412944d44e77957bb22f02e066df1715636acefe WatchSource:0}: Error finding container 1c00a0039abd34dbcdabb6ac412944d44e77957bb22f02e066df1715636acefe: Status 404 returned error can't find the container with id 1c00a0039abd34dbcdabb6ac412944d44e77957bb22f02e066df1715636acefe Feb 16 23:02:23 crc kubenswrapper[4865]: I0216 23:02:23.429593 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-sj5b5" event={"ID":"46f8a691-42b2-47f6-9bf1-034f21f8e91c","Type":"ContainerStarted","Data":"1c00a0039abd34dbcdabb6ac412944d44e77957bb22f02e066df1715636acefe"} Feb 16 23:02:24 crc kubenswrapper[4865]: I0216 23:02:24.436833 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-mgvsj"] Feb 16 23:02:24 crc kubenswrapper[4865]: I0216 23:02:24.452696 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-n4hbr"] Feb 16 23:02:24 crc kubenswrapper[4865]: I0216 23:02:24.454017 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-n4hbr" Feb 16 23:02:24 crc kubenswrapper[4865]: I0216 23:02:24.473776 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-n4hbr"] Feb 16 23:02:24 crc kubenswrapper[4865]: I0216 23:02:24.564040 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a65950b-74f6-4519-a835-53c4a1ea0189-config\") pod \"dnsmasq-dns-666b6646f7-n4hbr\" (UID: \"1a65950b-74f6-4519-a835-53c4a1ea0189\") " pod="openstack/dnsmasq-dns-666b6646f7-n4hbr" Feb 16 23:02:24 crc kubenswrapper[4865]: I0216 23:02:24.564568 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1a65950b-74f6-4519-a835-53c4a1ea0189-dns-svc\") pod \"dnsmasq-dns-666b6646f7-n4hbr\" (UID: \"1a65950b-74f6-4519-a835-53c4a1ea0189\") " pod="openstack/dnsmasq-dns-666b6646f7-n4hbr" Feb 16 23:02:24 crc kubenswrapper[4865]: I0216 23:02:24.564621 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgb6g\" (UniqueName: \"kubernetes.io/projected/1a65950b-74f6-4519-a835-53c4a1ea0189-kube-api-access-zgb6g\") pod \"dnsmasq-dns-666b6646f7-n4hbr\" (UID: \"1a65950b-74f6-4519-a835-53c4a1ea0189\") " pod="openstack/dnsmasq-dns-666b6646f7-n4hbr" Feb 16 23:02:24 crc kubenswrapper[4865]: I0216 23:02:24.668975 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a65950b-74f6-4519-a835-53c4a1ea0189-config\") pod \"dnsmasq-dns-666b6646f7-n4hbr\" (UID: \"1a65950b-74f6-4519-a835-53c4a1ea0189\") " pod="openstack/dnsmasq-dns-666b6646f7-n4hbr" Feb 16 23:02:24 crc kubenswrapper[4865]: I0216 23:02:24.669053 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/1a65950b-74f6-4519-a835-53c4a1ea0189-dns-svc\") pod \"dnsmasq-dns-666b6646f7-n4hbr\" (UID: \"1a65950b-74f6-4519-a835-53c4a1ea0189\") " pod="openstack/dnsmasq-dns-666b6646f7-n4hbr" Feb 16 23:02:24 crc kubenswrapper[4865]: I0216 23:02:24.669106 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgb6g\" (UniqueName: \"kubernetes.io/projected/1a65950b-74f6-4519-a835-53c4a1ea0189-kube-api-access-zgb6g\") pod \"dnsmasq-dns-666b6646f7-n4hbr\" (UID: \"1a65950b-74f6-4519-a835-53c4a1ea0189\") " pod="openstack/dnsmasq-dns-666b6646f7-n4hbr" Feb 16 23:02:24 crc kubenswrapper[4865]: I0216 23:02:24.670539 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a65950b-74f6-4519-a835-53c4a1ea0189-config\") pod \"dnsmasq-dns-666b6646f7-n4hbr\" (UID: \"1a65950b-74f6-4519-a835-53c4a1ea0189\") " pod="openstack/dnsmasq-dns-666b6646f7-n4hbr" Feb 16 23:02:24 crc kubenswrapper[4865]: I0216 23:02:24.670927 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1a65950b-74f6-4519-a835-53c4a1ea0189-dns-svc\") pod \"dnsmasq-dns-666b6646f7-n4hbr\" (UID: \"1a65950b-74f6-4519-a835-53c4a1ea0189\") " pod="openstack/dnsmasq-dns-666b6646f7-n4hbr" Feb 16 23:02:24 crc kubenswrapper[4865]: I0216 23:02:24.714912 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgb6g\" (UniqueName: \"kubernetes.io/projected/1a65950b-74f6-4519-a835-53c4a1ea0189-kube-api-access-zgb6g\") pod \"dnsmasq-dns-666b6646f7-n4hbr\" (UID: \"1a65950b-74f6-4519-a835-53c4a1ea0189\") " pod="openstack/dnsmasq-dns-666b6646f7-n4hbr" Feb 16 23:02:24 crc kubenswrapper[4865]: I0216 23:02:24.742563 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-sj5b5"] Feb 16 23:02:24 crc kubenswrapper[4865]: I0216 23:02:24.776508 4865 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-vkxrv"] Feb 16 23:02:24 crc kubenswrapper[4865]: I0216 23:02:24.778743 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-vkxrv" Feb 16 23:02:24 crc kubenswrapper[4865]: I0216 23:02:24.795582 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-n4hbr" Feb 16 23:02:24 crc kubenswrapper[4865]: I0216 23:02:24.796306 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-vkxrv"] Feb 16 23:02:24 crc kubenswrapper[4865]: I0216 23:02:24.873661 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ccecb94b-ea2b-453b-8376-5dff86492ee5-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-vkxrv\" (UID: \"ccecb94b-ea2b-453b-8376-5dff86492ee5\") " pod="openstack/dnsmasq-dns-57d769cc4f-vkxrv" Feb 16 23:02:24 crc kubenswrapper[4865]: I0216 23:02:24.875218 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccecb94b-ea2b-453b-8376-5dff86492ee5-config\") pod \"dnsmasq-dns-57d769cc4f-vkxrv\" (UID: \"ccecb94b-ea2b-453b-8376-5dff86492ee5\") " pod="openstack/dnsmasq-dns-57d769cc4f-vkxrv" Feb 16 23:02:24 crc kubenswrapper[4865]: I0216 23:02:24.875295 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blgkh\" (UniqueName: \"kubernetes.io/projected/ccecb94b-ea2b-453b-8376-5dff86492ee5-kube-api-access-blgkh\") pod \"dnsmasq-dns-57d769cc4f-vkxrv\" (UID: \"ccecb94b-ea2b-453b-8376-5dff86492ee5\") " pod="openstack/dnsmasq-dns-57d769cc4f-vkxrv" Feb 16 23:02:24 crc kubenswrapper[4865]: I0216 23:02:24.976807 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blgkh\" (UniqueName: 
\"kubernetes.io/projected/ccecb94b-ea2b-453b-8376-5dff86492ee5-kube-api-access-blgkh\") pod \"dnsmasq-dns-57d769cc4f-vkxrv\" (UID: \"ccecb94b-ea2b-453b-8376-5dff86492ee5\") " pod="openstack/dnsmasq-dns-57d769cc4f-vkxrv" Feb 16 23:02:24 crc kubenswrapper[4865]: I0216 23:02:24.976902 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ccecb94b-ea2b-453b-8376-5dff86492ee5-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-vkxrv\" (UID: \"ccecb94b-ea2b-453b-8376-5dff86492ee5\") " pod="openstack/dnsmasq-dns-57d769cc4f-vkxrv" Feb 16 23:02:24 crc kubenswrapper[4865]: I0216 23:02:24.976968 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccecb94b-ea2b-453b-8376-5dff86492ee5-config\") pod \"dnsmasq-dns-57d769cc4f-vkxrv\" (UID: \"ccecb94b-ea2b-453b-8376-5dff86492ee5\") " pod="openstack/dnsmasq-dns-57d769cc4f-vkxrv" Feb 16 23:02:24 crc kubenswrapper[4865]: I0216 23:02:24.977958 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccecb94b-ea2b-453b-8376-5dff86492ee5-config\") pod \"dnsmasq-dns-57d769cc4f-vkxrv\" (UID: \"ccecb94b-ea2b-453b-8376-5dff86492ee5\") " pod="openstack/dnsmasq-dns-57d769cc4f-vkxrv" Feb 16 23:02:24 crc kubenswrapper[4865]: I0216 23:02:24.984141 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ccecb94b-ea2b-453b-8376-5dff86492ee5-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-vkxrv\" (UID: \"ccecb94b-ea2b-453b-8376-5dff86492ee5\") " pod="openstack/dnsmasq-dns-57d769cc4f-vkxrv" Feb 16 23:02:25 crc kubenswrapper[4865]: I0216 23:02:25.013021 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blgkh\" (UniqueName: \"kubernetes.io/projected/ccecb94b-ea2b-453b-8376-5dff86492ee5-kube-api-access-blgkh\") pod \"dnsmasq-dns-57d769cc4f-vkxrv\" 
(UID: \"ccecb94b-ea2b-453b-8376-5dff86492ee5\") " pod="openstack/dnsmasq-dns-57d769cc4f-vkxrv" Feb 16 23:02:25 crc kubenswrapper[4865]: I0216 23:02:25.099140 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-vkxrv" Feb 16 23:02:25 crc kubenswrapper[4865]: I0216 23:02:25.356185 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-n4hbr"] Feb 16 23:02:25 crc kubenswrapper[4865]: I0216 23:02:25.443671 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-n4hbr" event={"ID":"1a65950b-74f6-4519-a835-53c4a1ea0189","Type":"ContainerStarted","Data":"161afa990393fbfb30ee310150c218fccad47855ea899a105b1536adb318c242"} Feb 16 23:02:25 crc kubenswrapper[4865]: I0216 23:02:25.610506 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 23:02:25 crc kubenswrapper[4865]: I0216 23:02:25.612057 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 16 23:02:25 crc kubenswrapper[4865]: I0216 23:02:25.617518 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 16 23:02:25 crc kubenswrapper[4865]: I0216 23:02:25.617552 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 16 23:02:25 crc kubenswrapper[4865]: I0216 23:02:25.617635 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 16 23:02:25 crc kubenswrapper[4865]: I0216 23:02:25.617685 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 16 23:02:25 crc kubenswrapper[4865]: I0216 23:02:25.617522 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 16 23:02:25 crc kubenswrapper[4865]: I0216 23:02:25.617903 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-pdtkv" Feb 16 23:02:25 crc kubenswrapper[4865]: I0216 23:02:25.618110 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 16 23:02:25 crc kubenswrapper[4865]: I0216 23:02:25.621118 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 23:02:25 crc kubenswrapper[4865]: I0216 23:02:25.649228 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-vkxrv"] Feb 16 23:02:25 crc kubenswrapper[4865]: W0216 23:02:25.666833 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podccecb94b_ea2b_453b_8376_5dff86492ee5.slice/crio-7682ebeca7bbd323e1c5509a18ecbfdc81babb5e5439d19868d8c59d85b8b2d3 WatchSource:0}: Error finding container 7682ebeca7bbd323e1c5509a18ecbfdc81babb5e5439d19868d8c59d85b8b2d3: Status 404 returned error 
can't find the container with id 7682ebeca7bbd323e1c5509a18ecbfdc81babb5e5439d19868d8c59d85b8b2d3 Feb 16 23:02:25 crc kubenswrapper[4865]: I0216 23:02:25.692052 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/17869fd2-4bd3-490c-be91-857d7cab1e73-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"17869fd2-4bd3-490c-be91-857d7cab1e73\") " pod="openstack/rabbitmq-server-0" Feb 16 23:02:25 crc kubenswrapper[4865]: I0216 23:02:25.692111 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/17869fd2-4bd3-490c-be91-857d7cab1e73-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"17869fd2-4bd3-490c-be91-857d7cab1e73\") " pod="openstack/rabbitmq-server-0" Feb 16 23:02:25 crc kubenswrapper[4865]: I0216 23:02:25.692142 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"17869fd2-4bd3-490c-be91-857d7cab1e73\") " pod="openstack/rabbitmq-server-0" Feb 16 23:02:25 crc kubenswrapper[4865]: I0216 23:02:25.692194 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/17869fd2-4bd3-490c-be91-857d7cab1e73-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"17869fd2-4bd3-490c-be91-857d7cab1e73\") " pod="openstack/rabbitmq-server-0" Feb 16 23:02:25 crc kubenswrapper[4865]: I0216 23:02:25.692221 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/17869fd2-4bd3-490c-be91-857d7cab1e73-config-data\") pod \"rabbitmq-server-0\" (UID: \"17869fd2-4bd3-490c-be91-857d7cab1e73\") " 
pod="openstack/rabbitmq-server-0" Feb 16 23:02:25 crc kubenswrapper[4865]: I0216 23:02:25.692313 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/17869fd2-4bd3-490c-be91-857d7cab1e73-pod-info\") pod \"rabbitmq-server-0\" (UID: \"17869fd2-4bd3-490c-be91-857d7cab1e73\") " pod="openstack/rabbitmq-server-0" Feb 16 23:02:25 crc kubenswrapper[4865]: I0216 23:02:25.692342 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/17869fd2-4bd3-490c-be91-857d7cab1e73-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"17869fd2-4bd3-490c-be91-857d7cab1e73\") " pod="openstack/rabbitmq-server-0" Feb 16 23:02:25 crc kubenswrapper[4865]: I0216 23:02:25.692365 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/17869fd2-4bd3-490c-be91-857d7cab1e73-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"17869fd2-4bd3-490c-be91-857d7cab1e73\") " pod="openstack/rabbitmq-server-0" Feb 16 23:02:25 crc kubenswrapper[4865]: I0216 23:02:25.692392 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/17869fd2-4bd3-490c-be91-857d7cab1e73-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"17869fd2-4bd3-490c-be91-857d7cab1e73\") " pod="openstack/rabbitmq-server-0" Feb 16 23:02:25 crc kubenswrapper[4865]: I0216 23:02:25.692420 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/17869fd2-4bd3-490c-be91-857d7cab1e73-server-conf\") pod \"rabbitmq-server-0\" (UID: \"17869fd2-4bd3-490c-be91-857d7cab1e73\") " pod="openstack/rabbitmq-server-0" Feb 16 23:02:25 crc kubenswrapper[4865]: I0216 
23:02:25.692457 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nk2qj\" (UniqueName: \"kubernetes.io/projected/17869fd2-4bd3-490c-be91-857d7cab1e73-kube-api-access-nk2qj\") pod \"rabbitmq-server-0\" (UID: \"17869fd2-4bd3-490c-be91-857d7cab1e73\") " pod="openstack/rabbitmq-server-0" Feb 16 23:02:25 crc kubenswrapper[4865]: I0216 23:02:25.794002 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/17869fd2-4bd3-490c-be91-857d7cab1e73-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"17869fd2-4bd3-490c-be91-857d7cab1e73\") " pod="openstack/rabbitmq-server-0" Feb 16 23:02:25 crc kubenswrapper[4865]: I0216 23:02:25.794078 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/17869fd2-4bd3-490c-be91-857d7cab1e73-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"17869fd2-4bd3-490c-be91-857d7cab1e73\") " pod="openstack/rabbitmq-server-0" Feb 16 23:02:25 crc kubenswrapper[4865]: I0216 23:02:25.794108 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/17869fd2-4bd3-490c-be91-857d7cab1e73-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"17869fd2-4bd3-490c-be91-857d7cab1e73\") " pod="openstack/rabbitmq-server-0" Feb 16 23:02:25 crc kubenswrapper[4865]: I0216 23:02:25.794130 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/17869fd2-4bd3-490c-be91-857d7cab1e73-server-conf\") pod \"rabbitmq-server-0\" (UID: \"17869fd2-4bd3-490c-be91-857d7cab1e73\") " pod="openstack/rabbitmq-server-0" Feb 16 23:02:25 crc kubenswrapper[4865]: I0216 23:02:25.794188 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nk2qj\" (UniqueName: 
\"kubernetes.io/projected/17869fd2-4bd3-490c-be91-857d7cab1e73-kube-api-access-nk2qj\") pod \"rabbitmq-server-0\" (UID: \"17869fd2-4bd3-490c-be91-857d7cab1e73\") " pod="openstack/rabbitmq-server-0" Feb 16 23:02:25 crc kubenswrapper[4865]: I0216 23:02:25.794268 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/17869fd2-4bd3-490c-be91-857d7cab1e73-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"17869fd2-4bd3-490c-be91-857d7cab1e73\") " pod="openstack/rabbitmq-server-0" Feb 16 23:02:25 crc kubenswrapper[4865]: I0216 23:02:25.794314 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/17869fd2-4bd3-490c-be91-857d7cab1e73-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"17869fd2-4bd3-490c-be91-857d7cab1e73\") " pod="openstack/rabbitmq-server-0" Feb 16 23:02:25 crc kubenswrapper[4865]: I0216 23:02:25.794337 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"17869fd2-4bd3-490c-be91-857d7cab1e73\") " pod="openstack/rabbitmq-server-0" Feb 16 23:02:25 crc kubenswrapper[4865]: I0216 23:02:25.794358 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/17869fd2-4bd3-490c-be91-857d7cab1e73-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"17869fd2-4bd3-490c-be91-857d7cab1e73\") " pod="openstack/rabbitmq-server-0" Feb 16 23:02:25 crc kubenswrapper[4865]: I0216 23:02:25.794378 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/17869fd2-4bd3-490c-be91-857d7cab1e73-config-data\") pod \"rabbitmq-server-0\" (UID: \"17869fd2-4bd3-490c-be91-857d7cab1e73\") " 
pod="openstack/rabbitmq-server-0" Feb 16 23:02:25 crc kubenswrapper[4865]: I0216 23:02:25.794401 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/17869fd2-4bd3-490c-be91-857d7cab1e73-pod-info\") pod \"rabbitmq-server-0\" (UID: \"17869fd2-4bd3-490c-be91-857d7cab1e73\") " pod="openstack/rabbitmq-server-0" Feb 16 23:02:25 crc kubenswrapper[4865]: I0216 23:02:25.795255 4865 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"17869fd2-4bd3-490c-be91-857d7cab1e73\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/rabbitmq-server-0" Feb 16 23:02:25 crc kubenswrapper[4865]: I0216 23:02:25.795570 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/17869fd2-4bd3-490c-be91-857d7cab1e73-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"17869fd2-4bd3-490c-be91-857d7cab1e73\") " pod="openstack/rabbitmq-server-0" Feb 16 23:02:25 crc kubenswrapper[4865]: I0216 23:02:25.796009 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/17869fd2-4bd3-490c-be91-857d7cab1e73-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"17869fd2-4bd3-490c-be91-857d7cab1e73\") " pod="openstack/rabbitmq-server-0" Feb 16 23:02:25 crc kubenswrapper[4865]: I0216 23:02:25.796346 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/17869fd2-4bd3-490c-be91-857d7cab1e73-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"17869fd2-4bd3-490c-be91-857d7cab1e73\") " pod="openstack/rabbitmq-server-0" Feb 16 23:02:25 crc kubenswrapper[4865]: I0216 23:02:25.796790 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/17869fd2-4bd3-490c-be91-857d7cab1e73-server-conf\") pod \"rabbitmq-server-0\" (UID: \"17869fd2-4bd3-490c-be91-857d7cab1e73\") " pod="openstack/rabbitmq-server-0" Feb 16 23:02:25 crc kubenswrapper[4865]: I0216 23:02:25.802561 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/17869fd2-4bd3-490c-be91-857d7cab1e73-config-data\") pod \"rabbitmq-server-0\" (UID: \"17869fd2-4bd3-490c-be91-857d7cab1e73\") " pod="openstack/rabbitmq-server-0" Feb 16 23:02:25 crc kubenswrapper[4865]: I0216 23:02:25.803415 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/17869fd2-4bd3-490c-be91-857d7cab1e73-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"17869fd2-4bd3-490c-be91-857d7cab1e73\") " pod="openstack/rabbitmq-server-0" Feb 16 23:02:25 crc kubenswrapper[4865]: I0216 23:02:25.804602 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/17869fd2-4bd3-490c-be91-857d7cab1e73-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"17869fd2-4bd3-490c-be91-857d7cab1e73\") " pod="openstack/rabbitmq-server-0" Feb 16 23:02:25 crc kubenswrapper[4865]: I0216 23:02:25.804713 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/17869fd2-4bd3-490c-be91-857d7cab1e73-pod-info\") pod \"rabbitmq-server-0\" (UID: \"17869fd2-4bd3-490c-be91-857d7cab1e73\") " pod="openstack/rabbitmq-server-0" Feb 16 23:02:25 crc kubenswrapper[4865]: I0216 23:02:25.806524 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/17869fd2-4bd3-490c-be91-857d7cab1e73-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"17869fd2-4bd3-490c-be91-857d7cab1e73\") " 
pod="openstack/rabbitmq-server-0" Feb 16 23:02:25 crc kubenswrapper[4865]: I0216 23:02:25.818744 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"17869fd2-4bd3-490c-be91-857d7cab1e73\") " pod="openstack/rabbitmq-server-0" Feb 16 23:02:25 crc kubenswrapper[4865]: I0216 23:02:25.823395 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nk2qj\" (UniqueName: \"kubernetes.io/projected/17869fd2-4bd3-490c-be91-857d7cab1e73-kube-api-access-nk2qj\") pod \"rabbitmq-server-0\" (UID: \"17869fd2-4bd3-490c-be91-857d7cab1e73\") " pod="openstack/rabbitmq-server-0" Feb 16 23:02:25 crc kubenswrapper[4865]: I0216 23:02:25.939366 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 23:02:25 crc kubenswrapper[4865]: I0216 23:02:25.940765 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 16 23:02:25 crc kubenswrapper[4865]: I0216 23:02:25.950843 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 16 23:02:25 crc kubenswrapper[4865]: I0216 23:02:25.963082 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 16 23:02:25 crc kubenswrapper[4865]: I0216 23:02:25.963616 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 16 23:02:25 crc kubenswrapper[4865]: I0216 23:02:25.963860 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 16 23:02:25 crc kubenswrapper[4865]: I0216 23:02:25.963972 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 16 23:02:25 crc kubenswrapper[4865]: I0216 23:02:25.966335 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 16 23:02:25 crc kubenswrapper[4865]: I0216 23:02:25.966513 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 16 23:02:25 crc kubenswrapper[4865]: I0216 23:02:25.972636 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-fhvgp" Feb 16 23:02:26 crc kubenswrapper[4865]: I0216 23:02:25.999742 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9f530b91-ceff-467a-a146-60716412bbeb-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f530b91-ceff-467a-a146-60716412bbeb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 23:02:26 crc kubenswrapper[4865]: I0216 23:02:25.999803 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9f530b91-ceff-467a-a146-60716412bbeb-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"9f530b91-ceff-467a-a146-60716412bbeb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 23:02:26 crc kubenswrapper[4865]: I0216 23:02:25.999833 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9f530b91-ceff-467a-a146-60716412bbeb-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f530b91-ceff-467a-a146-60716412bbeb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 23:02:26 crc kubenswrapper[4865]: I0216 23:02:25.999867 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9f530b91-ceff-467a-a146-60716412bbeb-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f530b91-ceff-467a-a146-60716412bbeb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 23:02:26 crc kubenswrapper[4865]: I0216 23:02:25.999898 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9f530b91-ceff-467a-a146-60716412bbeb-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f530b91-ceff-467a-a146-60716412bbeb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 23:02:26 crc kubenswrapper[4865]: I0216 23:02:25.999919 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9f530b91-ceff-467a-a146-60716412bbeb-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f530b91-ceff-467a-a146-60716412bbeb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 23:02:26 crc kubenswrapper[4865]: I0216 23:02:25.999937 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9f530b91-ceff-467a-a146-60716412bbeb-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"9f530b91-ceff-467a-a146-60716412bbeb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 23:02:26 crc kubenswrapper[4865]: I0216 23:02:25.999962 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f530b91-ceff-467a-a146-60716412bbeb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 23:02:26 crc kubenswrapper[4865]: I0216 23:02:25.999994 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnx75\" (UniqueName: \"kubernetes.io/projected/9f530b91-ceff-467a-a146-60716412bbeb-kube-api-access-qnx75\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f530b91-ceff-467a-a146-60716412bbeb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 23:02:26 crc kubenswrapper[4865]: I0216 23:02:26.000032 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9f530b91-ceff-467a-a146-60716412bbeb-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f530b91-ceff-467a-a146-60716412bbeb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 23:02:26 crc kubenswrapper[4865]: I0216 23:02:26.000055 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9f530b91-ceff-467a-a146-60716412bbeb-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f530b91-ceff-467a-a146-60716412bbeb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 23:02:26 crc kubenswrapper[4865]: I0216 23:02:26.004881 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 23:02:26 crc kubenswrapper[4865]: I0216 23:02:26.103546 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/9f530b91-ceff-467a-a146-60716412bbeb-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f530b91-ceff-467a-a146-60716412bbeb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 23:02:26 crc kubenswrapper[4865]: I0216 23:02:26.103653 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9f530b91-ceff-467a-a146-60716412bbeb-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f530b91-ceff-467a-a146-60716412bbeb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 23:02:26 crc kubenswrapper[4865]: I0216 23:02:26.103694 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9f530b91-ceff-467a-a146-60716412bbeb-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f530b91-ceff-467a-a146-60716412bbeb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 23:02:26 crc kubenswrapper[4865]: I0216 23:02:26.103722 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9f530b91-ceff-467a-a146-60716412bbeb-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f530b91-ceff-467a-a146-60716412bbeb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 23:02:26 crc kubenswrapper[4865]: I0216 23:02:26.103758 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f530b91-ceff-467a-a146-60716412bbeb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 23:02:26 crc kubenswrapper[4865]: I0216 23:02:26.103796 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9f530b91-ceff-467a-a146-60716412bbeb-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"9f530b91-ceff-467a-a146-60716412bbeb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 23:02:26 crc kubenswrapper[4865]: I0216 23:02:26.103825 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnx75\" (UniqueName: \"kubernetes.io/projected/9f530b91-ceff-467a-a146-60716412bbeb-kube-api-access-qnx75\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f530b91-ceff-467a-a146-60716412bbeb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 23:02:26 crc kubenswrapper[4865]: I0216 23:02:26.103859 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9f530b91-ceff-467a-a146-60716412bbeb-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f530b91-ceff-467a-a146-60716412bbeb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 23:02:26 crc kubenswrapper[4865]: I0216 23:02:26.103895 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9f530b91-ceff-467a-a146-60716412bbeb-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f530b91-ceff-467a-a146-60716412bbeb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 23:02:26 crc kubenswrapper[4865]: I0216 23:02:26.103931 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9f530b91-ceff-467a-a146-60716412bbeb-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f530b91-ceff-467a-a146-60716412bbeb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 23:02:26 crc kubenswrapper[4865]: I0216 23:02:26.103975 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9f530b91-ceff-467a-a146-60716412bbeb-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f530b91-ceff-467a-a146-60716412bbeb\") " 
pod="openstack/rabbitmq-cell1-server-0"
Feb 16 23:02:26 crc kubenswrapper[4865]: I0216 23:02:26.104787 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9f530b91-ceff-467a-a146-60716412bbeb-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f530b91-ceff-467a-a146-60716412bbeb\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 23:02:26 crc kubenswrapper[4865]: I0216 23:02:26.105081 4865 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f530b91-ceff-467a-a146-60716412bbeb\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-cell1-server-0"
Feb 16 23:02:26 crc kubenswrapper[4865]: I0216 23:02:26.108529 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9f530b91-ceff-467a-a146-60716412bbeb-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f530b91-ceff-467a-a146-60716412bbeb\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 23:02:26 crc kubenswrapper[4865]: I0216 23:02:26.111126 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9f530b91-ceff-467a-a146-60716412bbeb-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f530b91-ceff-467a-a146-60716412bbeb\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 23:02:26 crc kubenswrapper[4865]: I0216 23:02:26.111526 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9f530b91-ceff-467a-a146-60716412bbeb-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f530b91-ceff-467a-a146-60716412bbeb\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 23:02:26 crc kubenswrapper[4865]: I0216 23:02:26.119685 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9f530b91-ceff-467a-a146-60716412bbeb-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f530b91-ceff-467a-a146-60716412bbeb\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 23:02:26 crc kubenswrapper[4865]: I0216 23:02:26.127820 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9f530b91-ceff-467a-a146-60716412bbeb-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f530b91-ceff-467a-a146-60716412bbeb\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 23:02:26 crc kubenswrapper[4865]: I0216 23:02:26.129857 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9f530b91-ceff-467a-a146-60716412bbeb-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f530b91-ceff-467a-a146-60716412bbeb\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 23:02:26 crc kubenswrapper[4865]: I0216 23:02:26.130662 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9f530b91-ceff-467a-a146-60716412bbeb-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f530b91-ceff-467a-a146-60716412bbeb\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 23:02:26 crc kubenswrapper[4865]: I0216 23:02:26.131776 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9f530b91-ceff-467a-a146-60716412bbeb-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f530b91-ceff-467a-a146-60716412bbeb\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 23:02:26 crc kubenswrapper[4865]: I0216 23:02:26.140917 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnx75\" (UniqueName: \"kubernetes.io/projected/9f530b91-ceff-467a-a146-60716412bbeb-kube-api-access-qnx75\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f530b91-ceff-467a-a146-60716412bbeb\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 23:02:26 crc kubenswrapper[4865]: I0216 23:02:26.170799 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f530b91-ceff-467a-a146-60716412bbeb\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 23:02:26 crc kubenswrapper[4865]: I0216 23:02:26.276815 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 16 23:02:26 crc kubenswrapper[4865]: I0216 23:02:26.463606 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-vkxrv" event={"ID":"ccecb94b-ea2b-453b-8376-5dff86492ee5","Type":"ContainerStarted","Data":"7682ebeca7bbd323e1c5509a18ecbfdc81babb5e5439d19868d8c59d85b8b2d3"}
Feb 16 23:02:26 crc kubenswrapper[4865]: I0216 23:02:26.641810 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 16 23:02:26 crc kubenswrapper[4865]: I0216 23:02:26.981027 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 16 23:02:27 crc kubenswrapper[4865]: I0216 23:02:27.168487 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"]
Feb 16 23:02:27 crc kubenswrapper[4865]: I0216 23:02:27.176013 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Feb 16 23:02:27 crc kubenswrapper[4865]: I0216 23:02:27.185147 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Feb 16 23:02:27 crc kubenswrapper[4865]: I0216 23:02:27.185842 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Feb 16 23:02:27 crc kubenswrapper[4865]: I0216 23:02:27.186503 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-slc95"
Feb 16 23:02:27 crc kubenswrapper[4865]: I0216 23:02:27.219220 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Feb 16 23:02:27 crc kubenswrapper[4865]: I0216 23:02:27.220231 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle"
Feb 16 23:02:27 crc kubenswrapper[4865]: I0216 23:02:27.229214 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmh96\" (UniqueName: \"kubernetes.io/projected/0ba121bd-0fd3-46b5-b719-f113e7afc99c-kube-api-access-mmh96\") pod \"openstack-galera-0\" (UID: \"0ba121bd-0fd3-46b5-b719-f113e7afc99c\") " pod="openstack/openstack-galera-0"
Feb 16 23:02:27 crc kubenswrapper[4865]: I0216 23:02:27.229619 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"0ba121bd-0fd3-46b5-b719-f113e7afc99c\") " pod="openstack/openstack-galera-0"
Feb 16 23:02:27 crc kubenswrapper[4865]: I0216 23:02:27.229737 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ba121bd-0fd3-46b5-b719-f113e7afc99c-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"0ba121bd-0fd3-46b5-b719-f113e7afc99c\") " pod="openstack/openstack-galera-0"
Feb 16 23:02:27 crc kubenswrapper[4865]: I0216 23:02:27.229833 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ba121bd-0fd3-46b5-b719-f113e7afc99c-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"0ba121bd-0fd3-46b5-b719-f113e7afc99c\") " pod="openstack/openstack-galera-0"
Feb 16 23:02:27 crc kubenswrapper[4865]: I0216 23:02:27.230351 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0ba121bd-0fd3-46b5-b719-f113e7afc99c-config-data-generated\") pod \"openstack-galera-0\" (UID: \"0ba121bd-0fd3-46b5-b719-f113e7afc99c\") " pod="openstack/openstack-galera-0"
Feb 16 23:02:27 crc kubenswrapper[4865]: I0216 23:02:27.230461 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0ba121bd-0fd3-46b5-b719-f113e7afc99c-config-data-default\") pod \"openstack-galera-0\" (UID: \"0ba121bd-0fd3-46b5-b719-f113e7afc99c\") " pod="openstack/openstack-galera-0"
Feb 16 23:02:27 crc kubenswrapper[4865]: I0216 23:02:27.231185 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ba121bd-0fd3-46b5-b719-f113e7afc99c-operator-scripts\") pod \"openstack-galera-0\" (UID: \"0ba121bd-0fd3-46b5-b719-f113e7afc99c\") " pod="openstack/openstack-galera-0"
Feb 16 23:02:27 crc kubenswrapper[4865]: I0216 23:02:27.231291 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0ba121bd-0fd3-46b5-b719-f113e7afc99c-kolla-config\") pod \"openstack-galera-0\" (UID: \"0ba121bd-0fd3-46b5-b719-f113e7afc99c\") " pod="openstack/openstack-galera-0"
Feb 16 23:02:27 crc kubenswrapper[4865]: I0216 23:02:27.241851 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Feb 16 23:02:27 crc kubenswrapper[4865]: I0216 23:02:27.332656 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ba121bd-0fd3-46b5-b719-f113e7afc99c-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"0ba121bd-0fd3-46b5-b719-f113e7afc99c\") " pod="openstack/openstack-galera-0"
Feb 16 23:02:27 crc kubenswrapper[4865]: I0216 23:02:27.332748 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ba121bd-0fd3-46b5-b719-f113e7afc99c-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"0ba121bd-0fd3-46b5-b719-f113e7afc99c\") " pod="openstack/openstack-galera-0"
Feb 16 23:02:27 crc kubenswrapper[4865]: I0216 23:02:27.332823 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0ba121bd-0fd3-46b5-b719-f113e7afc99c-config-data-generated\") pod \"openstack-galera-0\" (UID: \"0ba121bd-0fd3-46b5-b719-f113e7afc99c\") " pod="openstack/openstack-galera-0"
Feb 16 23:02:27 crc kubenswrapper[4865]: I0216 23:02:27.332863 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0ba121bd-0fd3-46b5-b719-f113e7afc99c-config-data-default\") pod \"openstack-galera-0\" (UID: \"0ba121bd-0fd3-46b5-b719-f113e7afc99c\") " pod="openstack/openstack-galera-0"
Feb 16 23:02:27 crc kubenswrapper[4865]: I0216 23:02:27.332895 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ba121bd-0fd3-46b5-b719-f113e7afc99c-operator-scripts\") pod \"openstack-galera-0\" (UID: \"0ba121bd-0fd3-46b5-b719-f113e7afc99c\") " pod="openstack/openstack-galera-0"
Feb 16 23:02:27 crc kubenswrapper[4865]: I0216 23:02:27.332923 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0ba121bd-0fd3-46b5-b719-f113e7afc99c-kolla-config\") pod \"openstack-galera-0\" (UID: \"0ba121bd-0fd3-46b5-b719-f113e7afc99c\") " pod="openstack/openstack-galera-0"
Feb 16 23:02:27 crc kubenswrapper[4865]: I0216 23:02:27.332979 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmh96\" (UniqueName: \"kubernetes.io/projected/0ba121bd-0fd3-46b5-b719-f113e7afc99c-kube-api-access-mmh96\") pod \"openstack-galera-0\" (UID: \"0ba121bd-0fd3-46b5-b719-f113e7afc99c\") " pod="openstack/openstack-galera-0"
Feb 16 23:02:27 crc kubenswrapper[4865]: I0216 23:02:27.333019 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"0ba121bd-0fd3-46b5-b719-f113e7afc99c\") " pod="openstack/openstack-galera-0"
Feb 16 23:02:27 crc kubenswrapper[4865]: I0216 23:02:27.334317 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0ba121bd-0fd3-46b5-b719-f113e7afc99c-config-data-generated\") pod \"openstack-galera-0\" (UID: \"0ba121bd-0fd3-46b5-b719-f113e7afc99c\") " pod="openstack/openstack-galera-0"
Feb 16 23:02:27 crc kubenswrapper[4865]: I0216 23:02:27.334418 4865 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"0ba121bd-0fd3-46b5-b719-f113e7afc99c\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/openstack-galera-0"
Feb 16 23:02:27 crc kubenswrapper[4865]: I0216 23:02:27.335058 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0ba121bd-0fd3-46b5-b719-f113e7afc99c-kolla-config\") pod \"openstack-galera-0\" (UID: \"0ba121bd-0fd3-46b5-b719-f113e7afc99c\") " pod="openstack/openstack-galera-0"
Feb 16 23:02:27 crc kubenswrapper[4865]: I0216 23:02:27.338227 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0ba121bd-0fd3-46b5-b719-f113e7afc99c-config-data-default\") pod \"openstack-galera-0\" (UID: \"0ba121bd-0fd3-46b5-b719-f113e7afc99c\") " pod="openstack/openstack-galera-0"
Feb 16 23:02:27 crc kubenswrapper[4865]: I0216 23:02:27.347946 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ba121bd-0fd3-46b5-b719-f113e7afc99c-operator-scripts\") pod \"openstack-galera-0\" (UID: \"0ba121bd-0fd3-46b5-b719-f113e7afc99c\") " pod="openstack/openstack-galera-0"
Feb 16 23:02:27 crc kubenswrapper[4865]: I0216 23:02:27.353230 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ba121bd-0fd3-46b5-b719-f113e7afc99c-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"0ba121bd-0fd3-46b5-b719-f113e7afc99c\") " pod="openstack/openstack-galera-0"
Feb 16 23:02:27 crc kubenswrapper[4865]: I0216 23:02:27.359579 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ba121bd-0fd3-46b5-b719-f113e7afc99c-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"0ba121bd-0fd3-46b5-b719-f113e7afc99c\") " pod="openstack/openstack-galera-0"
Feb 16 23:02:27 crc kubenswrapper[4865]: I0216 23:02:27.375840 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmh96\" (UniqueName: \"kubernetes.io/projected/0ba121bd-0fd3-46b5-b719-f113e7afc99c-kube-api-access-mmh96\") pod \"openstack-galera-0\" (UID: \"0ba121bd-0fd3-46b5-b719-f113e7afc99c\") " pod="openstack/openstack-galera-0"
Feb 16 23:02:27 crc kubenswrapper[4865]: I0216 23:02:27.418741 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"0ba121bd-0fd3-46b5-b719-f113e7afc99c\") " pod="openstack/openstack-galera-0"
Feb 16 23:02:27 crc kubenswrapper[4865]: I0216 23:02:27.515962 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"17869fd2-4bd3-490c-be91-857d7cab1e73","Type":"ContainerStarted","Data":"bc3bb52d26458b911c9b9a25f144d0360822975086eba5b632e71cc8177cb0a0"}
Feb 16 23:02:27 crc kubenswrapper[4865]: I0216 23:02:27.521886 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9f530b91-ceff-467a-a146-60716412bbeb","Type":"ContainerStarted","Data":"5a0449eb74c6cb9422845d2024cb1f5acb588140c0ddaed9f254d0312183c927"}
Feb 16 23:02:27 crc kubenswrapper[4865]: I0216 23:02:27.524302 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Feb 16 23:02:28 crc kubenswrapper[4865]: I0216 23:02:28.226320 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Feb 16 23:02:28 crc kubenswrapper[4865]: W0216 23:02:28.236715 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ba121bd_0fd3_46b5_b719_f113e7afc99c.slice/crio-8d07eb223b0f6b0a4805edeccbaf7caf30f723e093e60078ab17f6b60e7a3335 WatchSource:0}: Error finding container 8d07eb223b0f6b0a4805edeccbaf7caf30f723e093e60078ab17f6b60e7a3335: Status 404 returned error can't find the container with id 8d07eb223b0f6b0a4805edeccbaf7caf30f723e093e60078ab17f6b60e7a3335
Feb 16 23:02:28 crc kubenswrapper[4865]: I0216 23:02:28.378183 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"]
Feb 16 23:02:28 crc kubenswrapper[4865]: I0216 23:02:28.379979 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Feb 16 23:02:28 crc kubenswrapper[4865]: I0216 23:02:28.387074 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts"
Feb 16 23:02:28 crc kubenswrapper[4865]: I0216 23:02:28.387233 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc"
Feb 16 23:02:28 crc kubenswrapper[4865]: I0216 23:02:28.387790 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-fspdh"
Feb 16 23:02:28 crc kubenswrapper[4865]: I0216 23:02:28.388013 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data"
Feb 16 23:02:28 crc kubenswrapper[4865]: I0216 23:02:28.388297 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Feb 16 23:02:28 crc kubenswrapper[4865]: I0216 23:02:28.495957 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"]
Feb 16 23:02:28 crc kubenswrapper[4865]: I0216 23:02:28.497360 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Feb 16 23:02:28 crc kubenswrapper[4865]: I0216 23:02:28.500612 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-tp9gr"
Feb 16 23:02:28 crc kubenswrapper[4865]: I0216 23:02:28.500687 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc"
Feb 16 23:02:28 crc kubenswrapper[4865]: I0216 23:02:28.503205 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data"
Feb 16 23:02:28 crc kubenswrapper[4865]: I0216 23:02:28.520127 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Feb 16 23:02:28 crc kubenswrapper[4865]: I0216 23:02:28.543788 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"0ba121bd-0fd3-46b5-b719-f113e7afc99c","Type":"ContainerStarted","Data":"8d07eb223b0f6b0a4805edeccbaf7caf30f723e093e60078ab17f6b60e7a3335"}
Feb 16 23:02:28 crc kubenswrapper[4865]: I0216 23:02:28.582574 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-cell1-galera-0\" (UID: \"59ff0541-9e7b-4f6e-8dbb-af16f656abeb\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 23:02:28 crc kubenswrapper[4865]: I0216 23:02:28.582657 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59ff0541-9e7b-4f6e-8dbb-af16f656abeb-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"59ff0541-9e7b-4f6e-8dbb-af16f656abeb\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 23:02:28 crc kubenswrapper[4865]: I0216 23:02:28.582687 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/59ff0541-9e7b-4f6e-8dbb-af16f656abeb-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"59ff0541-9e7b-4f6e-8dbb-af16f656abeb\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 23:02:28 crc kubenswrapper[4865]: I0216 23:02:28.582753 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/59ff0541-9e7b-4f6e-8dbb-af16f656abeb-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"59ff0541-9e7b-4f6e-8dbb-af16f656abeb\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 23:02:28 crc kubenswrapper[4865]: I0216 23:02:28.582843 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/59ff0541-9e7b-4f6e-8dbb-af16f656abeb-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"59ff0541-9e7b-4f6e-8dbb-af16f656abeb\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 23:02:28 crc kubenswrapper[4865]: I0216 23:02:28.583710 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/59ff0541-9e7b-4f6e-8dbb-af16f656abeb-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"59ff0541-9e7b-4f6e-8dbb-af16f656abeb\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 23:02:28 crc kubenswrapper[4865]: I0216 23:02:28.583799 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59ff0541-9e7b-4f6e-8dbb-af16f656abeb-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"59ff0541-9e7b-4f6e-8dbb-af16f656abeb\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 23:02:28 crc kubenswrapper[4865]: I0216 23:02:28.584061 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t98sm\" (UniqueName: \"kubernetes.io/projected/59ff0541-9e7b-4f6e-8dbb-af16f656abeb-kube-api-access-t98sm\") pod \"openstack-cell1-galera-0\" (UID: \"59ff0541-9e7b-4f6e-8dbb-af16f656abeb\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 23:02:28 crc kubenswrapper[4865]: I0216 23:02:28.685811 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59ff0541-9e7b-4f6e-8dbb-af16f656abeb-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"59ff0541-9e7b-4f6e-8dbb-af16f656abeb\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 23:02:28 crc kubenswrapper[4865]: I0216 23:02:28.685873 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5568f4b1-9ca1-4de9-9355-ffc7b0281375-config-data\") pod \"memcached-0\" (UID: \"5568f4b1-9ca1-4de9-9355-ffc7b0281375\") " pod="openstack/memcached-0"
Feb 16 23:02:28 crc kubenswrapper[4865]: I0216 23:02:28.685895 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/5568f4b1-9ca1-4de9-9355-ffc7b0281375-memcached-tls-certs\") pod \"memcached-0\" (UID: \"5568f4b1-9ca1-4de9-9355-ffc7b0281375\") " pod="openstack/memcached-0"
Feb 16 23:02:28 crc kubenswrapper[4865]: I0216 23:02:28.685926 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5568f4b1-9ca1-4de9-9355-ffc7b0281375-kolla-config\") pod \"memcached-0\" (UID: \"5568f4b1-9ca1-4de9-9355-ffc7b0281375\") " pod="openstack/memcached-0"
Feb 16 23:02:28 crc kubenswrapper[4865]: I0216 23:02:28.685952 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5568f4b1-9ca1-4de9-9355-ffc7b0281375-combined-ca-bundle\") pod \"memcached-0\" (UID: \"5568f4b1-9ca1-4de9-9355-ffc7b0281375\") " pod="openstack/memcached-0"
Feb 16 23:02:28 crc kubenswrapper[4865]: I0216 23:02:28.685986 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t98sm\" (UniqueName: \"kubernetes.io/projected/59ff0541-9e7b-4f6e-8dbb-af16f656abeb-kube-api-access-t98sm\") pod \"openstack-cell1-galera-0\" (UID: \"59ff0541-9e7b-4f6e-8dbb-af16f656abeb\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 23:02:28 crc kubenswrapper[4865]: I0216 23:02:28.686012 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-cell1-galera-0\" (UID: \"59ff0541-9e7b-4f6e-8dbb-af16f656abeb\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 23:02:28 crc kubenswrapper[4865]: I0216 23:02:28.686036 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59ff0541-9e7b-4f6e-8dbb-af16f656abeb-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"59ff0541-9e7b-4f6e-8dbb-af16f656abeb\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 23:02:28 crc kubenswrapper[4865]: I0216 23:02:28.686063 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/59ff0541-9e7b-4f6e-8dbb-af16f656abeb-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"59ff0541-9e7b-4f6e-8dbb-af16f656abeb\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 23:02:28 crc kubenswrapper[4865]: I0216 23:02:28.686087 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/59ff0541-9e7b-4f6e-8dbb-af16f656abeb-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"59ff0541-9e7b-4f6e-8dbb-af16f656abeb\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 23:02:28 crc kubenswrapper[4865]: I0216 23:02:28.686124 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/59ff0541-9e7b-4f6e-8dbb-af16f656abeb-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"59ff0541-9e7b-4f6e-8dbb-af16f656abeb\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 23:02:28 crc kubenswrapper[4865]: I0216 23:02:28.686157 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqtck\" (UniqueName: \"kubernetes.io/projected/5568f4b1-9ca1-4de9-9355-ffc7b0281375-kube-api-access-jqtck\") pod \"memcached-0\" (UID: \"5568f4b1-9ca1-4de9-9355-ffc7b0281375\") " pod="openstack/memcached-0"
Feb 16 23:02:28 crc kubenswrapper[4865]: I0216 23:02:28.686205 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/59ff0541-9e7b-4f6e-8dbb-af16f656abeb-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"59ff0541-9e7b-4f6e-8dbb-af16f656abeb\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 23:02:28 crc kubenswrapper[4865]: I0216 23:02:28.686981 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/59ff0541-9e7b-4f6e-8dbb-af16f656abeb-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"59ff0541-9e7b-4f6e-8dbb-af16f656abeb\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 23:02:28 crc kubenswrapper[4865]: I0216 23:02:28.688123 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59ff0541-9e7b-4f6e-8dbb-af16f656abeb-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"59ff0541-9e7b-4f6e-8dbb-af16f656abeb\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 23:02:28 crc kubenswrapper[4865]: I0216 23:02:28.688709 4865 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-cell1-galera-0\" (UID: \"59ff0541-9e7b-4f6e-8dbb-af16f656abeb\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/openstack-cell1-galera-0"
Feb 16 23:02:28 crc kubenswrapper[4865]: I0216 23:02:28.696583 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/59ff0541-9e7b-4f6e-8dbb-af16f656abeb-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"59ff0541-9e7b-4f6e-8dbb-af16f656abeb\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 23:02:28 crc kubenswrapper[4865]: I0216 23:02:28.696967 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/59ff0541-9e7b-4f6e-8dbb-af16f656abeb-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"59ff0541-9e7b-4f6e-8dbb-af16f656abeb\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 23:02:28 crc kubenswrapper[4865]: I0216 23:02:28.706819 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59ff0541-9e7b-4f6e-8dbb-af16f656abeb-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"59ff0541-9e7b-4f6e-8dbb-af16f656abeb\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 23:02:28 crc kubenswrapper[4865]: I0216 23:02:28.715099 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/59ff0541-9e7b-4f6e-8dbb-af16f656abeb-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"59ff0541-9e7b-4f6e-8dbb-af16f656abeb\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 23:02:28 crc kubenswrapper[4865]: I0216 23:02:28.763402 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-cell1-galera-0\" (UID: \"59ff0541-9e7b-4f6e-8dbb-af16f656abeb\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 23:02:28 crc kubenswrapper[4865]: I0216 23:02:28.769830 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t98sm\" (UniqueName: \"kubernetes.io/projected/59ff0541-9e7b-4f6e-8dbb-af16f656abeb-kube-api-access-t98sm\") pod \"openstack-cell1-galera-0\" (UID: \"59ff0541-9e7b-4f6e-8dbb-af16f656abeb\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 23:02:28 crc kubenswrapper[4865]: I0216 23:02:28.791929 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqtck\" (UniqueName: \"kubernetes.io/projected/5568f4b1-9ca1-4de9-9355-ffc7b0281375-kube-api-access-jqtck\") pod \"memcached-0\" (UID: \"5568f4b1-9ca1-4de9-9355-ffc7b0281375\") " pod="openstack/memcached-0"
Feb 16 23:02:28 crc kubenswrapper[4865]: I0216 23:02:28.792423 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5568f4b1-9ca1-4de9-9355-ffc7b0281375-config-data\") pod \"memcached-0\" (UID: \"5568f4b1-9ca1-4de9-9355-ffc7b0281375\") " pod="openstack/memcached-0"
Feb 16 23:02:28 crc kubenswrapper[4865]: I0216 23:02:28.792451 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/5568f4b1-9ca1-4de9-9355-ffc7b0281375-memcached-tls-certs\") pod \"memcached-0\" (UID: \"5568f4b1-9ca1-4de9-9355-ffc7b0281375\") " pod="openstack/memcached-0"
Feb 16 23:02:28 crc kubenswrapper[4865]: I0216 23:02:28.792508 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5568f4b1-9ca1-4de9-9355-ffc7b0281375-kolla-config\") pod \"memcached-0\" (UID: \"5568f4b1-9ca1-4de9-9355-ffc7b0281375\") " pod="openstack/memcached-0"
Feb 16 23:02:28 crc kubenswrapper[4865]: I0216 23:02:28.792529 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5568f4b1-9ca1-4de9-9355-ffc7b0281375-combined-ca-bundle\") pod \"memcached-0\" (UID: \"5568f4b1-9ca1-4de9-9355-ffc7b0281375\") " pod="openstack/memcached-0"
Feb 16 23:02:28 crc kubenswrapper[4865]: I0216 23:02:28.793778 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5568f4b1-9ca1-4de9-9355-ffc7b0281375-config-data\") pod \"memcached-0\" (UID: \"5568f4b1-9ca1-4de9-9355-ffc7b0281375\") " pod="openstack/memcached-0"
Feb 16 23:02:28 crc kubenswrapper[4865]: I0216 23:02:28.799196 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5568f4b1-9ca1-4de9-9355-ffc7b0281375-kolla-config\") pod \"memcached-0\" (UID: \"5568f4b1-9ca1-4de9-9355-ffc7b0281375\") " pod="openstack/memcached-0"
Feb 16 23:02:28 crc kubenswrapper[4865]: I0216 23:02:28.808184 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5568f4b1-9ca1-4de9-9355-ffc7b0281375-combined-ca-bundle\") pod \"memcached-0\" (UID: \"5568f4b1-9ca1-4de9-9355-ffc7b0281375\") " pod="openstack/memcached-0"
Feb 16 23:02:28 crc kubenswrapper[4865]: I0216 23:02:28.808843 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/5568f4b1-9ca1-4de9-9355-ffc7b0281375-memcached-tls-certs\") pod \"memcached-0\" (UID: \"5568f4b1-9ca1-4de9-9355-ffc7b0281375\") " pod="openstack/memcached-0"
Feb 16 23:02:28 crc kubenswrapper[4865]: I0216 23:02:28.829931 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqtck\" (UniqueName: \"kubernetes.io/projected/5568f4b1-9ca1-4de9-9355-ffc7b0281375-kube-api-access-jqtck\") pod \"memcached-0\" (UID: \"5568f4b1-9ca1-4de9-9355-ffc7b0281375\") " pod="openstack/memcached-0"
Feb 16 23:02:28 crc kubenswrapper[4865]: I0216 23:02:28.843306 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Feb 16 23:02:29 crc kubenswrapper[4865]: I0216 23:02:29.019958 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Feb 16 23:02:31 crc kubenswrapper[4865]: I0216 23:02:31.079494 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 16 23:02:31 crc kubenswrapper[4865]: I0216 23:02:31.084514 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 16 23:02:31 crc kubenswrapper[4865]: I0216 23:02:31.087426 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-p9fz2"
Feb 16 23:02:31 crc kubenswrapper[4865]: I0216 23:02:31.091238 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 16 23:02:31 crc kubenswrapper[4865]: I0216 23:02:31.261618 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzslq\" (UniqueName: \"kubernetes.io/projected/abde8904-7323-4a9a-bad8-bc4993889ea7-kube-api-access-dzslq\") pod \"kube-state-metrics-0\" (UID: \"abde8904-7323-4a9a-bad8-bc4993889ea7\") " pod="openstack/kube-state-metrics-0"
Feb 16 23:02:31 crc kubenswrapper[4865]: I0216 23:02:31.379739 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzslq\" (UniqueName: \"kubernetes.io/projected/abde8904-7323-4a9a-bad8-bc4993889ea7-kube-api-access-dzslq\") pod \"kube-state-metrics-0\" (UID: \"abde8904-7323-4a9a-bad8-bc4993889ea7\") " pod="openstack/kube-state-metrics-0"
Feb 16 23:02:31 crc kubenswrapper[4865]: I0216 23:02:31.405840 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzslq\" (UniqueName: \"kubernetes.io/projected/abde8904-7323-4a9a-bad8-bc4993889ea7-kube-api-access-dzslq\") pod \"kube-state-metrics-0\" (UID: \"abde8904-7323-4a9a-bad8-bc4993889ea7\") " pod="openstack/kube-state-metrics-0"
Feb 16 23:02:31 crc kubenswrapper[4865]: I0216 23:02:31.437758 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 16 23:02:34 crc kubenswrapper[4865]: I0216 23:02:34.042046 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-plt5q"]
Feb 16 23:02:34 crc kubenswrapper[4865]: I0216 23:02:34.044482 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-plt5q"
Feb 16 23:02:34 crc kubenswrapper[4865]: I0216 23:02:34.054942 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs"
Feb 16 23:02:34 crc kubenswrapper[4865]: I0216 23:02:34.055405 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts"
Feb 16 23:02:34 crc kubenswrapper[4865]: I0216 23:02:34.055546 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-xl7kf"
Feb 16 23:02:34 crc kubenswrapper[4865]: I0216 23:02:34.062399 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-plt5q"]
Feb 16 23:02:34 crc kubenswrapper[4865]: I0216 23:02:34.087740 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-vmd6x"]
Feb 16 23:02:34 crc kubenswrapper[4865]: I0216 23:02:34.100557 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-vmd6x"]
Feb 
16 23:02:34 crc kubenswrapper[4865]: I0216 23:02:34.100760 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-vmd6x" Feb 16 23:02:34 crc kubenswrapper[4865]: I0216 23:02:34.136661 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/abf5edf2-8442-4aca-b35b-051b9f366b9a-ovn-controller-tls-certs\") pod \"ovn-controller-plt5q\" (UID: \"abf5edf2-8442-4aca-b35b-051b9f366b9a\") " pod="openstack/ovn-controller-plt5q" Feb 16 23:02:34 crc kubenswrapper[4865]: I0216 23:02:34.136740 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/395c2af4-48dc-44d3-bb74-ef2b3e024c62-var-run\") pod \"ovn-controller-ovs-vmd6x\" (UID: \"395c2af4-48dc-44d3-bb74-ef2b3e024c62\") " pod="openstack/ovn-controller-ovs-vmd6x" Feb 16 23:02:34 crc kubenswrapper[4865]: I0216 23:02:34.136780 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/abf5edf2-8442-4aca-b35b-051b9f366b9a-var-run\") pod \"ovn-controller-plt5q\" (UID: \"abf5edf2-8442-4aca-b35b-051b9f366b9a\") " pod="openstack/ovn-controller-plt5q" Feb 16 23:02:34 crc kubenswrapper[4865]: I0216 23:02:34.136805 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/395c2af4-48dc-44d3-bb74-ef2b3e024c62-var-log\") pod \"ovn-controller-ovs-vmd6x\" (UID: \"395c2af4-48dc-44d3-bb74-ef2b3e024c62\") " pod="openstack/ovn-controller-ovs-vmd6x" Feb 16 23:02:34 crc kubenswrapper[4865]: I0216 23:02:34.136849 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2gwq\" (UniqueName: 
\"kubernetes.io/projected/395c2af4-48dc-44d3-bb74-ef2b3e024c62-kube-api-access-j2gwq\") pod \"ovn-controller-ovs-vmd6x\" (UID: \"395c2af4-48dc-44d3-bb74-ef2b3e024c62\") " pod="openstack/ovn-controller-ovs-vmd6x" Feb 16 23:02:34 crc kubenswrapper[4865]: I0216 23:02:34.136869 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/395c2af4-48dc-44d3-bb74-ef2b3e024c62-scripts\") pod \"ovn-controller-ovs-vmd6x\" (UID: \"395c2af4-48dc-44d3-bb74-ef2b3e024c62\") " pod="openstack/ovn-controller-ovs-vmd6x" Feb 16 23:02:34 crc kubenswrapper[4865]: I0216 23:02:34.136899 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/abf5edf2-8442-4aca-b35b-051b9f366b9a-var-run-ovn\") pod \"ovn-controller-plt5q\" (UID: \"abf5edf2-8442-4aca-b35b-051b9f366b9a\") " pod="openstack/ovn-controller-plt5q" Feb 16 23:02:34 crc kubenswrapper[4865]: I0216 23:02:34.136931 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qc5xw\" (UniqueName: \"kubernetes.io/projected/abf5edf2-8442-4aca-b35b-051b9f366b9a-kube-api-access-qc5xw\") pod \"ovn-controller-plt5q\" (UID: \"abf5edf2-8442-4aca-b35b-051b9f366b9a\") " pod="openstack/ovn-controller-plt5q" Feb 16 23:02:34 crc kubenswrapper[4865]: I0216 23:02:34.136952 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abf5edf2-8442-4aca-b35b-051b9f366b9a-combined-ca-bundle\") pod \"ovn-controller-plt5q\" (UID: \"abf5edf2-8442-4aca-b35b-051b9f366b9a\") " pod="openstack/ovn-controller-plt5q" Feb 16 23:02:34 crc kubenswrapper[4865]: I0216 23:02:34.136978 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/abf5edf2-8442-4aca-b35b-051b9f366b9a-scripts\") pod \"ovn-controller-plt5q\" (UID: \"abf5edf2-8442-4aca-b35b-051b9f366b9a\") " pod="openstack/ovn-controller-plt5q" Feb 16 23:02:34 crc kubenswrapper[4865]: I0216 23:02:34.136994 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/abf5edf2-8442-4aca-b35b-051b9f366b9a-var-log-ovn\") pod \"ovn-controller-plt5q\" (UID: \"abf5edf2-8442-4aca-b35b-051b9f366b9a\") " pod="openstack/ovn-controller-plt5q" Feb 16 23:02:34 crc kubenswrapper[4865]: I0216 23:02:34.137011 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/395c2af4-48dc-44d3-bb74-ef2b3e024c62-etc-ovs\") pod \"ovn-controller-ovs-vmd6x\" (UID: \"395c2af4-48dc-44d3-bb74-ef2b3e024c62\") " pod="openstack/ovn-controller-ovs-vmd6x" Feb 16 23:02:34 crc kubenswrapper[4865]: I0216 23:02:34.137034 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/395c2af4-48dc-44d3-bb74-ef2b3e024c62-var-lib\") pod \"ovn-controller-ovs-vmd6x\" (UID: \"395c2af4-48dc-44d3-bb74-ef2b3e024c62\") " pod="openstack/ovn-controller-ovs-vmd6x" Feb 16 23:02:34 crc kubenswrapper[4865]: I0216 23:02:34.238531 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/abf5edf2-8442-4aca-b35b-051b9f366b9a-var-run\") pod \"ovn-controller-plt5q\" (UID: \"abf5edf2-8442-4aca-b35b-051b9f366b9a\") " pod="openstack/ovn-controller-plt5q" Feb 16 23:02:34 crc kubenswrapper[4865]: I0216 23:02:34.238584 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/395c2af4-48dc-44d3-bb74-ef2b3e024c62-var-log\") pod \"ovn-controller-ovs-vmd6x\" (UID: 
\"395c2af4-48dc-44d3-bb74-ef2b3e024c62\") " pod="openstack/ovn-controller-ovs-vmd6x" Feb 16 23:02:34 crc kubenswrapper[4865]: I0216 23:02:34.238645 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/395c2af4-48dc-44d3-bb74-ef2b3e024c62-scripts\") pod \"ovn-controller-ovs-vmd6x\" (UID: \"395c2af4-48dc-44d3-bb74-ef2b3e024c62\") " pod="openstack/ovn-controller-ovs-vmd6x" Feb 16 23:02:34 crc kubenswrapper[4865]: I0216 23:02:34.238662 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2gwq\" (UniqueName: \"kubernetes.io/projected/395c2af4-48dc-44d3-bb74-ef2b3e024c62-kube-api-access-j2gwq\") pod \"ovn-controller-ovs-vmd6x\" (UID: \"395c2af4-48dc-44d3-bb74-ef2b3e024c62\") " pod="openstack/ovn-controller-ovs-vmd6x" Feb 16 23:02:34 crc kubenswrapper[4865]: I0216 23:02:34.238709 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/abf5edf2-8442-4aca-b35b-051b9f366b9a-var-run-ovn\") pod \"ovn-controller-plt5q\" (UID: \"abf5edf2-8442-4aca-b35b-051b9f366b9a\") " pod="openstack/ovn-controller-plt5q" Feb 16 23:02:34 crc kubenswrapper[4865]: I0216 23:02:34.238755 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qc5xw\" (UniqueName: \"kubernetes.io/projected/abf5edf2-8442-4aca-b35b-051b9f366b9a-kube-api-access-qc5xw\") pod \"ovn-controller-plt5q\" (UID: \"abf5edf2-8442-4aca-b35b-051b9f366b9a\") " pod="openstack/ovn-controller-plt5q" Feb 16 23:02:34 crc kubenswrapper[4865]: I0216 23:02:34.238780 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abf5edf2-8442-4aca-b35b-051b9f366b9a-combined-ca-bundle\") pod \"ovn-controller-plt5q\" (UID: \"abf5edf2-8442-4aca-b35b-051b9f366b9a\") " pod="openstack/ovn-controller-plt5q" Feb 16 23:02:34 crc 
kubenswrapper[4865]: I0216 23:02:34.238810 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/abf5edf2-8442-4aca-b35b-051b9f366b9a-scripts\") pod \"ovn-controller-plt5q\" (UID: \"abf5edf2-8442-4aca-b35b-051b9f366b9a\") " pod="openstack/ovn-controller-plt5q" Feb 16 23:02:34 crc kubenswrapper[4865]: I0216 23:02:34.238859 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/abf5edf2-8442-4aca-b35b-051b9f366b9a-var-log-ovn\") pod \"ovn-controller-plt5q\" (UID: \"abf5edf2-8442-4aca-b35b-051b9f366b9a\") " pod="openstack/ovn-controller-plt5q" Feb 16 23:02:34 crc kubenswrapper[4865]: I0216 23:02:34.238878 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/395c2af4-48dc-44d3-bb74-ef2b3e024c62-etc-ovs\") pod \"ovn-controller-ovs-vmd6x\" (UID: \"395c2af4-48dc-44d3-bb74-ef2b3e024c62\") " pod="openstack/ovn-controller-ovs-vmd6x" Feb 16 23:02:34 crc kubenswrapper[4865]: I0216 23:02:34.238915 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/395c2af4-48dc-44d3-bb74-ef2b3e024c62-var-lib\") pod \"ovn-controller-ovs-vmd6x\" (UID: \"395c2af4-48dc-44d3-bb74-ef2b3e024c62\") " pod="openstack/ovn-controller-ovs-vmd6x" Feb 16 23:02:34 crc kubenswrapper[4865]: I0216 23:02:34.238945 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/abf5edf2-8442-4aca-b35b-051b9f366b9a-ovn-controller-tls-certs\") pod \"ovn-controller-plt5q\" (UID: \"abf5edf2-8442-4aca-b35b-051b9f366b9a\") " pod="openstack/ovn-controller-plt5q" Feb 16 23:02:34 crc kubenswrapper[4865]: I0216 23:02:34.238970 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/395c2af4-48dc-44d3-bb74-ef2b3e024c62-var-run\") pod \"ovn-controller-ovs-vmd6x\" (UID: \"395c2af4-48dc-44d3-bb74-ef2b3e024c62\") " pod="openstack/ovn-controller-ovs-vmd6x" Feb 16 23:02:34 crc kubenswrapper[4865]: I0216 23:02:34.241535 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/395c2af4-48dc-44d3-bb74-ef2b3e024c62-scripts\") pod \"ovn-controller-ovs-vmd6x\" (UID: \"395c2af4-48dc-44d3-bb74-ef2b3e024c62\") " pod="openstack/ovn-controller-ovs-vmd6x" Feb 16 23:02:34 crc kubenswrapper[4865]: I0216 23:02:34.242322 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/395c2af4-48dc-44d3-bb74-ef2b3e024c62-var-run\") pod \"ovn-controller-ovs-vmd6x\" (UID: \"395c2af4-48dc-44d3-bb74-ef2b3e024c62\") " pod="openstack/ovn-controller-ovs-vmd6x" Feb 16 23:02:34 crc kubenswrapper[4865]: I0216 23:02:34.242563 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/abf5edf2-8442-4aca-b35b-051b9f366b9a-var-log-ovn\") pod \"ovn-controller-plt5q\" (UID: \"abf5edf2-8442-4aca-b35b-051b9f366b9a\") " pod="openstack/ovn-controller-plt5q" Feb 16 23:02:34 crc kubenswrapper[4865]: I0216 23:02:34.243461 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/abf5edf2-8442-4aca-b35b-051b9f366b9a-var-run\") pod \"ovn-controller-plt5q\" (UID: \"abf5edf2-8442-4aca-b35b-051b9f366b9a\") " pod="openstack/ovn-controller-plt5q" Feb 16 23:02:34 crc kubenswrapper[4865]: I0216 23:02:34.244003 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/abf5edf2-8442-4aca-b35b-051b9f366b9a-scripts\") pod \"ovn-controller-plt5q\" (UID: \"abf5edf2-8442-4aca-b35b-051b9f366b9a\") " pod="openstack/ovn-controller-plt5q" Feb 16 23:02:34 crc 
kubenswrapper[4865]: I0216 23:02:34.244441 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/395c2af4-48dc-44d3-bb74-ef2b3e024c62-var-log\") pod \"ovn-controller-ovs-vmd6x\" (UID: \"395c2af4-48dc-44d3-bb74-ef2b3e024c62\") " pod="openstack/ovn-controller-ovs-vmd6x" Feb 16 23:02:34 crc kubenswrapper[4865]: I0216 23:02:34.244466 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/395c2af4-48dc-44d3-bb74-ef2b3e024c62-etc-ovs\") pod \"ovn-controller-ovs-vmd6x\" (UID: \"395c2af4-48dc-44d3-bb74-ef2b3e024c62\") " pod="openstack/ovn-controller-ovs-vmd6x" Feb 16 23:02:34 crc kubenswrapper[4865]: I0216 23:02:34.244612 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/395c2af4-48dc-44d3-bb74-ef2b3e024c62-var-lib\") pod \"ovn-controller-ovs-vmd6x\" (UID: \"395c2af4-48dc-44d3-bb74-ef2b3e024c62\") " pod="openstack/ovn-controller-ovs-vmd6x" Feb 16 23:02:34 crc kubenswrapper[4865]: I0216 23:02:34.244735 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/abf5edf2-8442-4aca-b35b-051b9f366b9a-var-run-ovn\") pod \"ovn-controller-plt5q\" (UID: \"abf5edf2-8442-4aca-b35b-051b9f366b9a\") " pod="openstack/ovn-controller-plt5q" Feb 16 23:02:34 crc kubenswrapper[4865]: I0216 23:02:34.256220 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abf5edf2-8442-4aca-b35b-051b9f366b9a-combined-ca-bundle\") pod \"ovn-controller-plt5q\" (UID: \"abf5edf2-8442-4aca-b35b-051b9f366b9a\") " pod="openstack/ovn-controller-plt5q" Feb 16 23:02:34 crc kubenswrapper[4865]: I0216 23:02:34.256897 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/abf5edf2-8442-4aca-b35b-051b9f366b9a-ovn-controller-tls-certs\") pod \"ovn-controller-plt5q\" (UID: \"abf5edf2-8442-4aca-b35b-051b9f366b9a\") " pod="openstack/ovn-controller-plt5q" Feb 16 23:02:34 crc kubenswrapper[4865]: I0216 23:02:34.270970 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qc5xw\" (UniqueName: \"kubernetes.io/projected/abf5edf2-8442-4aca-b35b-051b9f366b9a-kube-api-access-qc5xw\") pod \"ovn-controller-plt5q\" (UID: \"abf5edf2-8442-4aca-b35b-051b9f366b9a\") " pod="openstack/ovn-controller-plt5q" Feb 16 23:02:34 crc kubenswrapper[4865]: I0216 23:02:34.278939 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2gwq\" (UniqueName: \"kubernetes.io/projected/395c2af4-48dc-44d3-bb74-ef2b3e024c62-kube-api-access-j2gwq\") pod \"ovn-controller-ovs-vmd6x\" (UID: \"395c2af4-48dc-44d3-bb74-ef2b3e024c62\") " pod="openstack/ovn-controller-ovs-vmd6x" Feb 16 23:02:34 crc kubenswrapper[4865]: I0216 23:02:34.374732 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-plt5q" Feb 16 23:02:34 crc kubenswrapper[4865]: I0216 23:02:34.426629 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-vmd6x" Feb 16 23:02:35 crc kubenswrapper[4865]: I0216 23:02:35.018774 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 16 23:02:35 crc kubenswrapper[4865]: I0216 23:02:35.020518 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 16 23:02:35 crc kubenswrapper[4865]: I0216 23:02:35.022337 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 16 23:02:35 crc kubenswrapper[4865]: I0216 23:02:35.022615 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-wv4m6" Feb 16 23:02:35 crc kubenswrapper[4865]: I0216 23:02:35.023726 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 16 23:02:35 crc kubenswrapper[4865]: I0216 23:02:35.025008 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 16 23:02:35 crc kubenswrapper[4865]: I0216 23:02:35.028239 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 16 23:02:35 crc kubenswrapper[4865]: I0216 23:02:35.034970 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 16 23:02:35 crc kubenswrapper[4865]: I0216 23:02:35.051448 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/99094c44-3d04-4263-a6b7-efc49f5e0fa2-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"99094c44-3d04-4263-a6b7-efc49f5e0fa2\") " pod="openstack/ovsdbserver-nb-0" Feb 16 23:02:35 crc kubenswrapper[4865]: I0216 23:02:35.051562 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/99094c44-3d04-4263-a6b7-efc49f5e0fa2-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"99094c44-3d04-4263-a6b7-efc49f5e0fa2\") " pod="openstack/ovsdbserver-nb-0" Feb 16 23:02:35 crc kubenswrapper[4865]: I0216 23:02:35.051602 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/99094c44-3d04-4263-a6b7-efc49f5e0fa2-config\") pod \"ovsdbserver-nb-0\" (UID: \"99094c44-3d04-4263-a6b7-efc49f5e0fa2\") " pod="openstack/ovsdbserver-nb-0" Feb 16 23:02:35 crc kubenswrapper[4865]: I0216 23:02:35.051630 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99094c44-3d04-4263-a6b7-efc49f5e0fa2-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"99094c44-3d04-4263-a6b7-efc49f5e0fa2\") " pod="openstack/ovsdbserver-nb-0" Feb 16 23:02:35 crc kubenswrapper[4865]: I0216 23:02:35.051658 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"99094c44-3d04-4263-a6b7-efc49f5e0fa2\") " pod="openstack/ovsdbserver-nb-0" Feb 16 23:02:35 crc kubenswrapper[4865]: I0216 23:02:35.051681 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/99094c44-3d04-4263-a6b7-efc49f5e0fa2-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"99094c44-3d04-4263-a6b7-efc49f5e0fa2\") " pod="openstack/ovsdbserver-nb-0" Feb 16 23:02:35 crc kubenswrapper[4865]: I0216 23:02:35.051711 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/99094c44-3d04-4263-a6b7-efc49f5e0fa2-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"99094c44-3d04-4263-a6b7-efc49f5e0fa2\") " pod="openstack/ovsdbserver-nb-0" Feb 16 23:02:35 crc kubenswrapper[4865]: I0216 23:02:35.051745 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-648gs\" (UniqueName: 
\"kubernetes.io/projected/99094c44-3d04-4263-a6b7-efc49f5e0fa2-kube-api-access-648gs\") pod \"ovsdbserver-nb-0\" (UID: \"99094c44-3d04-4263-a6b7-efc49f5e0fa2\") " pod="openstack/ovsdbserver-nb-0" Feb 16 23:02:35 crc kubenswrapper[4865]: I0216 23:02:35.153650 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/99094c44-3d04-4263-a6b7-efc49f5e0fa2-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"99094c44-3d04-4263-a6b7-efc49f5e0fa2\") " pod="openstack/ovsdbserver-nb-0" Feb 16 23:02:35 crc kubenswrapper[4865]: I0216 23:02:35.153730 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/99094c44-3d04-4263-a6b7-efc49f5e0fa2-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"99094c44-3d04-4263-a6b7-efc49f5e0fa2\") " pod="openstack/ovsdbserver-nb-0" Feb 16 23:02:35 crc kubenswrapper[4865]: I0216 23:02:35.153762 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99094c44-3d04-4263-a6b7-efc49f5e0fa2-config\") pod \"ovsdbserver-nb-0\" (UID: \"99094c44-3d04-4263-a6b7-efc49f5e0fa2\") " pod="openstack/ovsdbserver-nb-0" Feb 16 23:02:35 crc kubenswrapper[4865]: I0216 23:02:35.153783 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99094c44-3d04-4263-a6b7-efc49f5e0fa2-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"99094c44-3d04-4263-a6b7-efc49f5e0fa2\") " pod="openstack/ovsdbserver-nb-0" Feb 16 23:02:35 crc kubenswrapper[4865]: I0216 23:02:35.153811 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"99094c44-3d04-4263-a6b7-efc49f5e0fa2\") " pod="openstack/ovsdbserver-nb-0" Feb 16 
23:02:35 crc kubenswrapper[4865]: I0216 23:02:35.153830 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/99094c44-3d04-4263-a6b7-efc49f5e0fa2-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"99094c44-3d04-4263-a6b7-efc49f5e0fa2\") " pod="openstack/ovsdbserver-nb-0" Feb 16 23:02:35 crc kubenswrapper[4865]: I0216 23:02:35.153854 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/99094c44-3d04-4263-a6b7-efc49f5e0fa2-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"99094c44-3d04-4263-a6b7-efc49f5e0fa2\") " pod="openstack/ovsdbserver-nb-0" Feb 16 23:02:35 crc kubenswrapper[4865]: I0216 23:02:35.153875 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-648gs\" (UniqueName: \"kubernetes.io/projected/99094c44-3d04-4263-a6b7-efc49f5e0fa2-kube-api-access-648gs\") pod \"ovsdbserver-nb-0\" (UID: \"99094c44-3d04-4263-a6b7-efc49f5e0fa2\") " pod="openstack/ovsdbserver-nb-0" Feb 16 23:02:35 crc kubenswrapper[4865]: I0216 23:02:35.157442 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/99094c44-3d04-4263-a6b7-efc49f5e0fa2-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"99094c44-3d04-4263-a6b7-efc49f5e0fa2\") " pod="openstack/ovsdbserver-nb-0" Feb 16 23:02:35 crc kubenswrapper[4865]: I0216 23:02:35.157848 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99094c44-3d04-4263-a6b7-efc49f5e0fa2-config\") pod \"ovsdbserver-nb-0\" (UID: \"99094c44-3d04-4263-a6b7-efc49f5e0fa2\") " pod="openstack/ovsdbserver-nb-0" Feb 16 23:02:35 crc kubenswrapper[4865]: I0216 23:02:35.158383 4865 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"99094c44-3d04-4263-a6b7-efc49f5e0fa2\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/ovsdbserver-nb-0" Feb 16 23:02:35 crc kubenswrapper[4865]: I0216 23:02:35.158439 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/99094c44-3d04-4263-a6b7-efc49f5e0fa2-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"99094c44-3d04-4263-a6b7-efc49f5e0fa2\") " pod="openstack/ovsdbserver-nb-0" Feb 16 23:02:35 crc kubenswrapper[4865]: I0216 23:02:35.161173 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/99094c44-3d04-4263-a6b7-efc49f5e0fa2-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"99094c44-3d04-4263-a6b7-efc49f5e0fa2\") " pod="openstack/ovsdbserver-nb-0" Feb 16 23:02:35 crc kubenswrapper[4865]: I0216 23:02:35.161836 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99094c44-3d04-4263-a6b7-efc49f5e0fa2-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"99094c44-3d04-4263-a6b7-efc49f5e0fa2\") " pod="openstack/ovsdbserver-nb-0" Feb 16 23:02:35 crc kubenswrapper[4865]: I0216 23:02:35.163489 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/99094c44-3d04-4263-a6b7-efc49f5e0fa2-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"99094c44-3d04-4263-a6b7-efc49f5e0fa2\") " pod="openstack/ovsdbserver-nb-0" Feb 16 23:02:35 crc kubenswrapper[4865]: I0216 23:02:35.179130 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-648gs\" (UniqueName: \"kubernetes.io/projected/99094c44-3d04-4263-a6b7-efc49f5e0fa2-kube-api-access-648gs\") pod \"ovsdbserver-nb-0\" (UID: \"99094c44-3d04-4263-a6b7-efc49f5e0fa2\") " 
pod="openstack/ovsdbserver-nb-0" Feb 16 23:02:35 crc kubenswrapper[4865]: I0216 23:02:35.186051 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"99094c44-3d04-4263-a6b7-efc49f5e0fa2\") " pod="openstack/ovsdbserver-nb-0" Feb 16 23:02:35 crc kubenswrapper[4865]: I0216 23:02:35.384870 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 16 23:02:38 crc kubenswrapper[4865]: I0216 23:02:38.031684 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 16 23:02:38 crc kubenswrapper[4865]: I0216 23:02:38.033783 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 16 23:02:38 crc kubenswrapper[4865]: I0216 23:02:38.037536 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-xkhq6" Feb 16 23:02:38 crc kubenswrapper[4865]: I0216 23:02:38.037596 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 16 23:02:38 crc kubenswrapper[4865]: I0216 23:02:38.037624 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 16 23:02:38 crc kubenswrapper[4865]: I0216 23:02:38.037533 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 16 23:02:38 crc kubenswrapper[4865]: I0216 23:02:38.051548 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 16 23:02:38 crc kubenswrapper[4865]: I0216 23:02:38.205866 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ae2d74d5-cebc-4243-a288-d6d901192de7-ovsdb-rundir\") pod 
\"ovsdbserver-sb-0\" (UID: \"ae2d74d5-cebc-4243-a288-d6d901192de7\") " pod="openstack/ovsdbserver-sb-0" Feb 16 23:02:38 crc kubenswrapper[4865]: I0216 23:02:38.206156 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ae2d74d5-cebc-4243-a288-d6d901192de7-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"ae2d74d5-cebc-4243-a288-d6d901192de7\") " pod="openstack/ovsdbserver-sb-0" Feb 16 23:02:38 crc kubenswrapper[4865]: I0216 23:02:38.206618 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae2d74d5-cebc-4243-a288-d6d901192de7-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"ae2d74d5-cebc-4243-a288-d6d901192de7\") " pod="openstack/ovsdbserver-sb-0" Feb 16 23:02:38 crc kubenswrapper[4865]: I0216 23:02:38.206834 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"ae2d74d5-cebc-4243-a288-d6d901192de7\") " pod="openstack/ovsdbserver-sb-0" Feb 16 23:02:38 crc kubenswrapper[4865]: I0216 23:02:38.206977 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae2d74d5-cebc-4243-a288-d6d901192de7-config\") pod \"ovsdbserver-sb-0\" (UID: \"ae2d74d5-cebc-4243-a288-d6d901192de7\") " pod="openstack/ovsdbserver-sb-0" Feb 16 23:02:38 crc kubenswrapper[4865]: I0216 23:02:38.207093 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae2d74d5-cebc-4243-a288-d6d901192de7-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"ae2d74d5-cebc-4243-a288-d6d901192de7\") " pod="openstack/ovsdbserver-sb-0" Feb 16 
23:02:38 crc kubenswrapper[4865]: I0216 23:02:38.207188 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae2d74d5-cebc-4243-a288-d6d901192de7-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"ae2d74d5-cebc-4243-a288-d6d901192de7\") " pod="openstack/ovsdbserver-sb-0" Feb 16 23:02:38 crc kubenswrapper[4865]: I0216 23:02:38.207322 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdf99\" (UniqueName: \"kubernetes.io/projected/ae2d74d5-cebc-4243-a288-d6d901192de7-kube-api-access-jdf99\") pod \"ovsdbserver-sb-0\" (UID: \"ae2d74d5-cebc-4243-a288-d6d901192de7\") " pod="openstack/ovsdbserver-sb-0" Feb 16 23:02:38 crc kubenswrapper[4865]: I0216 23:02:38.309388 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ae2d74d5-cebc-4243-a288-d6d901192de7-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"ae2d74d5-cebc-4243-a288-d6d901192de7\") " pod="openstack/ovsdbserver-sb-0" Feb 16 23:02:38 crc kubenswrapper[4865]: I0216 23:02:38.309477 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae2d74d5-cebc-4243-a288-d6d901192de7-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"ae2d74d5-cebc-4243-a288-d6d901192de7\") " pod="openstack/ovsdbserver-sb-0" Feb 16 23:02:38 crc kubenswrapper[4865]: I0216 23:02:38.309514 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"ae2d74d5-cebc-4243-a288-d6d901192de7\") " pod="openstack/ovsdbserver-sb-0" Feb 16 23:02:38 crc kubenswrapper[4865]: I0216 23:02:38.309540 4865 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae2d74d5-cebc-4243-a288-d6d901192de7-config\") pod \"ovsdbserver-sb-0\" (UID: \"ae2d74d5-cebc-4243-a288-d6d901192de7\") " pod="openstack/ovsdbserver-sb-0" Feb 16 23:02:38 crc kubenswrapper[4865]: I0216 23:02:38.309564 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae2d74d5-cebc-4243-a288-d6d901192de7-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"ae2d74d5-cebc-4243-a288-d6d901192de7\") " pod="openstack/ovsdbserver-sb-0" Feb 16 23:02:38 crc kubenswrapper[4865]: I0216 23:02:38.309588 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae2d74d5-cebc-4243-a288-d6d901192de7-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"ae2d74d5-cebc-4243-a288-d6d901192de7\") " pod="openstack/ovsdbserver-sb-0" Feb 16 23:02:38 crc kubenswrapper[4865]: I0216 23:02:38.309609 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdf99\" (UniqueName: \"kubernetes.io/projected/ae2d74d5-cebc-4243-a288-d6d901192de7-kube-api-access-jdf99\") pod \"ovsdbserver-sb-0\" (UID: \"ae2d74d5-cebc-4243-a288-d6d901192de7\") " pod="openstack/ovsdbserver-sb-0" Feb 16 23:02:38 crc kubenswrapper[4865]: I0216 23:02:38.309634 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ae2d74d5-cebc-4243-a288-d6d901192de7-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"ae2d74d5-cebc-4243-a288-d6d901192de7\") " pod="openstack/ovsdbserver-sb-0" Feb 16 23:02:38 crc kubenswrapper[4865]: I0216 23:02:38.310108 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ae2d74d5-cebc-4243-a288-d6d901192de7-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: 
\"ae2d74d5-cebc-4243-a288-d6d901192de7\") " pod="openstack/ovsdbserver-sb-0" Feb 16 23:02:38 crc kubenswrapper[4865]: I0216 23:02:38.311838 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ae2d74d5-cebc-4243-a288-d6d901192de7-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"ae2d74d5-cebc-4243-a288-d6d901192de7\") " pod="openstack/ovsdbserver-sb-0" Feb 16 23:02:38 crc kubenswrapper[4865]: I0216 23:02:38.316817 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae2d74d5-cebc-4243-a288-d6d901192de7-config\") pod \"ovsdbserver-sb-0\" (UID: \"ae2d74d5-cebc-4243-a288-d6d901192de7\") " pod="openstack/ovsdbserver-sb-0" Feb 16 23:02:38 crc kubenswrapper[4865]: I0216 23:02:38.317044 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae2d74d5-cebc-4243-a288-d6d901192de7-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"ae2d74d5-cebc-4243-a288-d6d901192de7\") " pod="openstack/ovsdbserver-sb-0" Feb 16 23:02:38 crc kubenswrapper[4865]: I0216 23:02:38.317615 4865 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"ae2d74d5-cebc-4243-a288-d6d901192de7\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/ovsdbserver-sb-0" Feb 16 23:02:38 crc kubenswrapper[4865]: I0216 23:02:38.318555 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae2d74d5-cebc-4243-a288-d6d901192de7-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"ae2d74d5-cebc-4243-a288-d6d901192de7\") " pod="openstack/ovsdbserver-sb-0" Feb 16 23:02:38 crc kubenswrapper[4865]: I0216 23:02:38.341819 4865 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae2d74d5-cebc-4243-a288-d6d901192de7-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"ae2d74d5-cebc-4243-a288-d6d901192de7\") " pod="openstack/ovsdbserver-sb-0" Feb 16 23:02:38 crc kubenswrapper[4865]: I0216 23:02:38.343512 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"ae2d74d5-cebc-4243-a288-d6d901192de7\") " pod="openstack/ovsdbserver-sb-0" Feb 16 23:02:38 crc kubenswrapper[4865]: I0216 23:02:38.355523 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdf99\" (UniqueName: \"kubernetes.io/projected/ae2d74d5-cebc-4243-a288-d6d901192de7-kube-api-access-jdf99\") pod \"ovsdbserver-sb-0\" (UID: \"ae2d74d5-cebc-4243-a288-d6d901192de7\") " pod="openstack/ovsdbserver-sb-0" Feb 16 23:02:38 crc kubenswrapper[4865]: I0216 23:02:38.356195 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 16 23:02:45 crc kubenswrapper[4865]: E0216 23:02:45.169566 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 16 23:02:45 crc kubenswrapper[4865]: E0216 23:02:45.178110 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rzd85,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityCont
ext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-sj5b5_openstack(46f8a691-42b2-47f6-9bf1-034f21f8e91c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 23:02:45 crc kubenswrapper[4865]: E0216 23:02:45.180578 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-sj5b5" podUID="46f8a691-42b2-47f6-9bf1-034f21f8e91c" Feb 16 23:02:45 crc kubenswrapper[4865]: E0216 23:02:45.199438 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 16 23:02:45 crc kubenswrapper[4865]: E0216 23:02:45.199734 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-blgkh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-vkxrv_openstack(ccecb94b-ea2b-453b-8376-5dff86492ee5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 23:02:45 crc kubenswrapper[4865]: E0216 23:02:45.200919 4865 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-vkxrv" podUID="ccecb94b-ea2b-453b-8376-5dff86492ee5" Feb 16 23:02:45 crc kubenswrapper[4865]: E0216 23:02:45.232974 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 16 23:02:45 crc kubenswrapper[4865]: E0216 23:02:45.233244 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zgb6g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-n4hbr_openstack(1a65950b-74f6-4519-a835-53c4a1ea0189): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 23:02:45 crc kubenswrapper[4865]: E0216 23:02:45.235461 4865 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-n4hbr" podUID="1a65950b-74f6-4519-a835-53c4a1ea0189" Feb 16 23:02:45 crc kubenswrapper[4865]: E0216 23:02:45.727168 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-vkxrv" podUID="ccecb94b-ea2b-453b-8376-5dff86492ee5" Feb 16 23:02:45 crc kubenswrapper[4865]: E0216 23:02:45.731606 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-n4hbr" podUID="1a65950b-74f6-4519-a835-53c4a1ea0189" Feb 16 23:02:47 crc kubenswrapper[4865]: E0216 23:02:47.815084 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 16 23:02:47 crc kubenswrapper[4865]: E0216 23:02:47.816147 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sl4kp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-mgvsj_openstack(fba4661a-c5f3-4822-ad11-3b755cbdac28): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 23:02:47 crc kubenswrapper[4865]: E0216 23:02:47.817306 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-mgvsj" podUID="fba4661a-c5f3-4822-ad11-3b755cbdac28" Feb 16 23:02:47 crc kubenswrapper[4865]: I0216 23:02:47.942019 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-sj5b5" Feb 16 23:02:48 crc kubenswrapper[4865]: I0216 23:02:48.064149 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46f8a691-42b2-47f6-9bf1-034f21f8e91c-config\") pod \"46f8a691-42b2-47f6-9bf1-034f21f8e91c\" (UID: \"46f8a691-42b2-47f6-9bf1-034f21f8e91c\") " Feb 16 23:02:48 crc kubenswrapper[4865]: I0216 23:02:48.064321 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46f8a691-42b2-47f6-9bf1-034f21f8e91c-dns-svc\") pod \"46f8a691-42b2-47f6-9bf1-034f21f8e91c\" (UID: \"46f8a691-42b2-47f6-9bf1-034f21f8e91c\") " Feb 16 23:02:48 crc kubenswrapper[4865]: I0216 23:02:48.064457 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzd85\" (UniqueName: \"kubernetes.io/projected/46f8a691-42b2-47f6-9bf1-034f21f8e91c-kube-api-access-rzd85\") pod \"46f8a691-42b2-47f6-9bf1-034f21f8e91c\" (UID: \"46f8a691-42b2-47f6-9bf1-034f21f8e91c\") " Feb 16 23:02:48 crc kubenswrapper[4865]: I0216 23:02:48.066567 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46f8a691-42b2-47f6-9bf1-034f21f8e91c-config" (OuterVolumeSpecName: "config") pod "46f8a691-42b2-47f6-9bf1-034f21f8e91c" (UID: "46f8a691-42b2-47f6-9bf1-034f21f8e91c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:02:48 crc kubenswrapper[4865]: I0216 23:02:48.066594 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46f8a691-42b2-47f6-9bf1-034f21f8e91c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "46f8a691-42b2-47f6-9bf1-034f21f8e91c" (UID: "46f8a691-42b2-47f6-9bf1-034f21f8e91c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:02:48 crc kubenswrapper[4865]: I0216 23:02:48.076891 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46f8a691-42b2-47f6-9bf1-034f21f8e91c-kube-api-access-rzd85" (OuterVolumeSpecName: "kube-api-access-rzd85") pod "46f8a691-42b2-47f6-9bf1-034f21f8e91c" (UID: "46f8a691-42b2-47f6-9bf1-034f21f8e91c"). InnerVolumeSpecName "kube-api-access-rzd85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:02:48 crc kubenswrapper[4865]: I0216 23:02:48.168131 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzd85\" (UniqueName: \"kubernetes.io/projected/46f8a691-42b2-47f6-9bf1-034f21f8e91c-kube-api-access-rzd85\") on node \"crc\" DevicePath \"\"" Feb 16 23:02:48 crc kubenswrapper[4865]: I0216 23:02:48.168183 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46f8a691-42b2-47f6-9bf1-034f21f8e91c-config\") on node \"crc\" DevicePath \"\"" Feb 16 23:02:48 crc kubenswrapper[4865]: I0216 23:02:48.168198 4865 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46f8a691-42b2-47f6-9bf1-034f21f8e91c-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 23:02:48 crc kubenswrapper[4865]: I0216 23:02:48.467989 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 16 23:02:48 crc kubenswrapper[4865]: I0216 23:02:48.509839 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/ovn-controller-plt5q"] Feb 16 23:02:48 crc kubenswrapper[4865]: I0216 23:02:48.518648 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 16 23:02:48 crc kubenswrapper[4865]: I0216 23:02:48.607335 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 16 23:02:48 crc kubenswrapper[4865]: W0216 23:02:48.619987 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod99094c44_3d04_4263_a6b7_efc49f5e0fa2.slice/crio-99d41e867623be19da77cdbd55e4101c66f514e2a16c764fe4f31da63917d855 WatchSource:0}: Error finding container 99d41e867623be19da77cdbd55e4101c66f514e2a16c764fe4f31da63917d855: Status 404 returned error can't find the container with id 99d41e867623be19da77cdbd55e4101c66f514e2a16c764fe4f31da63917d855 Feb 16 23:02:48 crc kubenswrapper[4865]: I0216 23:02:48.707834 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 23:02:48 crc kubenswrapper[4865]: W0216 23:02:48.708563 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podabde8904_7323_4a9a_bad8_bc4993889ea7.slice/crio-58d5c585402a0530784b2979e9804e849e88768f50a6cea174293592b84b466a WatchSource:0}: Error finding container 58d5c585402a0530784b2979e9804e849e88768f50a6cea174293592b84b466a: Status 404 returned error can't find the container with id 58d5c585402a0530784b2979e9804e849e88768f50a6cea174293592b84b466a Feb 16 23:02:48 crc kubenswrapper[4865]: I0216 23:02:48.758668 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-sj5b5" Feb 16 23:02:48 crc kubenswrapper[4865]: I0216 23:02:48.758688 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-sj5b5" event={"ID":"46f8a691-42b2-47f6-9bf1-034f21f8e91c","Type":"ContainerDied","Data":"1c00a0039abd34dbcdabb6ac412944d44e77957bb22f02e066df1715636acefe"} Feb 16 23:02:48 crc kubenswrapper[4865]: I0216 23:02:48.761425 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"5568f4b1-9ca1-4de9-9355-ffc7b0281375","Type":"ContainerStarted","Data":"1716d96bbb2d55c6f7957f0eef129c82d6395dc23f728610e17794561c7f4583"} Feb 16 23:02:48 crc kubenswrapper[4865]: I0216 23:02:48.763453 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"0ba121bd-0fd3-46b5-b719-f113e7afc99c","Type":"ContainerStarted","Data":"d42b7b57d8dfda8b9c7f27d1b87da15cfeb54b9ed6aeb45fb332fb0403d01e60"} Feb 16 23:02:48 crc kubenswrapper[4865]: I0216 23:02:48.767697 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"59ff0541-9e7b-4f6e-8dbb-af16f656abeb","Type":"ContainerStarted","Data":"7237799c4b164a98ba12f0770d9317c83a15a036b741e66f004af6dabda14ea5"} Feb 16 23:02:48 crc kubenswrapper[4865]: I0216 23:02:48.768970 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-plt5q" event={"ID":"abf5edf2-8442-4aca-b35b-051b9f366b9a","Type":"ContainerStarted","Data":"05a9a86404a6f28d503caa5e09e788a8c72ef7add85602b5fee65ae0606f474e"} Feb 16 23:02:48 crc kubenswrapper[4865]: I0216 23:02:48.773016 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"99094c44-3d04-4263-a6b7-efc49f5e0fa2","Type":"ContainerStarted","Data":"99d41e867623be19da77cdbd55e4101c66f514e2a16c764fe4f31da63917d855"} Feb 16 23:02:48 crc kubenswrapper[4865]: I0216 23:02:48.774481 4865 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"abde8904-7323-4a9a-bad8-bc4993889ea7","Type":"ContainerStarted","Data":"58d5c585402a0530784b2979e9804e849e88768f50a6cea174293592b84b466a"} Feb 16 23:02:48 crc kubenswrapper[4865]: I0216 23:02:48.816763 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 16 23:02:48 crc kubenswrapper[4865]: W0216 23:02:48.870956 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae2d74d5_cebc_4243_a288_d6d901192de7.slice/crio-8c4005b6707eae97acf2c7c3327fecc287647f0cef3e2a4ba830193dc2eaf696 WatchSource:0}: Error finding container 8c4005b6707eae97acf2c7c3327fecc287647f0cef3e2a4ba830193dc2eaf696: Status 404 returned error can't find the container with id 8c4005b6707eae97acf2c7c3327fecc287647f0cef3e2a4ba830193dc2eaf696 Feb 16 23:02:49 crc kubenswrapper[4865]: I0216 23:02:49.322791 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-mgvsj" Feb 16 23:02:49 crc kubenswrapper[4865]: I0216 23:02:49.347614 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-sj5b5"] Feb 16 23:02:49 crc kubenswrapper[4865]: I0216 23:02:49.377542 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-sj5b5"] Feb 16 23:02:49 crc kubenswrapper[4865]: I0216 23:02:49.398738 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fba4661a-c5f3-4822-ad11-3b755cbdac28-config\") pod \"fba4661a-c5f3-4822-ad11-3b755cbdac28\" (UID: \"fba4661a-c5f3-4822-ad11-3b755cbdac28\") " Feb 16 23:02:49 crc kubenswrapper[4865]: I0216 23:02:49.398932 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sl4kp\" (UniqueName: \"kubernetes.io/projected/fba4661a-c5f3-4822-ad11-3b755cbdac28-kube-api-access-sl4kp\") pod \"fba4661a-c5f3-4822-ad11-3b755cbdac28\" (UID: \"fba4661a-c5f3-4822-ad11-3b755cbdac28\") " Feb 16 23:02:49 crc kubenswrapper[4865]: I0216 23:02:49.399564 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fba4661a-c5f3-4822-ad11-3b755cbdac28-config" (OuterVolumeSpecName: "config") pod "fba4661a-c5f3-4822-ad11-3b755cbdac28" (UID: "fba4661a-c5f3-4822-ad11-3b755cbdac28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:02:49 crc kubenswrapper[4865]: I0216 23:02:49.400829 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fba4661a-c5f3-4822-ad11-3b755cbdac28-config\") on node \"crc\" DevicePath \"\"" Feb 16 23:02:49 crc kubenswrapper[4865]: I0216 23:02:49.468817 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fba4661a-c5f3-4822-ad11-3b755cbdac28-kube-api-access-sl4kp" (OuterVolumeSpecName: "kube-api-access-sl4kp") pod "fba4661a-c5f3-4822-ad11-3b755cbdac28" (UID: "fba4661a-c5f3-4822-ad11-3b755cbdac28"). InnerVolumeSpecName "kube-api-access-sl4kp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:02:49 crc kubenswrapper[4865]: I0216 23:02:49.502451 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sl4kp\" (UniqueName: \"kubernetes.io/projected/fba4661a-c5f3-4822-ad11-3b755cbdac28-kube-api-access-sl4kp\") on node \"crc\" DevicePath \"\"" Feb 16 23:02:49 crc kubenswrapper[4865]: I0216 23:02:49.641028 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-vmd6x"] Feb 16 23:02:49 crc kubenswrapper[4865]: I0216 23:02:49.785545 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9f530b91-ceff-467a-a146-60716412bbeb","Type":"ContainerStarted","Data":"8963e92573acdbc904854f796c85e27fe20f916a5436ff813ca16b10b69afaa3"} Feb 16 23:02:49 crc kubenswrapper[4865]: I0216 23:02:49.788234 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"17869fd2-4bd3-490c-be91-857d7cab1e73","Type":"ContainerStarted","Data":"ce23f91df3b645e3c619579ce46fec34711e865e6983a7989c7f9f247f88c0f1"} Feb 16 23:02:49 crc kubenswrapper[4865]: I0216 23:02:49.794534 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"59ff0541-9e7b-4f6e-8dbb-af16f656abeb","Type":"ContainerStarted","Data":"57a518e71ddbe36ee58f309ce116499d2e14bdfea73c12905ed3136f9b32d83d"} Feb 16 23:02:49 crc kubenswrapper[4865]: I0216 23:02:49.799054 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"ae2d74d5-cebc-4243-a288-d6d901192de7","Type":"ContainerStarted","Data":"8c4005b6707eae97acf2c7c3327fecc287647f0cef3e2a4ba830193dc2eaf696"} Feb 16 23:02:49 crc kubenswrapper[4865]: I0216 23:02:49.802938 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-mgvsj" Feb 16 23:02:49 crc kubenswrapper[4865]: I0216 23:02:49.802937 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-mgvsj" event={"ID":"fba4661a-c5f3-4822-ad11-3b755cbdac28","Type":"ContainerDied","Data":"a4852e91b0004bd24c60dabff7368c1c2fe57ebe36fa474305e42de5e45266f2"} Feb 16 23:02:49 crc kubenswrapper[4865]: I0216 23:02:49.821364 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-vmd6x" event={"ID":"395c2af4-48dc-44d3-bb74-ef2b3e024c62","Type":"ContainerStarted","Data":"f6f2da040b44ccb39a13ff014fd07b1943dcd0cba4b8918f67407048d51063cd"} Feb 16 23:02:49 crc kubenswrapper[4865]: I0216 23:02:49.933346 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-mgvsj"] Feb 16 23:02:49 crc kubenswrapper[4865]: I0216 23:02:49.933406 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-mgvsj"] Feb 16 23:02:50 crc kubenswrapper[4865]: I0216 23:02:50.436743 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46f8a691-42b2-47f6-9bf1-034f21f8e91c" path="/var/lib/kubelet/pods/46f8a691-42b2-47f6-9bf1-034f21f8e91c/volumes" Feb 16 23:02:50 crc kubenswrapper[4865]: I0216 23:02:50.437155 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="fba4661a-c5f3-4822-ad11-3b755cbdac28" path="/var/lib/kubelet/pods/fba4661a-c5f3-4822-ad11-3b755cbdac28/volumes" Feb 16 23:02:52 crc kubenswrapper[4865]: I0216 23:02:52.852562 4865 generic.go:334] "Generic (PLEG): container finished" podID="0ba121bd-0fd3-46b5-b719-f113e7afc99c" containerID="d42b7b57d8dfda8b9c7f27d1b87da15cfeb54b9ed6aeb45fb332fb0403d01e60" exitCode=0 Feb 16 23:02:52 crc kubenswrapper[4865]: I0216 23:02:52.852769 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"0ba121bd-0fd3-46b5-b719-f113e7afc99c","Type":"ContainerDied","Data":"d42b7b57d8dfda8b9c7f27d1b87da15cfeb54b9ed6aeb45fb332fb0403d01e60"} Feb 16 23:02:52 crc kubenswrapper[4865]: I0216 23:02:52.858688 4865 generic.go:334] "Generic (PLEG): container finished" podID="59ff0541-9e7b-4f6e-8dbb-af16f656abeb" containerID="57a518e71ddbe36ee58f309ce116499d2e14bdfea73c12905ed3136f9b32d83d" exitCode=0 Feb 16 23:02:52 crc kubenswrapper[4865]: I0216 23:02:52.858744 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"59ff0541-9e7b-4f6e-8dbb-af16f656abeb","Type":"ContainerDied","Data":"57a518e71ddbe36ee58f309ce116499d2e14bdfea73c12905ed3136f9b32d83d"} Feb 16 23:02:55 crc kubenswrapper[4865]: I0216 23:02:55.901143 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"abde8904-7323-4a9a-bad8-bc4993889ea7","Type":"ContainerStarted","Data":"88657c880965259a811ad50044725a70eca406824fed27b5e6fcdfbbe1ccaf69"} Feb 16 23:02:55 crc kubenswrapper[4865]: I0216 23:02:55.901775 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 16 23:02:55 crc kubenswrapper[4865]: I0216 23:02:55.903968 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" 
event={"ID":"ae2d74d5-cebc-4243-a288-d6d901192de7","Type":"ContainerStarted","Data":"a5325e4c5e58f6da5bca9e2817bd844e6694d8848e57fbf6c5ca4be5d25432f3"} Feb 16 23:02:55 crc kubenswrapper[4865]: I0216 23:02:55.906068 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"5568f4b1-9ca1-4de9-9355-ffc7b0281375","Type":"ContainerStarted","Data":"faee73087a1ac934755e5f1aea28a33f4efc703c08cea85e7670db22767c9558"} Feb 16 23:02:55 crc kubenswrapper[4865]: I0216 23:02:55.906222 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 16 23:02:55 crc kubenswrapper[4865]: I0216 23:02:55.909655 4865 generic.go:334] "Generic (PLEG): container finished" podID="395c2af4-48dc-44d3-bb74-ef2b3e024c62" containerID="39a781955a1f7be0b7aea378082858c730e5077c8b1c636d25a6fa7e6d4527d4" exitCode=0 Feb 16 23:02:55 crc kubenswrapper[4865]: I0216 23:02:55.909723 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-vmd6x" event={"ID":"395c2af4-48dc-44d3-bb74-ef2b3e024c62","Type":"ContainerDied","Data":"39a781955a1f7be0b7aea378082858c730e5077c8b1c636d25a6fa7e6d4527d4"} Feb 16 23:02:55 crc kubenswrapper[4865]: I0216 23:02:55.912854 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"0ba121bd-0fd3-46b5-b719-f113e7afc99c","Type":"ContainerStarted","Data":"d58648d33ca8ba86d86c6f6ea10c7afff0989f66c24b256e7f973a0bd950b36d"} Feb 16 23:02:55 crc kubenswrapper[4865]: I0216 23:02:55.918249 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"59ff0541-9e7b-4f6e-8dbb-af16f656abeb","Type":"ContainerStarted","Data":"589de41e65693bb160db9a1571c3ce11185f96ad1fdc56de6ca312f8fab9580d"} Feb 16 23:02:55 crc kubenswrapper[4865]: I0216 23:02:55.918829 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=18.797695841 
podStartE2EDuration="24.918801549s" podCreationTimestamp="2026-02-16 23:02:31 +0000 UTC" firstStartedPulling="2026-02-16 23:02:48.714187953 +0000 UTC m=+1009.037894914" lastFinishedPulling="2026-02-16 23:02:54.835293661 +0000 UTC m=+1015.159000622" observedRunningTime="2026-02-16 23:02:55.917015048 +0000 UTC m=+1016.240722049" watchObservedRunningTime="2026-02-16 23:02:55.918801549 +0000 UTC m=+1016.242508550" Feb 16 23:02:55 crc kubenswrapper[4865]: I0216 23:02:55.921152 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-plt5q" event={"ID":"abf5edf2-8442-4aca-b35b-051b9f366b9a","Type":"ContainerStarted","Data":"f8ae384d72082f5ac8ddc61d0ca7ec82dd94c363ac287ab47e132bf5382c879f"} Feb 16 23:02:55 crc kubenswrapper[4865]: I0216 23:02:55.921771 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-plt5q" Feb 16 23:02:55 crc kubenswrapper[4865]: I0216 23:02:55.927777 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"99094c44-3d04-4263-a6b7-efc49f5e0fa2","Type":"ContainerStarted","Data":"f3e15150b2eae6d9b29f5111bd1dde32ad8c09b43aeb4b35358b43ee854151ad"} Feb 16 23:02:55 crc kubenswrapper[4865]: I0216 23:02:55.971454 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=10.347750306 podStartE2EDuration="29.971429652s" podCreationTimestamp="2026-02-16 23:02:26 +0000 UTC" firstStartedPulling="2026-02-16 23:02:28.239066981 +0000 UTC m=+988.562773952" lastFinishedPulling="2026-02-16 23:02:47.862746337 +0000 UTC m=+1008.186453298" observedRunningTime="2026-02-16 23:02:55.946697355 +0000 UTC m=+1016.270404316" watchObservedRunningTime="2026-02-16 23:02:55.971429652 +0000 UTC m=+1016.295136613" Feb 16 23:02:56 crc kubenswrapper[4865]: I0216 23:02:56.096353 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" 
podStartSLOduration=29.096268672 podStartE2EDuration="29.096268672s" podCreationTimestamp="2026-02-16 23:02:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 23:02:56.095776138 +0000 UTC m=+1016.419483099" watchObservedRunningTime="2026-02-16 23:02:56.096268672 +0000 UTC m=+1016.419975633" Feb 16 23:02:56 crc kubenswrapper[4865]: I0216 23:02:56.097199 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=22.609723135 podStartE2EDuration="28.097194708s" podCreationTimestamp="2026-02-16 23:02:28 +0000 UTC" firstStartedPulling="2026-02-16 23:02:48.470560784 +0000 UTC m=+1008.794267755" lastFinishedPulling="2026-02-16 23:02:53.958032357 +0000 UTC m=+1014.281739328" observedRunningTime="2026-02-16 23:02:56.010418642 +0000 UTC m=+1016.334125603" watchObservedRunningTime="2026-02-16 23:02:56.097194708 +0000 UTC m=+1016.420901669" Feb 16 23:02:56 crc kubenswrapper[4865]: I0216 23:02:56.137827 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-plt5q" podStartSLOduration=16.587600031 podStartE2EDuration="22.137807213s" podCreationTimestamp="2026-02-16 23:02:34 +0000 UTC" firstStartedPulling="2026-02-16 23:02:48.514471602 +0000 UTC m=+1008.838178563" lastFinishedPulling="2026-02-16 23:02:54.064678774 +0000 UTC m=+1014.388385745" observedRunningTime="2026-02-16 23:02:56.129078467 +0000 UTC m=+1016.452785438" watchObservedRunningTime="2026-02-16 23:02:56.137807213 +0000 UTC m=+1016.461514174" Feb 16 23:02:56 crc kubenswrapper[4865]: I0216 23:02:56.941208 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-vmd6x" event={"ID":"395c2af4-48dc-44d3-bb74-ef2b3e024c62","Type":"ContainerStarted","Data":"d0737a439bb4c978984f0c7764f80a7ccd76f767500a7dabc5d100b612f2c303"} Feb 16 23:02:56 crc kubenswrapper[4865]: I0216 23:02:56.941715 
4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-vmd6x" event={"ID":"395c2af4-48dc-44d3-bb74-ef2b3e024c62","Type":"ContainerStarted","Data":"aa43c8962e63daa10c64f88302a33d798415a39ae73c77298887a4cdbd632789"} Feb 16 23:02:56 crc kubenswrapper[4865]: I0216 23:02:56.963482 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-vmd6x" podStartSLOduration=17.965564381 podStartE2EDuration="22.963464022s" podCreationTimestamp="2026-02-16 23:02:34 +0000 UTC" firstStartedPulling="2026-02-16 23:02:49.669564958 +0000 UTC m=+1009.993271919" lastFinishedPulling="2026-02-16 23:02:54.667464599 +0000 UTC m=+1014.991171560" observedRunningTime="2026-02-16 23:02:56.96124852 +0000 UTC m=+1017.284955491" watchObservedRunningTime="2026-02-16 23:02:56.963464022 +0000 UTC m=+1017.287170993" Feb 16 23:02:57 crc kubenswrapper[4865]: I0216 23:02:57.524698 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 16 23:02:57 crc kubenswrapper[4865]: I0216 23:02:57.525210 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 16 23:02:57 crc kubenswrapper[4865]: I0216 23:02:57.954972 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"99094c44-3d04-4263-a6b7-efc49f5e0fa2","Type":"ContainerStarted","Data":"8c0bf1ec451ad195491c6960793bdf112d5f53906460582f6b140d802c857d1d"} Feb 16 23:02:57 crc kubenswrapper[4865]: I0216 23:02:57.958798 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"ae2d74d5-cebc-4243-a288-d6d901192de7","Type":"ContainerStarted","Data":"7eef9324e2fa81683afbc900d4cd52ca3bf329f412d440ea4715334f3dd5a03c"} Feb 16 23:02:57 crc kubenswrapper[4865]: I0216 23:02:57.959168 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-vmd6x" Feb 16 
23:02:57 crc kubenswrapper[4865]: I0216 23:02:57.959206 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-vmd6x" Feb 16 23:02:57 crc kubenswrapper[4865]: I0216 23:02:57.996427 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=16.536085826 podStartE2EDuration="24.996397615s" podCreationTimestamp="2026-02-16 23:02:33 +0000 UTC" firstStartedPulling="2026-02-16 23:02:48.623469805 +0000 UTC m=+1008.947176766" lastFinishedPulling="2026-02-16 23:02:57.083781574 +0000 UTC m=+1017.407488555" observedRunningTime="2026-02-16 23:02:57.986851636 +0000 UTC m=+1018.310558657" watchObservedRunningTime="2026-02-16 23:02:57.996397615 +0000 UTC m=+1018.320104616" Feb 16 23:02:58 crc kubenswrapper[4865]: I0216 23:02:58.033997 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=12.818189339 podStartE2EDuration="21.033959674s" podCreationTimestamp="2026-02-16 23:02:37 +0000 UTC" firstStartedPulling="2026-02-16 23:02:48.875043338 +0000 UTC m=+1009.198750299" lastFinishedPulling="2026-02-16 23:02:57.090813633 +0000 UTC m=+1017.414520634" observedRunningTime="2026-02-16 23:02:58.028624323 +0000 UTC m=+1018.352331294" watchObservedRunningTime="2026-02-16 23:02:58.033959674 +0000 UTC m=+1018.357666675" Feb 16 23:02:58 crc kubenswrapper[4865]: I0216 23:02:58.356691 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 16 23:02:59 crc kubenswrapper[4865]: I0216 23:02:59.022418 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 16 23:02:59 crc kubenswrapper[4865]: I0216 23:02:59.022999 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 16 23:02:59 crc kubenswrapper[4865]: I0216 23:02:59.356728 4865 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 16 23:02:59 crc kubenswrapper[4865]: I0216 23:02:59.386714 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 16 23:02:59 crc kubenswrapper[4865]: I0216 23:02:59.416307 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 16 23:02:59 crc kubenswrapper[4865]: I0216 23:02:59.465932 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 16 23:02:59 crc kubenswrapper[4865]: I0216 23:02:59.695049 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 16 23:02:59 crc kubenswrapper[4865]: I0216 23:02:59.818560 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 16 23:02:59 crc kubenswrapper[4865]: I0216 23:02:59.983629 4865 generic.go:334] "Generic (PLEG): container finished" podID="ccecb94b-ea2b-453b-8376-5dff86492ee5" containerID="0954c903d54a2e8437374a3f2445f8ae8f3a52e40c9574dc3336950d4a5d8325" exitCode=0 Feb 16 23:02:59 crc kubenswrapper[4865]: I0216 23:02:59.983756 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-vkxrv" event={"ID":"ccecb94b-ea2b-453b-8376-5dff86492ee5","Type":"ContainerDied","Data":"0954c903d54a2e8437374a3f2445f8ae8f3a52e40c9574dc3336950d4a5d8325"} Feb 16 23:02:59 crc kubenswrapper[4865]: I0216 23:02:59.984101 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 16 23:03:00 crc kubenswrapper[4865]: I0216 23:03:00.460008 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 16 23:03:00 crc kubenswrapper[4865]: I0216 23:03:00.746711 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-57d769cc4f-vkxrv"] Feb 16 23:03:00 crc kubenswrapper[4865]: I0216 23:03:00.800631 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-w8bqw"] Feb 16 23:03:00 crc kubenswrapper[4865]: I0216 23:03:00.806919 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-w8bqw" Feb 16 23:03:00 crc kubenswrapper[4865]: I0216 23:03:00.810021 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 16 23:03:00 crc kubenswrapper[4865]: I0216 23:03:00.832330 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-cllwp"] Feb 16 23:03:00 crc kubenswrapper[4865]: I0216 23:03:00.845596 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-cllwp" Feb 16 23:03:00 crc kubenswrapper[4865]: I0216 23:03:00.852169 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 16 23:03:00 crc kubenswrapper[4865]: I0216 23:03:00.853032 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-w8bqw"] Feb 16 23:03:00 crc kubenswrapper[4865]: I0216 23:03:00.859917 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-cllwp"] Feb 16 23:03:00 crc kubenswrapper[4865]: I0216 23:03:00.878316 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2512ebf7-5fe8-410c-80bd-2568c4c54572-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-w8bqw\" (UID: \"2512ebf7-5fe8-410c-80bd-2568c4c54572\") " pod="openstack/dnsmasq-dns-7fd796d7df-w8bqw" Feb 16 23:03:00 crc kubenswrapper[4865]: I0216 23:03:00.878367 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/2512ebf7-5fe8-410c-80bd-2568c4c54572-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-w8bqw\" (UID: \"2512ebf7-5fe8-410c-80bd-2568c4c54572\") " pod="openstack/dnsmasq-dns-7fd796d7df-w8bqw" Feb 16 23:03:00 crc kubenswrapper[4865]: I0216 23:03:00.879044 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mf5fj\" (UniqueName: \"kubernetes.io/projected/2512ebf7-5fe8-410c-80bd-2568c4c54572-kube-api-access-mf5fj\") pod \"dnsmasq-dns-7fd796d7df-w8bqw\" (UID: \"2512ebf7-5fe8-410c-80bd-2568c4c54572\") " pod="openstack/dnsmasq-dns-7fd796d7df-w8bqw" Feb 16 23:03:00 crc kubenswrapper[4865]: I0216 23:03:00.879142 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2512ebf7-5fe8-410c-80bd-2568c4c54572-config\") pod \"dnsmasq-dns-7fd796d7df-w8bqw\" (UID: \"2512ebf7-5fe8-410c-80bd-2568c4c54572\") " pod="openstack/dnsmasq-dns-7fd796d7df-w8bqw" Feb 16 23:03:00 crc kubenswrapper[4865]: I0216 23:03:00.980626 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkp2r\" (UniqueName: \"kubernetes.io/projected/0934d4bc-f8b7-4fbb-9309-20826e6aa578-kube-api-access-zkp2r\") pod \"ovn-controller-metrics-cllwp\" (UID: \"0934d4bc-f8b7-4fbb-9309-20826e6aa578\") " pod="openstack/ovn-controller-metrics-cllwp" Feb 16 23:03:00 crc kubenswrapper[4865]: I0216 23:03:00.980680 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/0934d4bc-f8b7-4fbb-9309-20826e6aa578-ovn-rundir\") pod \"ovn-controller-metrics-cllwp\" (UID: \"0934d4bc-f8b7-4fbb-9309-20826e6aa578\") " pod="openstack/ovn-controller-metrics-cllwp" Feb 16 23:03:00 crc kubenswrapper[4865]: I0216 23:03:00.980713 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/0934d4bc-f8b7-4fbb-9309-20826e6aa578-config\") pod \"ovn-controller-metrics-cllwp\" (UID: \"0934d4bc-f8b7-4fbb-9309-20826e6aa578\") " pod="openstack/ovn-controller-metrics-cllwp" Feb 16 23:03:00 crc kubenswrapper[4865]: I0216 23:03:00.980757 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2512ebf7-5fe8-410c-80bd-2568c4c54572-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-w8bqw\" (UID: \"2512ebf7-5fe8-410c-80bd-2568c4c54572\") " pod="openstack/dnsmasq-dns-7fd796d7df-w8bqw" Feb 16 23:03:00 crc kubenswrapper[4865]: I0216 23:03:00.980775 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0934d4bc-f8b7-4fbb-9309-20826e6aa578-combined-ca-bundle\") pod \"ovn-controller-metrics-cllwp\" (UID: \"0934d4bc-f8b7-4fbb-9309-20826e6aa578\") " pod="openstack/ovn-controller-metrics-cllwp" Feb 16 23:03:00 crc kubenswrapper[4865]: I0216 23:03:00.980796 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2512ebf7-5fe8-410c-80bd-2568c4c54572-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-w8bqw\" (UID: \"2512ebf7-5fe8-410c-80bd-2568c4c54572\") " pod="openstack/dnsmasq-dns-7fd796d7df-w8bqw" Feb 16 23:03:00 crc kubenswrapper[4865]: I0216 23:03:00.980810 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/0934d4bc-f8b7-4fbb-9309-20826e6aa578-ovs-rundir\") pod \"ovn-controller-metrics-cllwp\" (UID: \"0934d4bc-f8b7-4fbb-9309-20826e6aa578\") " pod="openstack/ovn-controller-metrics-cllwp" Feb 16 23:03:00 crc kubenswrapper[4865]: I0216 23:03:00.980836 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mf5fj\" (UniqueName: 
\"kubernetes.io/projected/2512ebf7-5fe8-410c-80bd-2568c4c54572-kube-api-access-mf5fj\") pod \"dnsmasq-dns-7fd796d7df-w8bqw\" (UID: \"2512ebf7-5fe8-410c-80bd-2568c4c54572\") " pod="openstack/dnsmasq-dns-7fd796d7df-w8bqw" Feb 16 23:03:00 crc kubenswrapper[4865]: I0216 23:03:00.980863 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0934d4bc-f8b7-4fbb-9309-20826e6aa578-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-cllwp\" (UID: \"0934d4bc-f8b7-4fbb-9309-20826e6aa578\") " pod="openstack/ovn-controller-metrics-cllwp" Feb 16 23:03:00 crc kubenswrapper[4865]: I0216 23:03:00.980892 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2512ebf7-5fe8-410c-80bd-2568c4c54572-config\") pod \"dnsmasq-dns-7fd796d7df-w8bqw\" (UID: \"2512ebf7-5fe8-410c-80bd-2568c4c54572\") " pod="openstack/dnsmasq-dns-7fd796d7df-w8bqw" Feb 16 23:03:00 crc kubenswrapper[4865]: I0216 23:03:00.981757 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2512ebf7-5fe8-410c-80bd-2568c4c54572-config\") pod \"dnsmasq-dns-7fd796d7df-w8bqw\" (UID: \"2512ebf7-5fe8-410c-80bd-2568c4c54572\") " pod="openstack/dnsmasq-dns-7fd796d7df-w8bqw" Feb 16 23:03:00 crc kubenswrapper[4865]: I0216 23:03:00.982342 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2512ebf7-5fe8-410c-80bd-2568c4c54572-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-w8bqw\" (UID: \"2512ebf7-5fe8-410c-80bd-2568c4c54572\") " pod="openstack/dnsmasq-dns-7fd796d7df-w8bqw" Feb 16 23:03:00 crc kubenswrapper[4865]: I0216 23:03:00.988604 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2512ebf7-5fe8-410c-80bd-2568c4c54572-dns-svc\") 
pod \"dnsmasq-dns-7fd796d7df-w8bqw\" (UID: \"2512ebf7-5fe8-410c-80bd-2568c4c54572\") " pod="openstack/dnsmasq-dns-7fd796d7df-w8bqw" Feb 16 23:03:00 crc kubenswrapper[4865]: I0216 23:03:00.996425 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-vkxrv" event={"ID":"ccecb94b-ea2b-453b-8376-5dff86492ee5","Type":"ContainerStarted","Data":"9faf1609bc7e48aa2f972ef41df985af5f2636fa76b60d819f45da2b49d51d62"} Feb 16 23:03:00 crc kubenswrapper[4865]: I0216 23:03:00.996592 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-vkxrv" podUID="ccecb94b-ea2b-453b-8376-5dff86492ee5" containerName="dnsmasq-dns" containerID="cri-o://9faf1609bc7e48aa2f972ef41df985af5f2636fa76b60d819f45da2b49d51d62" gracePeriod=10 Feb 16 23:03:00 crc kubenswrapper[4865]: I0216 23:03:00.996860 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-vkxrv" Feb 16 23:03:01 crc kubenswrapper[4865]: I0216 23:03:01.002562 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mf5fj\" (UniqueName: \"kubernetes.io/projected/2512ebf7-5fe8-410c-80bd-2568c4c54572-kube-api-access-mf5fj\") pod \"dnsmasq-dns-7fd796d7df-w8bqw\" (UID: \"2512ebf7-5fe8-410c-80bd-2568c4c54572\") " pod="openstack/dnsmasq-dns-7fd796d7df-w8bqw" Feb 16 23:03:01 crc kubenswrapper[4865]: I0216 23:03:01.003351 4865 generic.go:334] "Generic (PLEG): container finished" podID="1a65950b-74f6-4519-a835-53c4a1ea0189" containerID="ffb067497ee5e9f767ffa0bb715975c378ca4d9383f6d56af4e6d1379ba199a9" exitCode=0 Feb 16 23:03:01 crc kubenswrapper[4865]: I0216 23:03:01.003449 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-n4hbr" event={"ID":"1a65950b-74f6-4519-a835-53c4a1ea0189","Type":"ContainerDied","Data":"ffb067497ee5e9f767ffa0bb715975c378ca4d9383f6d56af4e6d1379ba199a9"} Feb 16 23:03:01 crc kubenswrapper[4865]: I0216 
23:03:01.017613 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-vkxrv" podStartSLOduration=3.8428255 podStartE2EDuration="37.017596114s" podCreationTimestamp="2026-02-16 23:02:24 +0000 UTC" firstStartedPulling="2026-02-16 23:02:25.668636711 +0000 UTC m=+985.992343672" lastFinishedPulling="2026-02-16 23:02:58.843407325 +0000 UTC m=+1019.167114286" observedRunningTime="2026-02-16 23:03:01.015938267 +0000 UTC m=+1021.339645228" watchObservedRunningTime="2026-02-16 23:03:01.017596114 +0000 UTC m=+1021.341303075" Feb 16 23:03:01 crc kubenswrapper[4865]: I0216 23:03:01.042781 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-n4hbr"] Feb 16 23:03:01 crc kubenswrapper[4865]: I0216 23:03:01.066514 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 16 23:03:01 crc kubenswrapper[4865]: I0216 23:03:01.076593 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-w2npd"] Feb 16 23:03:01 crc kubenswrapper[4865]: I0216 23:03:01.077871 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-w2npd" Feb 16 23:03:01 crc kubenswrapper[4865]: I0216 23:03:01.083687 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkp2r\" (UniqueName: \"kubernetes.io/projected/0934d4bc-f8b7-4fbb-9309-20826e6aa578-kube-api-access-zkp2r\") pod \"ovn-controller-metrics-cllwp\" (UID: \"0934d4bc-f8b7-4fbb-9309-20826e6aa578\") " pod="openstack/ovn-controller-metrics-cllwp" Feb 16 23:03:01 crc kubenswrapper[4865]: I0216 23:03:01.083736 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/0934d4bc-f8b7-4fbb-9309-20826e6aa578-ovn-rundir\") pod \"ovn-controller-metrics-cllwp\" (UID: \"0934d4bc-f8b7-4fbb-9309-20826e6aa578\") " pod="openstack/ovn-controller-metrics-cllwp" Feb 16 23:03:01 crc kubenswrapper[4865]: I0216 23:03:01.083765 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0934d4bc-f8b7-4fbb-9309-20826e6aa578-config\") pod \"ovn-controller-metrics-cllwp\" (UID: \"0934d4bc-f8b7-4fbb-9309-20826e6aa578\") " pod="openstack/ovn-controller-metrics-cllwp" Feb 16 23:03:01 crc kubenswrapper[4865]: I0216 23:03:01.083818 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0934d4bc-f8b7-4fbb-9309-20826e6aa578-combined-ca-bundle\") pod \"ovn-controller-metrics-cllwp\" (UID: \"0934d4bc-f8b7-4fbb-9309-20826e6aa578\") " pod="openstack/ovn-controller-metrics-cllwp" Feb 16 23:03:01 crc kubenswrapper[4865]: I0216 23:03:01.083837 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/0934d4bc-f8b7-4fbb-9309-20826e6aa578-ovs-rundir\") pod \"ovn-controller-metrics-cllwp\" (UID: \"0934d4bc-f8b7-4fbb-9309-20826e6aa578\") " 
pod="openstack/ovn-controller-metrics-cllwp" Feb 16 23:03:01 crc kubenswrapper[4865]: I0216 23:03:01.083895 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0934d4bc-f8b7-4fbb-9309-20826e6aa578-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-cllwp\" (UID: \"0934d4bc-f8b7-4fbb-9309-20826e6aa578\") " pod="openstack/ovn-controller-metrics-cllwp" Feb 16 23:03:01 crc kubenswrapper[4865]: I0216 23:03:01.085462 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 16 23:03:01 crc kubenswrapper[4865]: I0216 23:03:01.087149 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/0934d4bc-f8b7-4fbb-9309-20826e6aa578-ovn-rundir\") pod \"ovn-controller-metrics-cllwp\" (UID: \"0934d4bc-f8b7-4fbb-9309-20826e6aa578\") " pod="openstack/ovn-controller-metrics-cllwp" Feb 16 23:03:01 crc kubenswrapper[4865]: I0216 23:03:01.088085 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/0934d4bc-f8b7-4fbb-9309-20826e6aa578-ovs-rundir\") pod \"ovn-controller-metrics-cllwp\" (UID: \"0934d4bc-f8b7-4fbb-9309-20826e6aa578\") " pod="openstack/ovn-controller-metrics-cllwp" Feb 16 23:03:01 crc kubenswrapper[4865]: I0216 23:03:01.088532 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0934d4bc-f8b7-4fbb-9309-20826e6aa578-config\") pod \"ovn-controller-metrics-cllwp\" (UID: \"0934d4bc-f8b7-4fbb-9309-20826e6aa578\") " pod="openstack/ovn-controller-metrics-cllwp" Feb 16 23:03:01 crc kubenswrapper[4865]: I0216 23:03:01.097251 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-w2npd"] Feb 16 23:03:01 crc kubenswrapper[4865]: I0216 23:03:01.123959 4865 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0934d4bc-f8b7-4fbb-9309-20826e6aa578-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-cllwp\" (UID: \"0934d4bc-f8b7-4fbb-9309-20826e6aa578\") " pod="openstack/ovn-controller-metrics-cllwp" Feb 16 23:03:01 crc kubenswrapper[4865]: I0216 23:03:01.125412 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0934d4bc-f8b7-4fbb-9309-20826e6aa578-combined-ca-bundle\") pod \"ovn-controller-metrics-cllwp\" (UID: \"0934d4bc-f8b7-4fbb-9309-20826e6aa578\") " pod="openstack/ovn-controller-metrics-cllwp" Feb 16 23:03:01 crc kubenswrapper[4865]: I0216 23:03:01.132039 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-w8bqw" Feb 16 23:03:01 crc kubenswrapper[4865]: I0216 23:03:01.136056 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkp2r\" (UniqueName: \"kubernetes.io/projected/0934d4bc-f8b7-4fbb-9309-20826e6aa578-kube-api-access-zkp2r\") pod \"ovn-controller-metrics-cllwp\" (UID: \"0934d4bc-f8b7-4fbb-9309-20826e6aa578\") " pod="openstack/ovn-controller-metrics-cllwp" Feb 16 23:03:01 crc kubenswrapper[4865]: I0216 23:03:01.180330 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-cllwp" Feb 16 23:03:01 crc kubenswrapper[4865]: I0216 23:03:01.189017 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8980296a-e6bd-4a93-a4e1-658639922b93-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-w2npd\" (UID: \"8980296a-e6bd-4a93-a4e1-658639922b93\") " pod="openstack/dnsmasq-dns-86db49b7ff-w2npd" Feb 16 23:03:01 crc kubenswrapper[4865]: I0216 23:03:01.189114 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8980296a-e6bd-4a93-a4e1-658639922b93-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-w2npd\" (UID: \"8980296a-e6bd-4a93-a4e1-658639922b93\") " pod="openstack/dnsmasq-dns-86db49b7ff-w2npd" Feb 16 23:03:01 crc kubenswrapper[4865]: I0216 23:03:01.189178 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghgqd\" (UniqueName: \"kubernetes.io/projected/8980296a-e6bd-4a93-a4e1-658639922b93-kube-api-access-ghgqd\") pod \"dnsmasq-dns-86db49b7ff-w2npd\" (UID: \"8980296a-e6bd-4a93-a4e1-658639922b93\") " pod="openstack/dnsmasq-dns-86db49b7ff-w2npd" Feb 16 23:03:01 crc kubenswrapper[4865]: I0216 23:03:01.189269 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8980296a-e6bd-4a93-a4e1-658639922b93-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-w2npd\" (UID: \"8980296a-e6bd-4a93-a4e1-658639922b93\") " pod="openstack/dnsmasq-dns-86db49b7ff-w2npd" Feb 16 23:03:01 crc kubenswrapper[4865]: I0216 23:03:01.189369 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8980296a-e6bd-4a93-a4e1-658639922b93-config\") pod \"dnsmasq-dns-86db49b7ff-w2npd\" 
(UID: \"8980296a-e6bd-4a93-a4e1-658639922b93\") " pod="openstack/dnsmasq-dns-86db49b7ff-w2npd" Feb 16 23:03:01 crc kubenswrapper[4865]: I0216 23:03:01.290089 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghgqd\" (UniqueName: \"kubernetes.io/projected/8980296a-e6bd-4a93-a4e1-658639922b93-kube-api-access-ghgqd\") pod \"dnsmasq-dns-86db49b7ff-w2npd\" (UID: \"8980296a-e6bd-4a93-a4e1-658639922b93\") " pod="openstack/dnsmasq-dns-86db49b7ff-w2npd" Feb 16 23:03:01 crc kubenswrapper[4865]: I0216 23:03:01.290461 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8980296a-e6bd-4a93-a4e1-658639922b93-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-w2npd\" (UID: \"8980296a-e6bd-4a93-a4e1-658639922b93\") " pod="openstack/dnsmasq-dns-86db49b7ff-w2npd" Feb 16 23:03:01 crc kubenswrapper[4865]: I0216 23:03:01.290505 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8980296a-e6bd-4a93-a4e1-658639922b93-config\") pod \"dnsmasq-dns-86db49b7ff-w2npd\" (UID: \"8980296a-e6bd-4a93-a4e1-658639922b93\") " pod="openstack/dnsmasq-dns-86db49b7ff-w2npd" Feb 16 23:03:01 crc kubenswrapper[4865]: I0216 23:03:01.290540 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8980296a-e6bd-4a93-a4e1-658639922b93-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-w2npd\" (UID: \"8980296a-e6bd-4a93-a4e1-658639922b93\") " pod="openstack/dnsmasq-dns-86db49b7ff-w2npd" Feb 16 23:03:01 crc kubenswrapper[4865]: I0216 23:03:01.290565 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8980296a-e6bd-4a93-a4e1-658639922b93-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-w2npd\" (UID: \"8980296a-e6bd-4a93-a4e1-658639922b93\") " 
pod="openstack/dnsmasq-dns-86db49b7ff-w2npd" Feb 16 23:03:01 crc kubenswrapper[4865]: I0216 23:03:01.291534 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8980296a-e6bd-4a93-a4e1-658639922b93-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-w2npd\" (UID: \"8980296a-e6bd-4a93-a4e1-658639922b93\") " pod="openstack/dnsmasq-dns-86db49b7ff-w2npd" Feb 16 23:03:01 crc kubenswrapper[4865]: I0216 23:03:01.291998 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8980296a-e6bd-4a93-a4e1-658639922b93-config\") pod \"dnsmasq-dns-86db49b7ff-w2npd\" (UID: \"8980296a-e6bd-4a93-a4e1-658639922b93\") " pod="openstack/dnsmasq-dns-86db49b7ff-w2npd" Feb 16 23:03:01 crc kubenswrapper[4865]: I0216 23:03:01.294995 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8980296a-e6bd-4a93-a4e1-658639922b93-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-w2npd\" (UID: \"8980296a-e6bd-4a93-a4e1-658639922b93\") " pod="openstack/dnsmasq-dns-86db49b7ff-w2npd" Feb 16 23:03:01 crc kubenswrapper[4865]: I0216 23:03:01.295619 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8980296a-e6bd-4a93-a4e1-658639922b93-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-w2npd\" (UID: \"8980296a-e6bd-4a93-a4e1-658639922b93\") " pod="openstack/dnsmasq-dns-86db49b7ff-w2npd" Feb 16 23:03:01 crc kubenswrapper[4865]: I0216 23:03:01.326873 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghgqd\" (UniqueName: \"kubernetes.io/projected/8980296a-e6bd-4a93-a4e1-658639922b93-kube-api-access-ghgqd\") pod \"dnsmasq-dns-86db49b7ff-w2npd\" (UID: \"8980296a-e6bd-4a93-a4e1-658639922b93\") " pod="openstack/dnsmasq-dns-86db49b7ff-w2npd" Feb 16 23:03:01 crc kubenswrapper[4865]: I0216 23:03:01.366662 
4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 16 23:03:01 crc kubenswrapper[4865]: I0216 23:03:01.370715 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 16 23:03:01 crc kubenswrapper[4865]: I0216 23:03:01.378568 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-kv44x" Feb 16 23:03:01 crc kubenswrapper[4865]: I0216 23:03:01.378851 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 16 23:03:01 crc kubenswrapper[4865]: I0216 23:03:01.379057 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 16 23:03:01 crc kubenswrapper[4865]: I0216 23:03:01.382766 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 16 23:03:01 crc kubenswrapper[4865]: I0216 23:03:01.386691 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 16 23:03:01 crc kubenswrapper[4865]: I0216 23:03:01.393237 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/485d0f59-bf5e-43d4-b35e-e4a40273a666-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"485d0f59-bf5e-43d4-b35e-e4a40273a666\") " pod="openstack/ovn-northd-0" Feb 16 23:03:01 crc kubenswrapper[4865]: I0216 23:03:01.393333 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qp4hh\" (UniqueName: \"kubernetes.io/projected/485d0f59-bf5e-43d4-b35e-e4a40273a666-kube-api-access-qp4hh\") pod \"ovn-northd-0\" (UID: \"485d0f59-bf5e-43d4-b35e-e4a40273a666\") " pod="openstack/ovn-northd-0" Feb 16 23:03:01 crc kubenswrapper[4865]: I0216 23:03:01.393389 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/485d0f59-bf5e-43d4-b35e-e4a40273a666-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"485d0f59-bf5e-43d4-b35e-e4a40273a666\") " pod="openstack/ovn-northd-0" Feb 16 23:03:01 crc kubenswrapper[4865]: I0216 23:03:01.393432 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/485d0f59-bf5e-43d4-b35e-e4a40273a666-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"485d0f59-bf5e-43d4-b35e-e4a40273a666\") " pod="openstack/ovn-northd-0" Feb 16 23:03:01 crc kubenswrapper[4865]: I0216 23:03:01.393465 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/485d0f59-bf5e-43d4-b35e-e4a40273a666-config\") pod \"ovn-northd-0\" (UID: \"485d0f59-bf5e-43d4-b35e-e4a40273a666\") " pod="openstack/ovn-northd-0" Feb 16 23:03:01 crc kubenswrapper[4865]: I0216 23:03:01.393488 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/485d0f59-bf5e-43d4-b35e-e4a40273a666-scripts\") pod \"ovn-northd-0\" (UID: \"485d0f59-bf5e-43d4-b35e-e4a40273a666\") " pod="openstack/ovn-northd-0" Feb 16 23:03:01 crc kubenswrapper[4865]: I0216 23:03:01.393519 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/485d0f59-bf5e-43d4-b35e-e4a40273a666-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"485d0f59-bf5e-43d4-b35e-e4a40273a666\") " pod="openstack/ovn-northd-0" Feb 16 23:03:01 crc kubenswrapper[4865]: I0216 23:03:01.444344 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 16 23:03:01 crc kubenswrapper[4865]: I0216 23:03:01.453603 4865 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-vkxrv" Feb 16 23:03:01 crc kubenswrapper[4865]: I0216 23:03:01.495980 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ccecb94b-ea2b-453b-8376-5dff86492ee5-dns-svc\") pod \"ccecb94b-ea2b-453b-8376-5dff86492ee5\" (UID: \"ccecb94b-ea2b-453b-8376-5dff86492ee5\") " Feb 16 23:03:01 crc kubenswrapper[4865]: I0216 23:03:01.496139 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-blgkh\" (UniqueName: \"kubernetes.io/projected/ccecb94b-ea2b-453b-8376-5dff86492ee5-kube-api-access-blgkh\") pod \"ccecb94b-ea2b-453b-8376-5dff86492ee5\" (UID: \"ccecb94b-ea2b-453b-8376-5dff86492ee5\") " Feb 16 23:03:01 crc kubenswrapper[4865]: I0216 23:03:01.496170 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccecb94b-ea2b-453b-8376-5dff86492ee5-config\") pod \"ccecb94b-ea2b-453b-8376-5dff86492ee5\" (UID: \"ccecb94b-ea2b-453b-8376-5dff86492ee5\") " Feb 16 23:03:01 crc kubenswrapper[4865]: I0216 23:03:01.496429 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/485d0f59-bf5e-43d4-b35e-e4a40273a666-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"485d0f59-bf5e-43d4-b35e-e4a40273a666\") " pod="openstack/ovn-northd-0" Feb 16 23:03:01 crc kubenswrapper[4865]: I0216 23:03:01.496492 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/485d0f59-bf5e-43d4-b35e-e4a40273a666-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"485d0f59-bf5e-43d4-b35e-e4a40273a666\") " pod="openstack/ovn-northd-0" Feb 16 23:03:01 crc kubenswrapper[4865]: I0216 23:03:01.496518 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/485d0f59-bf5e-43d4-b35e-e4a40273a666-config\") pod \"ovn-northd-0\" (UID: \"485d0f59-bf5e-43d4-b35e-e4a40273a666\") " pod="openstack/ovn-northd-0" Feb 16 23:03:01 crc kubenswrapper[4865]: I0216 23:03:01.496542 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/485d0f59-bf5e-43d4-b35e-e4a40273a666-scripts\") pod \"ovn-northd-0\" (UID: \"485d0f59-bf5e-43d4-b35e-e4a40273a666\") " pod="openstack/ovn-northd-0" Feb 16 23:03:01 crc kubenswrapper[4865]: I0216 23:03:01.496575 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/485d0f59-bf5e-43d4-b35e-e4a40273a666-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"485d0f59-bf5e-43d4-b35e-e4a40273a666\") " pod="openstack/ovn-northd-0" Feb 16 23:03:01 crc kubenswrapper[4865]: I0216 23:03:01.496598 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/485d0f59-bf5e-43d4-b35e-e4a40273a666-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"485d0f59-bf5e-43d4-b35e-e4a40273a666\") " pod="openstack/ovn-northd-0" Feb 16 23:03:01 crc kubenswrapper[4865]: I0216 23:03:01.496655 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qp4hh\" (UniqueName: \"kubernetes.io/projected/485d0f59-bf5e-43d4-b35e-e4a40273a666-kube-api-access-qp4hh\") pod \"ovn-northd-0\" (UID: \"485d0f59-bf5e-43d4-b35e-e4a40273a666\") " pod="openstack/ovn-northd-0" Feb 16 23:03:01 crc kubenswrapper[4865]: I0216 23:03:01.497642 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/485d0f59-bf5e-43d4-b35e-e4a40273a666-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"485d0f59-bf5e-43d4-b35e-e4a40273a666\") " pod="openstack/ovn-northd-0" Feb 16 23:03:01 crc 
kubenswrapper[4865]: I0216 23:03:01.498023 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/485d0f59-bf5e-43d4-b35e-e4a40273a666-config\") pod \"ovn-northd-0\" (UID: \"485d0f59-bf5e-43d4-b35e-e4a40273a666\") " pod="openstack/ovn-northd-0" Feb 16 23:03:01 crc kubenswrapper[4865]: I0216 23:03:01.498410 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/485d0f59-bf5e-43d4-b35e-e4a40273a666-scripts\") pod \"ovn-northd-0\" (UID: \"485d0f59-bf5e-43d4-b35e-e4a40273a666\") " pod="openstack/ovn-northd-0" Feb 16 23:03:01 crc kubenswrapper[4865]: I0216 23:03:01.503033 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/485d0f59-bf5e-43d4-b35e-e4a40273a666-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"485d0f59-bf5e-43d4-b35e-e4a40273a666\") " pod="openstack/ovn-northd-0" Feb 16 23:03:01 crc kubenswrapper[4865]: I0216 23:03:01.505505 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/485d0f59-bf5e-43d4-b35e-e4a40273a666-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"485d0f59-bf5e-43d4-b35e-e4a40273a666\") " pod="openstack/ovn-northd-0" Feb 16 23:03:01 crc kubenswrapper[4865]: I0216 23:03:01.509492 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccecb94b-ea2b-453b-8376-5dff86492ee5-kube-api-access-blgkh" (OuterVolumeSpecName: "kube-api-access-blgkh") pod "ccecb94b-ea2b-453b-8376-5dff86492ee5" (UID: "ccecb94b-ea2b-453b-8376-5dff86492ee5"). InnerVolumeSpecName "kube-api-access-blgkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:03:01 crc kubenswrapper[4865]: I0216 23:03:01.510215 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/485d0f59-bf5e-43d4-b35e-e4a40273a666-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"485d0f59-bf5e-43d4-b35e-e4a40273a666\") " pod="openstack/ovn-northd-0" Feb 16 23:03:01 crc kubenswrapper[4865]: I0216 23:03:01.513947 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qp4hh\" (UniqueName: \"kubernetes.io/projected/485d0f59-bf5e-43d4-b35e-e4a40273a666-kube-api-access-qp4hh\") pod \"ovn-northd-0\" (UID: \"485d0f59-bf5e-43d4-b35e-e4a40273a666\") " pod="openstack/ovn-northd-0" Feb 16 23:03:01 crc kubenswrapper[4865]: I0216 23:03:01.537522 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccecb94b-ea2b-453b-8376-5dff86492ee5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ccecb94b-ea2b-453b-8376-5dff86492ee5" (UID: "ccecb94b-ea2b-453b-8376-5dff86492ee5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:03:01 crc kubenswrapper[4865]: I0216 23:03:01.539826 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccecb94b-ea2b-453b-8376-5dff86492ee5-config" (OuterVolumeSpecName: "config") pod "ccecb94b-ea2b-453b-8376-5dff86492ee5" (UID: "ccecb94b-ea2b-453b-8376-5dff86492ee5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:03:01 crc kubenswrapper[4865]: I0216 23:03:01.543872 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-w2npd" Feb 16 23:03:01 crc kubenswrapper[4865]: I0216 23:03:01.598433 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-blgkh\" (UniqueName: \"kubernetes.io/projected/ccecb94b-ea2b-453b-8376-5dff86492ee5-kube-api-access-blgkh\") on node \"crc\" DevicePath \"\"" Feb 16 23:03:01 crc kubenswrapper[4865]: I0216 23:03:01.598468 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccecb94b-ea2b-453b-8376-5dff86492ee5-config\") on node \"crc\" DevicePath \"\"" Feb 16 23:03:01 crc kubenswrapper[4865]: I0216 23:03:01.598478 4865 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ccecb94b-ea2b-453b-8376-5dff86492ee5-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 23:03:01 crc kubenswrapper[4865]: I0216 23:03:01.696188 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-w8bqw"] Feb 16 23:03:01 crc kubenswrapper[4865]: I0216 23:03:01.719610 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 16 23:03:01 crc kubenswrapper[4865]: I0216 23:03:01.783120 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-cllwp"] Feb 16 23:03:01 crc kubenswrapper[4865]: W0216 23:03:01.822746 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0934d4bc_f8b7_4fbb_9309_20826e6aa578.slice/crio-b9095a1195d0835ffbe16b1f62d5033fea10dc41489e4838eaa91b34179039ec WatchSource:0}: Error finding container b9095a1195d0835ffbe16b1f62d5033fea10dc41489e4838eaa91b34179039ec: Status 404 returned error can't find the container with id b9095a1195d0835ffbe16b1f62d5033fea10dc41489e4838eaa91b34179039ec Feb 16 23:03:01 crc kubenswrapper[4865]: I0216 23:03:01.980798 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-w2npd"] Feb 16 23:03:02 crc kubenswrapper[4865]: W0216 23:03:02.032354 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8980296a_e6bd_4a93_a4e1_658639922b93.slice/crio-27e98ddb7eed3ab8c858d1453f7530c404e4d126246da01bf4dff9cb2d4b0564 WatchSource:0}: Error finding container 27e98ddb7eed3ab8c858d1453f7530c404e4d126246da01bf4dff9cb2d4b0564: Status 404 returned error can't find the container with id 27e98ddb7eed3ab8c858d1453f7530c404e4d126246da01bf4dff9cb2d4b0564 Feb 16 23:03:02 crc kubenswrapper[4865]: I0216 23:03:02.033033 4865 generic.go:334] "Generic (PLEG): container finished" podID="2512ebf7-5fe8-410c-80bd-2568c4c54572" containerID="250d2f085398f0ca0109cca71a34507fe8e3330d79325865b9268233d3232be8" exitCode=0 Feb 16 23:03:02 crc kubenswrapper[4865]: I0216 23:03:02.033092 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-w8bqw" 
event={"ID":"2512ebf7-5fe8-410c-80bd-2568c4c54572","Type":"ContainerDied","Data":"250d2f085398f0ca0109cca71a34507fe8e3330d79325865b9268233d3232be8"} Feb 16 23:03:02 crc kubenswrapper[4865]: I0216 23:03:02.033121 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-w8bqw" event={"ID":"2512ebf7-5fe8-410c-80bd-2568c4c54572","Type":"ContainerStarted","Data":"3a5b98a51be1833429e45eb1c9d48baa3da448ed071f5ddbaa11108a86383cc0"} Feb 16 23:03:02 crc kubenswrapper[4865]: I0216 23:03:02.076608 4865 generic.go:334] "Generic (PLEG): container finished" podID="ccecb94b-ea2b-453b-8376-5dff86492ee5" containerID="9faf1609bc7e48aa2f972ef41df985af5f2636fa76b60d819f45da2b49d51d62" exitCode=0 Feb 16 23:03:02 crc kubenswrapper[4865]: I0216 23:03:02.076706 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-vkxrv" event={"ID":"ccecb94b-ea2b-453b-8376-5dff86492ee5","Type":"ContainerDied","Data":"9faf1609bc7e48aa2f972ef41df985af5f2636fa76b60d819f45da2b49d51d62"} Feb 16 23:03:02 crc kubenswrapper[4865]: I0216 23:03:02.076736 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-vkxrv" event={"ID":"ccecb94b-ea2b-453b-8376-5dff86492ee5","Type":"ContainerDied","Data":"7682ebeca7bbd323e1c5509a18ecbfdc81babb5e5439d19868d8c59d85b8b2d3"} Feb 16 23:03:02 crc kubenswrapper[4865]: I0216 23:03:02.076756 4865 scope.go:117] "RemoveContainer" containerID="9faf1609bc7e48aa2f972ef41df985af5f2636fa76b60d819f45da2b49d51d62" Feb 16 23:03:02 crc kubenswrapper[4865]: I0216 23:03:02.076909 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-vkxrv" Feb 16 23:03:02 crc kubenswrapper[4865]: I0216 23:03:02.140660 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-n4hbr" event={"ID":"1a65950b-74f6-4519-a835-53c4a1ea0189","Type":"ContainerStarted","Data":"9fd83e7edb692aba21eae8fa708fda67046da52c6c8f83f19915f0805c0f1caa"} Feb 16 23:03:02 crc kubenswrapper[4865]: I0216 23:03:02.140838 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-n4hbr" podUID="1a65950b-74f6-4519-a835-53c4a1ea0189" containerName="dnsmasq-dns" containerID="cri-o://9fd83e7edb692aba21eae8fa708fda67046da52c6c8f83f19915f0805c0f1caa" gracePeriod=10 Feb 16 23:03:02 crc kubenswrapper[4865]: I0216 23:03:02.141103 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-n4hbr" Feb 16 23:03:02 crc kubenswrapper[4865]: I0216 23:03:02.171517 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-cllwp" event={"ID":"0934d4bc-f8b7-4fbb-9309-20826e6aa578","Type":"ContainerStarted","Data":"b9095a1195d0835ffbe16b1f62d5033fea10dc41489e4838eaa91b34179039ec"} Feb 16 23:03:02 crc kubenswrapper[4865]: I0216 23:03:02.200834 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-n4hbr" podStartSLOduration=-9223371998.653965 podStartE2EDuration="38.200811283s" podCreationTimestamp="2026-02-16 23:02:24 +0000 UTC" firstStartedPulling="2026-02-16 23:02:25.377523773 +0000 UTC m=+985.701230734" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 23:03:02.172748882 +0000 UTC m=+1022.496455843" watchObservedRunningTime="2026-02-16 23:03:02.200811283 +0000 UTC m=+1022.524518244" Feb 16 23:03:02 crc kubenswrapper[4865]: I0216 23:03:02.259356 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 16 
23:03:02 crc kubenswrapper[4865]: I0216 23:03:02.311968 4865 scope.go:117] "RemoveContainer" containerID="0954c903d54a2e8437374a3f2445f8ae8f3a52e40c9574dc3336950d4a5d8325" Feb 16 23:03:02 crc kubenswrapper[4865]: I0216 23:03:02.336404 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-vkxrv"] Feb 16 23:03:02 crc kubenswrapper[4865]: I0216 23:03:02.339540 4865 scope.go:117] "RemoveContainer" containerID="9faf1609bc7e48aa2f972ef41df985af5f2636fa76b60d819f45da2b49d51d62" Feb 16 23:03:02 crc kubenswrapper[4865]: I0216 23:03:02.340916 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-vkxrv"] Feb 16 23:03:02 crc kubenswrapper[4865]: E0216 23:03:02.342217 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9faf1609bc7e48aa2f972ef41df985af5f2636fa76b60d819f45da2b49d51d62\": container with ID starting with 9faf1609bc7e48aa2f972ef41df985af5f2636fa76b60d819f45da2b49d51d62 not found: ID does not exist" containerID="9faf1609bc7e48aa2f972ef41df985af5f2636fa76b60d819f45da2b49d51d62" Feb 16 23:03:02 crc kubenswrapper[4865]: I0216 23:03:02.342252 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9faf1609bc7e48aa2f972ef41df985af5f2636fa76b60d819f45da2b49d51d62"} err="failed to get container status \"9faf1609bc7e48aa2f972ef41df985af5f2636fa76b60d819f45da2b49d51d62\": rpc error: code = NotFound desc = could not find container \"9faf1609bc7e48aa2f972ef41df985af5f2636fa76b60d819f45da2b49d51d62\": container with ID starting with 9faf1609bc7e48aa2f972ef41df985af5f2636fa76b60d819f45da2b49d51d62 not found: ID does not exist" Feb 16 23:03:02 crc kubenswrapper[4865]: I0216 23:03:02.342282 4865 scope.go:117] "RemoveContainer" containerID="0954c903d54a2e8437374a3f2445f8ae8f3a52e40c9574dc3336950d4a5d8325" Feb 16 23:03:02 crc kubenswrapper[4865]: E0216 23:03:02.344626 4865 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0954c903d54a2e8437374a3f2445f8ae8f3a52e40c9574dc3336950d4a5d8325\": container with ID starting with 0954c903d54a2e8437374a3f2445f8ae8f3a52e40c9574dc3336950d4a5d8325 not found: ID does not exist" containerID="0954c903d54a2e8437374a3f2445f8ae8f3a52e40c9574dc3336950d4a5d8325" Feb 16 23:03:02 crc kubenswrapper[4865]: I0216 23:03:02.344657 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0954c903d54a2e8437374a3f2445f8ae8f3a52e40c9574dc3336950d4a5d8325"} err="failed to get container status \"0954c903d54a2e8437374a3f2445f8ae8f3a52e40c9574dc3336950d4a5d8325\": rpc error: code = NotFound desc = could not find container \"0954c903d54a2e8437374a3f2445f8ae8f3a52e40c9574dc3336950d4a5d8325\": container with ID starting with 0954c903d54a2e8437374a3f2445f8ae8f3a52e40c9574dc3336950d4a5d8325 not found: ID does not exist" Feb 16 23:03:02 crc kubenswrapper[4865]: I0216 23:03:02.423916 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ccecb94b-ea2b-453b-8376-5dff86492ee5" path="/var/lib/kubelet/pods/ccecb94b-ea2b-453b-8376-5dff86492ee5/volumes" Feb 16 23:03:02 crc kubenswrapper[4865]: I0216 23:03:02.507911 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-n4hbr" Feb 16 23:03:02 crc kubenswrapper[4865]: I0216 23:03:02.638104 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1a65950b-74f6-4519-a835-53c4a1ea0189-dns-svc\") pod \"1a65950b-74f6-4519-a835-53c4a1ea0189\" (UID: \"1a65950b-74f6-4519-a835-53c4a1ea0189\") " Feb 16 23:03:02 crc kubenswrapper[4865]: I0216 23:03:02.638244 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgb6g\" (UniqueName: \"kubernetes.io/projected/1a65950b-74f6-4519-a835-53c4a1ea0189-kube-api-access-zgb6g\") pod \"1a65950b-74f6-4519-a835-53c4a1ea0189\" (UID: \"1a65950b-74f6-4519-a835-53c4a1ea0189\") " Feb 16 23:03:02 crc kubenswrapper[4865]: I0216 23:03:02.638376 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a65950b-74f6-4519-a835-53c4a1ea0189-config\") pod \"1a65950b-74f6-4519-a835-53c4a1ea0189\" (UID: \"1a65950b-74f6-4519-a835-53c4a1ea0189\") " Feb 16 23:03:02 crc kubenswrapper[4865]: I0216 23:03:02.644609 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a65950b-74f6-4519-a835-53c4a1ea0189-kube-api-access-zgb6g" (OuterVolumeSpecName: "kube-api-access-zgb6g") pod "1a65950b-74f6-4519-a835-53c4a1ea0189" (UID: "1a65950b-74f6-4519-a835-53c4a1ea0189"). InnerVolumeSpecName "kube-api-access-zgb6g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:03:02 crc kubenswrapper[4865]: I0216 23:03:02.681001 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a65950b-74f6-4519-a835-53c4a1ea0189-config" (OuterVolumeSpecName: "config") pod "1a65950b-74f6-4519-a835-53c4a1ea0189" (UID: "1a65950b-74f6-4519-a835-53c4a1ea0189"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:03:02 crc kubenswrapper[4865]: I0216 23:03:02.694188 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a65950b-74f6-4519-a835-53c4a1ea0189-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1a65950b-74f6-4519-a835-53c4a1ea0189" (UID: "1a65950b-74f6-4519-a835-53c4a1ea0189"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:03:02 crc kubenswrapper[4865]: I0216 23:03:02.739948 4865 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1a65950b-74f6-4519-a835-53c4a1ea0189-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 23:03:02 crc kubenswrapper[4865]: I0216 23:03:02.739994 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgb6g\" (UniqueName: \"kubernetes.io/projected/1a65950b-74f6-4519-a835-53c4a1ea0189-kube-api-access-zgb6g\") on node \"crc\" DevicePath \"\"" Feb 16 23:03:02 crc kubenswrapper[4865]: I0216 23:03:02.740005 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a65950b-74f6-4519-a835-53c4a1ea0189-config\") on node \"crc\" DevicePath \"\"" Feb 16 23:03:03 crc kubenswrapper[4865]: I0216 23:03:03.169746 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 16 23:03:03 crc kubenswrapper[4865]: I0216 23:03:03.208154 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"485d0f59-bf5e-43d4-b35e-e4a40273a666","Type":"ContainerStarted","Data":"e42bdf78ba4100203eef76152eec75e9c533b06b0e52c89f0147f4e58b926179"} Feb 16 23:03:03 crc kubenswrapper[4865]: I0216 23:03:03.210002 4865 generic.go:334] "Generic (PLEG): container finished" podID="8980296a-e6bd-4a93-a4e1-658639922b93" containerID="7fcb5c53469d9721661ffc038efccf6b527739b2450f6e86b787d1f820837c6e" exitCode=0 
Feb 16 23:03:03 crc kubenswrapper[4865]: I0216 23:03:03.210104 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-w2npd" event={"ID":"8980296a-e6bd-4a93-a4e1-658639922b93","Type":"ContainerDied","Data":"7fcb5c53469d9721661ffc038efccf6b527739b2450f6e86b787d1f820837c6e"} Feb 16 23:03:03 crc kubenswrapper[4865]: I0216 23:03:03.210136 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-w2npd" event={"ID":"8980296a-e6bd-4a93-a4e1-658639922b93","Type":"ContainerStarted","Data":"27e98ddb7eed3ab8c858d1453f7530c404e4d126246da01bf4dff9cb2d4b0564"} Feb 16 23:03:03 crc kubenswrapper[4865]: I0216 23:03:03.212337 4865 generic.go:334] "Generic (PLEG): container finished" podID="1a65950b-74f6-4519-a835-53c4a1ea0189" containerID="9fd83e7edb692aba21eae8fa708fda67046da52c6c8f83f19915f0805c0f1caa" exitCode=0 Feb 16 23:03:03 crc kubenswrapper[4865]: I0216 23:03:03.212388 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-n4hbr" Feb 16 23:03:03 crc kubenswrapper[4865]: I0216 23:03:03.212441 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-n4hbr" event={"ID":"1a65950b-74f6-4519-a835-53c4a1ea0189","Type":"ContainerDied","Data":"9fd83e7edb692aba21eae8fa708fda67046da52c6c8f83f19915f0805c0f1caa"} Feb 16 23:03:03 crc kubenswrapper[4865]: I0216 23:03:03.212491 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-n4hbr" event={"ID":"1a65950b-74f6-4519-a835-53c4a1ea0189","Type":"ContainerDied","Data":"161afa990393fbfb30ee310150c218fccad47855ea899a105b1536adb318c242"} Feb 16 23:03:03 crc kubenswrapper[4865]: I0216 23:03:03.212525 4865 scope.go:117] "RemoveContainer" containerID="9fd83e7edb692aba21eae8fa708fda67046da52c6c8f83f19915f0805c0f1caa" Feb 16 23:03:03 crc kubenswrapper[4865]: I0216 23:03:03.214860 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-cllwp" event={"ID":"0934d4bc-f8b7-4fbb-9309-20826e6aa578","Type":"ContainerStarted","Data":"ecab147dddcd865160a947d940f016810ccc4a58f3fe19025b2f5f0f0b549365"} Feb 16 23:03:03 crc kubenswrapper[4865]: I0216 23:03:03.217937 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-w8bqw" event={"ID":"2512ebf7-5fe8-410c-80bd-2568c4c54572","Type":"ContainerStarted","Data":"ee64484ff28a5c729602cd4a9f740fc4d9dfa390c637a980edec168ee974f85f"} Feb 16 23:03:03 crc kubenswrapper[4865]: I0216 23:03:03.218092 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7fd796d7df-w8bqw" Feb 16 23:03:03 crc kubenswrapper[4865]: I0216 23:03:03.234238 4865 scope.go:117] "RemoveContainer" containerID="ffb067497ee5e9f767ffa0bb715975c378ca4d9383f6d56af4e6d1379ba199a9" Feb 16 23:03:03 crc kubenswrapper[4865]: I0216 23:03:03.274647 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ovn-controller-metrics-cllwp" podStartSLOduration=3.274625058 podStartE2EDuration="3.274625058s" podCreationTimestamp="2026-02-16 23:03:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 23:03:03.271997474 +0000 UTC m=+1023.595704435" watchObservedRunningTime="2026-02-16 23:03:03.274625058 +0000 UTC m=+1023.598332019" Feb 16 23:03:03 crc kubenswrapper[4865]: I0216 23:03:03.299023 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7fd796d7df-w8bqw" podStartSLOduration=3.299004156 podStartE2EDuration="3.299004156s" podCreationTimestamp="2026-02-16 23:03:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 23:03:03.298667666 +0000 UTC m=+1023.622374627" watchObservedRunningTime="2026-02-16 23:03:03.299004156 +0000 UTC m=+1023.622711117" Feb 16 23:03:03 crc kubenswrapper[4865]: I0216 23:03:03.334715 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 16 23:03:03 crc kubenswrapper[4865]: I0216 23:03:03.408188 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-n4hbr"] Feb 16 23:03:03 crc kubenswrapper[4865]: I0216 23:03:03.420269 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-n4hbr"] Feb 16 23:03:03 crc kubenswrapper[4865]: I0216 23:03:03.630086 4865 scope.go:117] "RemoveContainer" containerID="9fd83e7edb692aba21eae8fa708fda67046da52c6c8f83f19915f0805c0f1caa" Feb 16 23:03:03 crc kubenswrapper[4865]: E0216 23:03:03.631608 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fd83e7edb692aba21eae8fa708fda67046da52c6c8f83f19915f0805c0f1caa\": container with ID starting with 
9fd83e7edb692aba21eae8fa708fda67046da52c6c8f83f19915f0805c0f1caa not found: ID does not exist" containerID="9fd83e7edb692aba21eae8fa708fda67046da52c6c8f83f19915f0805c0f1caa" Feb 16 23:03:03 crc kubenswrapper[4865]: I0216 23:03:03.631666 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fd83e7edb692aba21eae8fa708fda67046da52c6c8f83f19915f0805c0f1caa"} err="failed to get container status \"9fd83e7edb692aba21eae8fa708fda67046da52c6c8f83f19915f0805c0f1caa\": rpc error: code = NotFound desc = could not find container \"9fd83e7edb692aba21eae8fa708fda67046da52c6c8f83f19915f0805c0f1caa\": container with ID starting with 9fd83e7edb692aba21eae8fa708fda67046da52c6c8f83f19915f0805c0f1caa not found: ID does not exist" Feb 16 23:03:03 crc kubenswrapper[4865]: I0216 23:03:03.631705 4865 scope.go:117] "RemoveContainer" containerID="ffb067497ee5e9f767ffa0bb715975c378ca4d9383f6d56af4e6d1379ba199a9" Feb 16 23:03:03 crc kubenswrapper[4865]: E0216 23:03:03.632319 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffb067497ee5e9f767ffa0bb715975c378ca4d9383f6d56af4e6d1379ba199a9\": container with ID starting with ffb067497ee5e9f767ffa0bb715975c378ca4d9383f6d56af4e6d1379ba199a9 not found: ID does not exist" containerID="ffb067497ee5e9f767ffa0bb715975c378ca4d9383f6d56af4e6d1379ba199a9" Feb 16 23:03:03 crc kubenswrapper[4865]: I0216 23:03:03.632427 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffb067497ee5e9f767ffa0bb715975c378ca4d9383f6d56af4e6d1379ba199a9"} err="failed to get container status \"ffb067497ee5e9f767ffa0bb715975c378ca4d9383f6d56af4e6d1379ba199a9\": rpc error: code = NotFound desc = could not find container \"ffb067497ee5e9f767ffa0bb715975c378ca4d9383f6d56af4e6d1379ba199a9\": container with ID starting with ffb067497ee5e9f767ffa0bb715975c378ca4d9383f6d56af4e6d1379ba199a9 not found: ID does not 
exist" Feb 16 23:03:03 crc kubenswrapper[4865]: I0216 23:03:03.845524 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 16 23:03:04 crc kubenswrapper[4865]: I0216 23:03:04.157268 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-bvlkv"] Feb 16 23:03:04 crc kubenswrapper[4865]: E0216 23:03:04.157755 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccecb94b-ea2b-453b-8376-5dff86492ee5" containerName="init" Feb 16 23:03:04 crc kubenswrapper[4865]: I0216 23:03:04.157776 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccecb94b-ea2b-453b-8376-5dff86492ee5" containerName="init" Feb 16 23:03:04 crc kubenswrapper[4865]: E0216 23:03:04.157828 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a65950b-74f6-4519-a835-53c4a1ea0189" containerName="dnsmasq-dns" Feb 16 23:03:04 crc kubenswrapper[4865]: I0216 23:03:04.157837 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a65950b-74f6-4519-a835-53c4a1ea0189" containerName="dnsmasq-dns" Feb 16 23:03:04 crc kubenswrapper[4865]: E0216 23:03:04.157863 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccecb94b-ea2b-453b-8376-5dff86492ee5" containerName="dnsmasq-dns" Feb 16 23:03:04 crc kubenswrapper[4865]: I0216 23:03:04.157871 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccecb94b-ea2b-453b-8376-5dff86492ee5" containerName="dnsmasq-dns" Feb 16 23:03:04 crc kubenswrapper[4865]: E0216 23:03:04.157892 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a65950b-74f6-4519-a835-53c4a1ea0189" containerName="init" Feb 16 23:03:04 crc kubenswrapper[4865]: I0216 23:03:04.157902 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a65950b-74f6-4519-a835-53c4a1ea0189" containerName="init" Feb 16 23:03:04 crc kubenswrapper[4865]: I0216 23:03:04.158147 4865 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="1a65950b-74f6-4519-a835-53c4a1ea0189" containerName="dnsmasq-dns" Feb 16 23:03:04 crc kubenswrapper[4865]: I0216 23:03:04.158172 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccecb94b-ea2b-453b-8376-5dff86492ee5" containerName="dnsmasq-dns" Feb 16 23:03:04 crc kubenswrapper[4865]: I0216 23:03:04.158873 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-bvlkv" Feb 16 23:03:04 crc kubenswrapper[4865]: I0216 23:03:04.166713 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-c32f-account-create-update-rf5cg"] Feb 16 23:03:04 crc kubenswrapper[4865]: I0216 23:03:04.167988 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-c32f-account-create-update-rf5cg" Feb 16 23:03:04 crc kubenswrapper[4865]: I0216 23:03:04.170151 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 16 23:03:04 crc kubenswrapper[4865]: I0216 23:03:04.172534 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-bvlkv"] Feb 16 23:03:04 crc kubenswrapper[4865]: I0216 23:03:04.194700 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-c32f-account-create-update-rf5cg"] Feb 16 23:03:04 crc kubenswrapper[4865]: I0216 23:03:04.237870 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-w2npd" event={"ID":"8980296a-e6bd-4a93-a4e1-658639922b93","Type":"ContainerStarted","Data":"b07e7b4aa2fe26ad87bd5ed0910221547a23d19b6dcd8c28783b11807da8a275"} Feb 16 23:03:04 crc kubenswrapper[4865]: I0216 23:03:04.238147 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-w2npd" Feb 16 23:03:04 crc kubenswrapper[4865]: I0216 23:03:04.263482 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-w2npd" 
podStartSLOduration=3.263464137 podStartE2EDuration="3.263464137s" podCreationTimestamp="2026-02-16 23:03:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 23:03:04.25504679 +0000 UTC m=+1024.578753761" watchObservedRunningTime="2026-02-16 23:03:04.263464137 +0000 UTC m=+1024.587171098" Feb 16 23:03:04 crc kubenswrapper[4865]: I0216 23:03:04.263982 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"485d0f59-bf5e-43d4-b35e-e4a40273a666","Type":"ContainerStarted","Data":"097e18a1df56976f046611994c759de598002d47d179b707394983220f17f05a"} Feb 16 23:03:04 crc kubenswrapper[4865]: I0216 23:03:04.264014 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"485d0f59-bf5e-43d4-b35e-e4a40273a666","Type":"ContainerStarted","Data":"f83fa8e401487872d5979a826ccc8de6d64cd918f7e24c76c11f6a903c935bc7"} Feb 16 23:03:04 crc kubenswrapper[4865]: I0216 23:03:04.264853 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 16 23:03:04 crc kubenswrapper[4865]: I0216 23:03:04.284359 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/effc5df0-2f7c-4d99-bf39-db1aa0c22c24-operator-scripts\") pod \"glance-db-create-bvlkv\" (UID: \"effc5df0-2f7c-4d99-bf39-db1aa0c22c24\") " pod="openstack/glance-db-create-bvlkv" Feb 16 23:03:04 crc kubenswrapper[4865]: I0216 23:03:04.284431 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcb2q\" (UniqueName: \"kubernetes.io/projected/effc5df0-2f7c-4d99-bf39-db1aa0c22c24-kube-api-access-dcb2q\") pod \"glance-db-create-bvlkv\" (UID: \"effc5df0-2f7c-4d99-bf39-db1aa0c22c24\") " pod="openstack/glance-db-create-bvlkv" Feb 16 23:03:04 crc kubenswrapper[4865]: 
I0216 23:03:04.284466 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e4a6413-5a03-450d-8bb2-abf70fdead46-operator-scripts\") pod \"glance-c32f-account-create-update-rf5cg\" (UID: \"0e4a6413-5a03-450d-8bb2-abf70fdead46\") " pod="openstack/glance-c32f-account-create-update-rf5cg" Feb 16 23:03:04 crc kubenswrapper[4865]: I0216 23:03:04.284503 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxc78\" (UniqueName: \"kubernetes.io/projected/0e4a6413-5a03-450d-8bb2-abf70fdead46-kube-api-access-rxc78\") pod \"glance-c32f-account-create-update-rf5cg\" (UID: \"0e4a6413-5a03-450d-8bb2-abf70fdead46\") " pod="openstack/glance-c32f-account-create-update-rf5cg" Feb 16 23:03:04 crc kubenswrapper[4865]: I0216 23:03:04.295433 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=1.959059551 podStartE2EDuration="3.295411358s" podCreationTimestamp="2026-02-16 23:03:01 +0000 UTC" firstStartedPulling="2026-02-16 23:03:02.320801556 +0000 UTC m=+1022.644508517" lastFinishedPulling="2026-02-16 23:03:03.657153353 +0000 UTC m=+1023.980860324" observedRunningTime="2026-02-16 23:03:04.288612166 +0000 UTC m=+1024.612319127" watchObservedRunningTime="2026-02-16 23:03:04.295411358 +0000 UTC m=+1024.619118319" Feb 16 23:03:04 crc kubenswrapper[4865]: I0216 23:03:04.386443 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/effc5df0-2f7c-4d99-bf39-db1aa0c22c24-operator-scripts\") pod \"glance-db-create-bvlkv\" (UID: \"effc5df0-2f7c-4d99-bf39-db1aa0c22c24\") " pod="openstack/glance-db-create-bvlkv" Feb 16 23:03:04 crc kubenswrapper[4865]: I0216 23:03:04.386552 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcb2q\" 
(UniqueName: \"kubernetes.io/projected/effc5df0-2f7c-4d99-bf39-db1aa0c22c24-kube-api-access-dcb2q\") pod \"glance-db-create-bvlkv\" (UID: \"effc5df0-2f7c-4d99-bf39-db1aa0c22c24\") " pod="openstack/glance-db-create-bvlkv" Feb 16 23:03:04 crc kubenswrapper[4865]: I0216 23:03:04.386617 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e4a6413-5a03-450d-8bb2-abf70fdead46-operator-scripts\") pod \"glance-c32f-account-create-update-rf5cg\" (UID: \"0e4a6413-5a03-450d-8bb2-abf70fdead46\") " pod="openstack/glance-c32f-account-create-update-rf5cg" Feb 16 23:03:04 crc kubenswrapper[4865]: I0216 23:03:04.386720 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxc78\" (UniqueName: \"kubernetes.io/projected/0e4a6413-5a03-450d-8bb2-abf70fdead46-kube-api-access-rxc78\") pod \"glance-c32f-account-create-update-rf5cg\" (UID: \"0e4a6413-5a03-450d-8bb2-abf70fdead46\") " pod="openstack/glance-c32f-account-create-update-rf5cg" Feb 16 23:03:04 crc kubenswrapper[4865]: I0216 23:03:04.389969 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/effc5df0-2f7c-4d99-bf39-db1aa0c22c24-operator-scripts\") pod \"glance-db-create-bvlkv\" (UID: \"effc5df0-2f7c-4d99-bf39-db1aa0c22c24\") " pod="openstack/glance-db-create-bvlkv" Feb 16 23:03:04 crc kubenswrapper[4865]: I0216 23:03:04.390429 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e4a6413-5a03-450d-8bb2-abf70fdead46-operator-scripts\") pod \"glance-c32f-account-create-update-rf5cg\" (UID: \"0e4a6413-5a03-450d-8bb2-abf70fdead46\") " pod="openstack/glance-c32f-account-create-update-rf5cg" Feb 16 23:03:04 crc kubenswrapper[4865]: I0216 23:03:04.409967 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-rxc78\" (UniqueName: \"kubernetes.io/projected/0e4a6413-5a03-450d-8bb2-abf70fdead46-kube-api-access-rxc78\") pod \"glance-c32f-account-create-update-rf5cg\" (UID: \"0e4a6413-5a03-450d-8bb2-abf70fdead46\") " pod="openstack/glance-c32f-account-create-update-rf5cg" Feb 16 23:03:04 crc kubenswrapper[4865]: I0216 23:03:04.410094 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcb2q\" (UniqueName: \"kubernetes.io/projected/effc5df0-2f7c-4d99-bf39-db1aa0c22c24-kube-api-access-dcb2q\") pod \"glance-db-create-bvlkv\" (UID: \"effc5df0-2f7c-4d99-bf39-db1aa0c22c24\") " pod="openstack/glance-db-create-bvlkv" Feb 16 23:03:04 crc kubenswrapper[4865]: I0216 23:03:04.424812 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a65950b-74f6-4519-a835-53c4a1ea0189" path="/var/lib/kubelet/pods/1a65950b-74f6-4519-a835-53c4a1ea0189/volumes" Feb 16 23:03:04 crc kubenswrapper[4865]: I0216 23:03:04.495166 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-bvlkv" Feb 16 23:03:04 crc kubenswrapper[4865]: I0216 23:03:04.506767 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-c32f-account-create-update-rf5cg" Feb 16 23:03:05 crc kubenswrapper[4865]: I0216 23:03:05.025922 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-c32f-account-create-update-rf5cg"] Feb 16 23:03:05 crc kubenswrapper[4865]: I0216 23:03:05.081527 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-bvlkv"] Feb 16 23:03:05 crc kubenswrapper[4865]: W0216 23:03:05.087063 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeffc5df0_2f7c_4d99_bf39_db1aa0c22c24.slice/crio-4940157a9cb7157850b4c8589c10d640265efd6847c2615a23df800e3316724f WatchSource:0}: Error finding container 4940157a9cb7157850b4c8589c10d640265efd6847c2615a23df800e3316724f: Status 404 returned error can't find the container with id 4940157a9cb7157850b4c8589c10d640265efd6847c2615a23df800e3316724f Feb 16 23:03:05 crc kubenswrapper[4865]: I0216 23:03:05.277161 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-c32f-account-create-update-rf5cg" event={"ID":"0e4a6413-5a03-450d-8bb2-abf70fdead46","Type":"ContainerStarted","Data":"4d077bd02325e7efde99629418a99e7156ed1893968d793927d1727ee333a69f"} Feb 16 23:03:05 crc kubenswrapper[4865]: I0216 23:03:05.279580 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-bvlkv" event={"ID":"effc5df0-2f7c-4d99-bf39-db1aa0c22c24","Type":"ContainerStarted","Data":"4940157a9cb7157850b4c8589c10d640265efd6847c2615a23df800e3316724f"} Feb 16 23:03:05 crc kubenswrapper[4865]: I0216 23:03:05.976912 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-f9dhc"] Feb 16 23:03:05 crc kubenswrapper[4865]: I0216 23:03:05.978526 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-f9dhc" Feb 16 23:03:05 crc kubenswrapper[4865]: I0216 23:03:05.985922 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 16 23:03:05 crc kubenswrapper[4865]: I0216 23:03:05.991886 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-f9dhc"] Feb 16 23:03:06 crc kubenswrapper[4865]: I0216 23:03:06.122917 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/314370d0-0193-485a-8bed-dd37e1535b65-operator-scripts\") pod \"root-account-create-update-f9dhc\" (UID: \"314370d0-0193-485a-8bed-dd37e1535b65\") " pod="openstack/root-account-create-update-f9dhc" Feb 16 23:03:06 crc kubenswrapper[4865]: I0216 23:03:06.123353 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7vvs\" (UniqueName: \"kubernetes.io/projected/314370d0-0193-485a-8bed-dd37e1535b65-kube-api-access-q7vvs\") pod \"root-account-create-update-f9dhc\" (UID: \"314370d0-0193-485a-8bed-dd37e1535b65\") " pod="openstack/root-account-create-update-f9dhc" Feb 16 23:03:06 crc kubenswrapper[4865]: I0216 23:03:06.225064 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/314370d0-0193-485a-8bed-dd37e1535b65-operator-scripts\") pod \"root-account-create-update-f9dhc\" (UID: \"314370d0-0193-485a-8bed-dd37e1535b65\") " pod="openstack/root-account-create-update-f9dhc" Feb 16 23:03:06 crc kubenswrapper[4865]: I0216 23:03:06.225487 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7vvs\" (UniqueName: \"kubernetes.io/projected/314370d0-0193-485a-8bed-dd37e1535b65-kube-api-access-q7vvs\") pod \"root-account-create-update-f9dhc\" (UID: 
\"314370d0-0193-485a-8bed-dd37e1535b65\") " pod="openstack/root-account-create-update-f9dhc" Feb 16 23:03:06 crc kubenswrapper[4865]: I0216 23:03:06.225965 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/314370d0-0193-485a-8bed-dd37e1535b65-operator-scripts\") pod \"root-account-create-update-f9dhc\" (UID: \"314370d0-0193-485a-8bed-dd37e1535b65\") " pod="openstack/root-account-create-update-f9dhc" Feb 16 23:03:06 crc kubenswrapper[4865]: I0216 23:03:06.252598 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7vvs\" (UniqueName: \"kubernetes.io/projected/314370d0-0193-485a-8bed-dd37e1535b65-kube-api-access-q7vvs\") pod \"root-account-create-update-f9dhc\" (UID: \"314370d0-0193-485a-8bed-dd37e1535b65\") " pod="openstack/root-account-create-update-f9dhc" Feb 16 23:03:06 crc kubenswrapper[4865]: I0216 23:03:06.289246 4865 generic.go:334] "Generic (PLEG): container finished" podID="effc5df0-2f7c-4d99-bf39-db1aa0c22c24" containerID="f8a285d945dc4cf31031cf9a42178f0042c375a65b7829b914be4fe5b2fefa5e" exitCode=0 Feb 16 23:03:06 crc kubenswrapper[4865]: I0216 23:03:06.289316 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-bvlkv" event={"ID":"effc5df0-2f7c-4d99-bf39-db1aa0c22c24","Type":"ContainerDied","Data":"f8a285d945dc4cf31031cf9a42178f0042c375a65b7829b914be4fe5b2fefa5e"} Feb 16 23:03:06 crc kubenswrapper[4865]: I0216 23:03:06.291249 4865 generic.go:334] "Generic (PLEG): container finished" podID="0e4a6413-5a03-450d-8bb2-abf70fdead46" containerID="eddcaedee1dd4f4cf42b4f6c00dcce90f2a9ee23321de00874638723e79f0fc0" exitCode=0 Feb 16 23:03:06 crc kubenswrapper[4865]: I0216 23:03:06.292031 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-c32f-account-create-update-rf5cg" 
event={"ID":"0e4a6413-5a03-450d-8bb2-abf70fdead46","Type":"ContainerDied","Data":"eddcaedee1dd4f4cf42b4f6c00dcce90f2a9ee23321de00874638723e79f0fc0"} Feb 16 23:03:06 crc kubenswrapper[4865]: I0216 23:03:06.308578 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-f9dhc" Feb 16 23:03:06 crc kubenswrapper[4865]: I0216 23:03:06.833426 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-f9dhc"] Feb 16 23:03:06 crc kubenswrapper[4865]: W0216 23:03:06.844602 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod314370d0_0193_485a_8bed_dd37e1535b65.slice/crio-3bac01de6172c049b6fa3c19236a7af26e5456ec3d223335a4b1b8e2587c5e91 WatchSource:0}: Error finding container 3bac01de6172c049b6fa3c19236a7af26e5456ec3d223335a4b1b8e2587c5e91: Status 404 returned error can't find the container with id 3bac01de6172c049b6fa3c19236a7af26e5456ec3d223335a4b1b8e2587c5e91 Feb 16 23:03:07 crc kubenswrapper[4865]: I0216 23:03:07.308192 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-f9dhc" event={"ID":"314370d0-0193-485a-8bed-dd37e1535b65","Type":"ContainerStarted","Data":"bdc0f90485d9a58e592b824d00e990259d3bcdc4f55d2a284a090556ce125509"} Feb 16 23:03:07 crc kubenswrapper[4865]: I0216 23:03:07.308796 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-f9dhc" event={"ID":"314370d0-0193-485a-8bed-dd37e1535b65","Type":"ContainerStarted","Data":"3bac01de6172c049b6fa3c19236a7af26e5456ec3d223335a4b1b8e2587c5e91"} Feb 16 23:03:07 crc kubenswrapper[4865]: I0216 23:03:07.338563 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-f9dhc" podStartSLOduration=2.338534284 podStartE2EDuration="2.338534284s" podCreationTimestamp="2026-02-16 23:03:05 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 23:03:07.326104623 +0000 UTC m=+1027.649811634" watchObservedRunningTime="2026-02-16 23:03:07.338534284 +0000 UTC m=+1027.662241285" Feb 16 23:03:07 crc kubenswrapper[4865]: I0216 23:03:07.809076 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-bvlkv" Feb 16 23:03:07 crc kubenswrapper[4865]: I0216 23:03:07.819638 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-c32f-account-create-update-rf5cg" Feb 16 23:03:07 crc kubenswrapper[4865]: I0216 23:03:07.962360 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e4a6413-5a03-450d-8bb2-abf70fdead46-operator-scripts\") pod \"0e4a6413-5a03-450d-8bb2-abf70fdead46\" (UID: \"0e4a6413-5a03-450d-8bb2-abf70fdead46\") " Feb 16 23:03:07 crc kubenswrapper[4865]: I0216 23:03:07.962413 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxc78\" (UniqueName: \"kubernetes.io/projected/0e4a6413-5a03-450d-8bb2-abf70fdead46-kube-api-access-rxc78\") pod \"0e4a6413-5a03-450d-8bb2-abf70fdead46\" (UID: \"0e4a6413-5a03-450d-8bb2-abf70fdead46\") " Feb 16 23:03:07 crc kubenswrapper[4865]: I0216 23:03:07.962493 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dcb2q\" (UniqueName: \"kubernetes.io/projected/effc5df0-2f7c-4d99-bf39-db1aa0c22c24-kube-api-access-dcb2q\") pod \"effc5df0-2f7c-4d99-bf39-db1aa0c22c24\" (UID: \"effc5df0-2f7c-4d99-bf39-db1aa0c22c24\") " Feb 16 23:03:07 crc kubenswrapper[4865]: I0216 23:03:07.962577 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/effc5df0-2f7c-4d99-bf39-db1aa0c22c24-operator-scripts\") pod \"effc5df0-2f7c-4d99-bf39-db1aa0c22c24\" (UID: \"effc5df0-2f7c-4d99-bf39-db1aa0c22c24\") " Feb 16 23:03:07 crc kubenswrapper[4865]: I0216 23:03:07.962946 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e4a6413-5a03-450d-8bb2-abf70fdead46-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0e4a6413-5a03-450d-8bb2-abf70fdead46" (UID: "0e4a6413-5a03-450d-8bb2-abf70fdead46"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:03:07 crc kubenswrapper[4865]: I0216 23:03:07.963535 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/effc5df0-2f7c-4d99-bf39-db1aa0c22c24-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "effc5df0-2f7c-4d99-bf39-db1aa0c22c24" (UID: "effc5df0-2f7c-4d99-bf39-db1aa0c22c24"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:03:07 crc kubenswrapper[4865]: I0216 23:03:07.970790 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e4a6413-5a03-450d-8bb2-abf70fdead46-kube-api-access-rxc78" (OuterVolumeSpecName: "kube-api-access-rxc78") pod "0e4a6413-5a03-450d-8bb2-abf70fdead46" (UID: "0e4a6413-5a03-450d-8bb2-abf70fdead46"). InnerVolumeSpecName "kube-api-access-rxc78". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:03:07 crc kubenswrapper[4865]: I0216 23:03:07.984418 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/effc5df0-2f7c-4d99-bf39-db1aa0c22c24-kube-api-access-dcb2q" (OuterVolumeSpecName: "kube-api-access-dcb2q") pod "effc5df0-2f7c-4d99-bf39-db1aa0c22c24" (UID: "effc5df0-2f7c-4d99-bf39-db1aa0c22c24"). InnerVolumeSpecName "kube-api-access-dcb2q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:03:08 crc kubenswrapper[4865]: I0216 23:03:08.064393 4865 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/effc5df0-2f7c-4d99-bf39-db1aa0c22c24-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 23:03:08 crc kubenswrapper[4865]: I0216 23:03:08.064428 4865 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e4a6413-5a03-450d-8bb2-abf70fdead46-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 23:03:08 crc kubenswrapper[4865]: I0216 23:03:08.064438 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxc78\" (UniqueName: \"kubernetes.io/projected/0e4a6413-5a03-450d-8bb2-abf70fdead46-kube-api-access-rxc78\") on node \"crc\" DevicePath \"\"" Feb 16 23:03:08 crc kubenswrapper[4865]: I0216 23:03:08.064450 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dcb2q\" (UniqueName: \"kubernetes.io/projected/effc5df0-2f7c-4d99-bf39-db1aa0c22c24-kube-api-access-dcb2q\") on node \"crc\" DevicePath \"\"" Feb 16 23:03:08 crc kubenswrapper[4865]: I0216 23:03:08.316421 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-c32f-account-create-update-rf5cg" Feb 16 23:03:08 crc kubenswrapper[4865]: I0216 23:03:08.316417 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-c32f-account-create-update-rf5cg" event={"ID":"0e4a6413-5a03-450d-8bb2-abf70fdead46","Type":"ContainerDied","Data":"4d077bd02325e7efde99629418a99e7156ed1893968d793927d1727ee333a69f"} Feb 16 23:03:08 crc kubenswrapper[4865]: I0216 23:03:08.317670 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d077bd02325e7efde99629418a99e7156ed1893968d793927d1727ee333a69f" Feb 16 23:03:08 crc kubenswrapper[4865]: I0216 23:03:08.318262 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-bvlkv" Feb 16 23:03:08 crc kubenswrapper[4865]: I0216 23:03:08.318351 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-bvlkv" event={"ID":"effc5df0-2f7c-4d99-bf39-db1aa0c22c24","Type":"ContainerDied","Data":"4940157a9cb7157850b4c8589c10d640265efd6847c2615a23df800e3316724f"} Feb 16 23:03:08 crc kubenswrapper[4865]: I0216 23:03:08.318427 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4940157a9cb7157850b4c8589c10d640265efd6847c2615a23df800e3316724f" Feb 16 23:03:08 crc kubenswrapper[4865]: I0216 23:03:08.319671 4865 generic.go:334] "Generic (PLEG): container finished" podID="314370d0-0193-485a-8bed-dd37e1535b65" containerID="bdc0f90485d9a58e592b824d00e990259d3bcdc4f55d2a284a090556ce125509" exitCode=0 Feb 16 23:03:08 crc kubenswrapper[4865]: I0216 23:03:08.319711 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-f9dhc" event={"ID":"314370d0-0193-485a-8bed-dd37e1535b65","Type":"ContainerDied","Data":"bdc0f90485d9a58e592b824d00e990259d3bcdc4f55d2a284a090556ce125509"} Feb 16 23:03:09 crc kubenswrapper[4865]: I0216 23:03:09.306856 4865 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openstack/glance-db-sync-2dd9q"] Feb 16 23:03:09 crc kubenswrapper[4865]: E0216 23:03:09.307820 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e4a6413-5a03-450d-8bb2-abf70fdead46" containerName="mariadb-account-create-update" Feb 16 23:03:09 crc kubenswrapper[4865]: I0216 23:03:09.307852 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e4a6413-5a03-450d-8bb2-abf70fdead46" containerName="mariadb-account-create-update" Feb 16 23:03:09 crc kubenswrapper[4865]: E0216 23:03:09.307890 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="effc5df0-2f7c-4d99-bf39-db1aa0c22c24" containerName="mariadb-database-create" Feb 16 23:03:09 crc kubenswrapper[4865]: I0216 23:03:09.307899 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="effc5df0-2f7c-4d99-bf39-db1aa0c22c24" containerName="mariadb-database-create" Feb 16 23:03:09 crc kubenswrapper[4865]: I0216 23:03:09.308104 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="effc5df0-2f7c-4d99-bf39-db1aa0c22c24" containerName="mariadb-database-create" Feb 16 23:03:09 crc kubenswrapper[4865]: I0216 23:03:09.308127 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e4a6413-5a03-450d-8bb2-abf70fdead46" containerName="mariadb-account-create-update" Feb 16 23:03:09 crc kubenswrapper[4865]: I0216 23:03:09.308794 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-2dd9q" Feb 16 23:03:09 crc kubenswrapper[4865]: I0216 23:03:09.311498 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 16 23:03:09 crc kubenswrapper[4865]: I0216 23:03:09.314306 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-wch75" Feb 16 23:03:09 crc kubenswrapper[4865]: I0216 23:03:09.320284 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-2dd9q"] Feb 16 23:03:09 crc kubenswrapper[4865]: I0216 23:03:09.491565 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knl4z\" (UniqueName: \"kubernetes.io/projected/38c44ae1-9f36-4df8-ba88-443ca78fe47a-kube-api-access-knl4z\") pod \"glance-db-sync-2dd9q\" (UID: \"38c44ae1-9f36-4df8-ba88-443ca78fe47a\") " pod="openstack/glance-db-sync-2dd9q" Feb 16 23:03:09 crc kubenswrapper[4865]: I0216 23:03:09.491776 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/38c44ae1-9f36-4df8-ba88-443ca78fe47a-db-sync-config-data\") pod \"glance-db-sync-2dd9q\" (UID: \"38c44ae1-9f36-4df8-ba88-443ca78fe47a\") " pod="openstack/glance-db-sync-2dd9q" Feb 16 23:03:09 crc kubenswrapper[4865]: I0216 23:03:09.491888 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38c44ae1-9f36-4df8-ba88-443ca78fe47a-config-data\") pod \"glance-db-sync-2dd9q\" (UID: \"38c44ae1-9f36-4df8-ba88-443ca78fe47a\") " pod="openstack/glance-db-sync-2dd9q" Feb 16 23:03:09 crc kubenswrapper[4865]: I0216 23:03:09.491992 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/38c44ae1-9f36-4df8-ba88-443ca78fe47a-combined-ca-bundle\") pod \"glance-db-sync-2dd9q\" (UID: \"38c44ae1-9f36-4df8-ba88-443ca78fe47a\") " pod="openstack/glance-db-sync-2dd9q" Feb 16 23:03:09 crc kubenswrapper[4865]: I0216 23:03:09.593518 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/38c44ae1-9f36-4df8-ba88-443ca78fe47a-db-sync-config-data\") pod \"glance-db-sync-2dd9q\" (UID: \"38c44ae1-9f36-4df8-ba88-443ca78fe47a\") " pod="openstack/glance-db-sync-2dd9q" Feb 16 23:03:09 crc kubenswrapper[4865]: I0216 23:03:09.593599 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38c44ae1-9f36-4df8-ba88-443ca78fe47a-config-data\") pod \"glance-db-sync-2dd9q\" (UID: \"38c44ae1-9f36-4df8-ba88-443ca78fe47a\") " pod="openstack/glance-db-sync-2dd9q" Feb 16 23:03:09 crc kubenswrapper[4865]: I0216 23:03:09.593662 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38c44ae1-9f36-4df8-ba88-443ca78fe47a-combined-ca-bundle\") pod \"glance-db-sync-2dd9q\" (UID: \"38c44ae1-9f36-4df8-ba88-443ca78fe47a\") " pod="openstack/glance-db-sync-2dd9q" Feb 16 23:03:09 crc kubenswrapper[4865]: I0216 23:03:09.593707 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knl4z\" (UniqueName: \"kubernetes.io/projected/38c44ae1-9f36-4df8-ba88-443ca78fe47a-kube-api-access-knl4z\") pod \"glance-db-sync-2dd9q\" (UID: \"38c44ae1-9f36-4df8-ba88-443ca78fe47a\") " pod="openstack/glance-db-sync-2dd9q" Feb 16 23:03:09 crc kubenswrapper[4865]: I0216 23:03:09.602094 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/38c44ae1-9f36-4df8-ba88-443ca78fe47a-db-sync-config-data\") pod \"glance-db-sync-2dd9q\" (UID: 
\"38c44ae1-9f36-4df8-ba88-443ca78fe47a\") " pod="openstack/glance-db-sync-2dd9q" Feb 16 23:03:09 crc kubenswrapper[4865]: I0216 23:03:09.604702 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38c44ae1-9f36-4df8-ba88-443ca78fe47a-config-data\") pod \"glance-db-sync-2dd9q\" (UID: \"38c44ae1-9f36-4df8-ba88-443ca78fe47a\") " pod="openstack/glance-db-sync-2dd9q" Feb 16 23:03:09 crc kubenswrapper[4865]: I0216 23:03:09.609628 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38c44ae1-9f36-4df8-ba88-443ca78fe47a-combined-ca-bundle\") pod \"glance-db-sync-2dd9q\" (UID: \"38c44ae1-9f36-4df8-ba88-443ca78fe47a\") " pod="openstack/glance-db-sync-2dd9q" Feb 16 23:03:09 crc kubenswrapper[4865]: I0216 23:03:09.614783 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knl4z\" (UniqueName: \"kubernetes.io/projected/38c44ae1-9f36-4df8-ba88-443ca78fe47a-kube-api-access-knl4z\") pod \"glance-db-sync-2dd9q\" (UID: \"38c44ae1-9f36-4df8-ba88-443ca78fe47a\") " pod="openstack/glance-db-sync-2dd9q" Feb 16 23:03:09 crc kubenswrapper[4865]: I0216 23:03:09.628848 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-2dd9q" Feb 16 23:03:09 crc kubenswrapper[4865]: I0216 23:03:09.720076 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-f9dhc" Feb 16 23:03:09 crc kubenswrapper[4865]: I0216 23:03:09.903922 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/314370d0-0193-485a-8bed-dd37e1535b65-operator-scripts\") pod \"314370d0-0193-485a-8bed-dd37e1535b65\" (UID: \"314370d0-0193-485a-8bed-dd37e1535b65\") " Feb 16 23:03:09 crc kubenswrapper[4865]: I0216 23:03:09.904423 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7vvs\" (UniqueName: \"kubernetes.io/projected/314370d0-0193-485a-8bed-dd37e1535b65-kube-api-access-q7vvs\") pod \"314370d0-0193-485a-8bed-dd37e1535b65\" (UID: \"314370d0-0193-485a-8bed-dd37e1535b65\") " Feb 16 23:03:09 crc kubenswrapper[4865]: I0216 23:03:09.905658 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/314370d0-0193-485a-8bed-dd37e1535b65-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "314370d0-0193-485a-8bed-dd37e1535b65" (UID: "314370d0-0193-485a-8bed-dd37e1535b65"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:03:09 crc kubenswrapper[4865]: I0216 23:03:09.931188 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/314370d0-0193-485a-8bed-dd37e1535b65-kube-api-access-q7vvs" (OuterVolumeSpecName: "kube-api-access-q7vvs") pod "314370d0-0193-485a-8bed-dd37e1535b65" (UID: "314370d0-0193-485a-8bed-dd37e1535b65"). InnerVolumeSpecName "kube-api-access-q7vvs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:03:10 crc kubenswrapper[4865]: I0216 23:03:10.006757 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7vvs\" (UniqueName: \"kubernetes.io/projected/314370d0-0193-485a-8bed-dd37e1535b65-kube-api-access-q7vvs\") on node \"crc\" DevicePath \"\"" Feb 16 23:03:10 crc kubenswrapper[4865]: I0216 23:03:10.006794 4865 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/314370d0-0193-485a-8bed-dd37e1535b65-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 23:03:10 crc kubenswrapper[4865]: I0216 23:03:10.048090 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-mzw96"] Feb 16 23:03:10 crc kubenswrapper[4865]: E0216 23:03:10.048448 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="314370d0-0193-485a-8bed-dd37e1535b65" containerName="mariadb-account-create-update" Feb 16 23:03:10 crc kubenswrapper[4865]: I0216 23:03:10.048466 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="314370d0-0193-485a-8bed-dd37e1535b65" containerName="mariadb-account-create-update" Feb 16 23:03:10 crc kubenswrapper[4865]: I0216 23:03:10.048647 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="314370d0-0193-485a-8bed-dd37e1535b65" containerName="mariadb-account-create-update" Feb 16 23:03:10 crc kubenswrapper[4865]: I0216 23:03:10.049177 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-mzw96" Feb 16 23:03:10 crc kubenswrapper[4865]: I0216 23:03:10.071607 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-mzw96"] Feb 16 23:03:10 crc kubenswrapper[4865]: I0216 23:03:10.084087 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-82d6-account-create-update-2499b"] Feb 16 23:03:10 crc kubenswrapper[4865]: I0216 23:03:10.085204 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-82d6-account-create-update-2499b" Feb 16 23:03:10 crc kubenswrapper[4865]: I0216 23:03:10.087373 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 16 23:03:10 crc kubenswrapper[4865]: I0216 23:03:10.104499 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-82d6-account-create-update-2499b"] Feb 16 23:03:10 crc kubenswrapper[4865]: I0216 23:03:10.130775 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-2dd9q"] Feb 16 23:03:10 crc kubenswrapper[4865]: I0216 23:03:10.209846 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3421511-356b-4a61-ac23-fa0915c8a6df-operator-scripts\") pod \"keystone-db-create-mzw96\" (UID: \"b3421511-356b-4a61-ac23-fa0915c8a6df\") " pod="openstack/keystone-db-create-mzw96" Feb 16 23:03:10 crc kubenswrapper[4865]: I0216 23:03:10.210342 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjj6m\" (UniqueName: \"kubernetes.io/projected/05a77c1b-59aa-4d46-8bb1-2bd3027ea3e9-kube-api-access-cjj6m\") pod \"keystone-82d6-account-create-update-2499b\" (UID: \"05a77c1b-59aa-4d46-8bb1-2bd3027ea3e9\") " pod="openstack/keystone-82d6-account-create-update-2499b" Feb 16 23:03:10 crc kubenswrapper[4865]: I0216 
23:03:10.210869 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9978\" (UniqueName: \"kubernetes.io/projected/b3421511-356b-4a61-ac23-fa0915c8a6df-kube-api-access-t9978\") pod \"keystone-db-create-mzw96\" (UID: \"b3421511-356b-4a61-ac23-fa0915c8a6df\") " pod="openstack/keystone-db-create-mzw96" Feb 16 23:03:10 crc kubenswrapper[4865]: I0216 23:03:10.211106 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/05a77c1b-59aa-4d46-8bb1-2bd3027ea3e9-operator-scripts\") pod \"keystone-82d6-account-create-update-2499b\" (UID: \"05a77c1b-59aa-4d46-8bb1-2bd3027ea3e9\") " pod="openstack/keystone-82d6-account-create-update-2499b" Feb 16 23:03:10 crc kubenswrapper[4865]: I0216 23:03:10.246280 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-2287x"] Feb 16 23:03:10 crc kubenswrapper[4865]: I0216 23:03:10.247263 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-2287x" Feb 16 23:03:10 crc kubenswrapper[4865]: I0216 23:03:10.261709 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-2287x"] Feb 16 23:03:10 crc kubenswrapper[4865]: I0216 23:03:10.312866 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/05a77c1b-59aa-4d46-8bb1-2bd3027ea3e9-operator-scripts\") pod \"keystone-82d6-account-create-update-2499b\" (UID: \"05a77c1b-59aa-4d46-8bb1-2bd3027ea3e9\") " pod="openstack/keystone-82d6-account-create-update-2499b" Feb 16 23:03:10 crc kubenswrapper[4865]: I0216 23:03:10.312926 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3421511-356b-4a61-ac23-fa0915c8a6df-operator-scripts\") pod \"keystone-db-create-mzw96\" (UID: \"b3421511-356b-4a61-ac23-fa0915c8a6df\") " pod="openstack/keystone-db-create-mzw96" Feb 16 23:03:10 crc kubenswrapper[4865]: I0216 23:03:10.312965 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjj6m\" (UniqueName: \"kubernetes.io/projected/05a77c1b-59aa-4d46-8bb1-2bd3027ea3e9-kube-api-access-cjj6m\") pod \"keystone-82d6-account-create-update-2499b\" (UID: \"05a77c1b-59aa-4d46-8bb1-2bd3027ea3e9\") " pod="openstack/keystone-82d6-account-create-update-2499b" Feb 16 23:03:10 crc kubenswrapper[4865]: I0216 23:03:10.313026 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9978\" (UniqueName: \"kubernetes.io/projected/b3421511-356b-4a61-ac23-fa0915c8a6df-kube-api-access-t9978\") pod \"keystone-db-create-mzw96\" (UID: \"b3421511-356b-4a61-ac23-fa0915c8a6df\") " pod="openstack/keystone-db-create-mzw96" Feb 16 23:03:10 crc kubenswrapper[4865]: I0216 23:03:10.313929 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/05a77c1b-59aa-4d46-8bb1-2bd3027ea3e9-operator-scripts\") pod \"keystone-82d6-account-create-update-2499b\" (UID: \"05a77c1b-59aa-4d46-8bb1-2bd3027ea3e9\") " pod="openstack/keystone-82d6-account-create-update-2499b" Feb 16 23:03:10 crc kubenswrapper[4865]: I0216 23:03:10.313960 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3421511-356b-4a61-ac23-fa0915c8a6df-operator-scripts\") pod \"keystone-db-create-mzw96\" (UID: \"b3421511-356b-4a61-ac23-fa0915c8a6df\") " pod="openstack/keystone-db-create-mzw96" Feb 16 23:03:10 crc kubenswrapper[4865]: I0216 23:03:10.335689 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjj6m\" (UniqueName: \"kubernetes.io/projected/05a77c1b-59aa-4d46-8bb1-2bd3027ea3e9-kube-api-access-cjj6m\") pod \"keystone-82d6-account-create-update-2499b\" (UID: \"05a77c1b-59aa-4d46-8bb1-2bd3027ea3e9\") " pod="openstack/keystone-82d6-account-create-update-2499b" Feb 16 23:03:10 crc kubenswrapper[4865]: I0216 23:03:10.338244 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9978\" (UniqueName: \"kubernetes.io/projected/b3421511-356b-4a61-ac23-fa0915c8a6df-kube-api-access-t9978\") pod \"keystone-db-create-mzw96\" (UID: \"b3421511-356b-4a61-ac23-fa0915c8a6df\") " pod="openstack/keystone-db-create-mzw96" Feb 16 23:03:10 crc kubenswrapper[4865]: I0216 23:03:10.344840 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-2dd9q" event={"ID":"38c44ae1-9f36-4df8-ba88-443ca78fe47a","Type":"ContainerStarted","Data":"f6cc21af8cd22f5cb2d3e34e4384cd138a674768417468d9b4f7e80123a41285"} Feb 16 23:03:10 crc kubenswrapper[4865]: I0216 23:03:10.348073 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-f9dhc" 
event={"ID":"314370d0-0193-485a-8bed-dd37e1535b65","Type":"ContainerDied","Data":"3bac01de6172c049b6fa3c19236a7af26e5456ec3d223335a4b1b8e2587c5e91"} Feb 16 23:03:10 crc kubenswrapper[4865]: I0216 23:03:10.348122 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3bac01de6172c049b6fa3c19236a7af26e5456ec3d223335a4b1b8e2587c5e91" Feb 16 23:03:10 crc kubenswrapper[4865]: I0216 23:03:10.348201 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-f9dhc" Feb 16 23:03:10 crc kubenswrapper[4865]: I0216 23:03:10.361535 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-93e7-account-create-update-2v9fc"] Feb 16 23:03:10 crc kubenswrapper[4865]: I0216 23:03:10.362751 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-93e7-account-create-update-2v9fc" Feb 16 23:03:10 crc kubenswrapper[4865]: I0216 23:03:10.364872 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-mzw96" Feb 16 23:03:10 crc kubenswrapper[4865]: I0216 23:03:10.367941 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 16 23:03:10 crc kubenswrapper[4865]: I0216 23:03:10.400573 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-93e7-account-create-update-2v9fc"] Feb 16 23:03:10 crc kubenswrapper[4865]: I0216 23:03:10.400947 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-82d6-account-create-update-2499b" Feb 16 23:03:10 crc kubenswrapper[4865]: I0216 23:03:10.414201 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7699f\" (UniqueName: \"kubernetes.io/projected/6c7cbf3b-ff30-4ffa-a6ce-1df0143bdade-kube-api-access-7699f\") pod \"placement-db-create-2287x\" (UID: \"6c7cbf3b-ff30-4ffa-a6ce-1df0143bdade\") " pod="openstack/placement-db-create-2287x" Feb 16 23:03:10 crc kubenswrapper[4865]: I0216 23:03:10.414248 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c7cbf3b-ff30-4ffa-a6ce-1df0143bdade-operator-scripts\") pod \"placement-db-create-2287x\" (UID: \"6c7cbf3b-ff30-4ffa-a6ce-1df0143bdade\") " pod="openstack/placement-db-create-2287x" Feb 16 23:03:10 crc kubenswrapper[4865]: I0216 23:03:10.516685 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7699f\" (UniqueName: \"kubernetes.io/projected/6c7cbf3b-ff30-4ffa-a6ce-1df0143bdade-kube-api-access-7699f\") pod \"placement-db-create-2287x\" (UID: \"6c7cbf3b-ff30-4ffa-a6ce-1df0143bdade\") " pod="openstack/placement-db-create-2287x" Feb 16 23:03:10 crc kubenswrapper[4865]: I0216 23:03:10.517035 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c7cbf3b-ff30-4ffa-a6ce-1df0143bdade-operator-scripts\") pod \"placement-db-create-2287x\" (UID: \"6c7cbf3b-ff30-4ffa-a6ce-1df0143bdade\") " pod="openstack/placement-db-create-2287x" Feb 16 23:03:10 crc kubenswrapper[4865]: I0216 23:03:10.517062 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7dwr\" (UniqueName: \"kubernetes.io/projected/49da7ade-9fd4-4cb1-a47a-4a07a038a7e8-kube-api-access-z7dwr\") pod 
\"placement-93e7-account-create-update-2v9fc\" (UID: \"49da7ade-9fd4-4cb1-a47a-4a07a038a7e8\") " pod="openstack/placement-93e7-account-create-update-2v9fc" Feb 16 23:03:10 crc kubenswrapper[4865]: I0216 23:03:10.517172 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49da7ade-9fd4-4cb1-a47a-4a07a038a7e8-operator-scripts\") pod \"placement-93e7-account-create-update-2v9fc\" (UID: \"49da7ade-9fd4-4cb1-a47a-4a07a038a7e8\") " pod="openstack/placement-93e7-account-create-update-2v9fc" Feb 16 23:03:10 crc kubenswrapper[4865]: I0216 23:03:10.517848 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c7cbf3b-ff30-4ffa-a6ce-1df0143bdade-operator-scripts\") pod \"placement-db-create-2287x\" (UID: \"6c7cbf3b-ff30-4ffa-a6ce-1df0143bdade\") " pod="openstack/placement-db-create-2287x" Feb 16 23:03:10 crc kubenswrapper[4865]: I0216 23:03:10.540991 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7699f\" (UniqueName: \"kubernetes.io/projected/6c7cbf3b-ff30-4ffa-a6ce-1df0143bdade-kube-api-access-7699f\") pod \"placement-db-create-2287x\" (UID: \"6c7cbf3b-ff30-4ffa-a6ce-1df0143bdade\") " pod="openstack/placement-db-create-2287x" Feb 16 23:03:10 crc kubenswrapper[4865]: I0216 23:03:10.576907 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-2287x" Feb 16 23:03:10 crc kubenswrapper[4865]: I0216 23:03:10.618776 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7dwr\" (UniqueName: \"kubernetes.io/projected/49da7ade-9fd4-4cb1-a47a-4a07a038a7e8-kube-api-access-z7dwr\") pod \"placement-93e7-account-create-update-2v9fc\" (UID: \"49da7ade-9fd4-4cb1-a47a-4a07a038a7e8\") " pod="openstack/placement-93e7-account-create-update-2v9fc" Feb 16 23:03:10 crc kubenswrapper[4865]: I0216 23:03:10.618890 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49da7ade-9fd4-4cb1-a47a-4a07a038a7e8-operator-scripts\") pod \"placement-93e7-account-create-update-2v9fc\" (UID: \"49da7ade-9fd4-4cb1-a47a-4a07a038a7e8\") " pod="openstack/placement-93e7-account-create-update-2v9fc" Feb 16 23:03:10 crc kubenswrapper[4865]: I0216 23:03:10.619691 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49da7ade-9fd4-4cb1-a47a-4a07a038a7e8-operator-scripts\") pod \"placement-93e7-account-create-update-2v9fc\" (UID: \"49da7ade-9fd4-4cb1-a47a-4a07a038a7e8\") " pod="openstack/placement-93e7-account-create-update-2v9fc" Feb 16 23:03:10 crc kubenswrapper[4865]: I0216 23:03:10.640202 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7dwr\" (UniqueName: \"kubernetes.io/projected/49da7ade-9fd4-4cb1-a47a-4a07a038a7e8-kube-api-access-z7dwr\") pod \"placement-93e7-account-create-update-2v9fc\" (UID: \"49da7ade-9fd4-4cb1-a47a-4a07a038a7e8\") " pod="openstack/placement-93e7-account-create-update-2v9fc" Feb 16 23:03:10 crc kubenswrapper[4865]: I0216 23:03:10.718529 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-93e7-account-create-update-2v9fc" Feb 16 23:03:10 crc kubenswrapper[4865]: I0216 23:03:10.823576 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-mzw96"] Feb 16 23:03:10 crc kubenswrapper[4865]: I0216 23:03:10.857702 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-2287x"] Feb 16 23:03:10 crc kubenswrapper[4865]: W0216 23:03:10.892910 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c7cbf3b_ff30_4ffa_a6ce_1df0143bdade.slice/crio-57a53803ea92021278727c0aaf00247e4413c216c200fff4c233ee37a90f47e7 WatchSource:0}: Error finding container 57a53803ea92021278727c0aaf00247e4413c216c200fff4c233ee37a90f47e7: Status 404 returned error can't find the container with id 57a53803ea92021278727c0aaf00247e4413c216c200fff4c233ee37a90f47e7 Feb 16 23:03:10 crc kubenswrapper[4865]: I0216 23:03:10.918451 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-82d6-account-create-update-2499b"] Feb 16 23:03:11 crc kubenswrapper[4865]: I0216 23:03:11.134426 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7fd796d7df-w8bqw" Feb 16 23:03:11 crc kubenswrapper[4865]: I0216 23:03:11.224818 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-93e7-account-create-update-2v9fc"] Feb 16 23:03:11 crc kubenswrapper[4865]: W0216 23:03:11.250662 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod49da7ade_9fd4_4cb1_a47a_4a07a038a7e8.slice/crio-ff9fff80eb7ff88e3db6bf26092d9fb06a9e551a2714a5f1e3fd2f61ba9aef41 WatchSource:0}: Error finding container ff9fff80eb7ff88e3db6bf26092d9fb06a9e551a2714a5f1e3fd2f61ba9aef41: Status 404 returned error can't find the container with id 
ff9fff80eb7ff88e3db6bf26092d9fb06a9e551a2714a5f1e3fd2f61ba9aef41 Feb 16 23:03:11 crc kubenswrapper[4865]: I0216 23:03:11.398539 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-82d6-account-create-update-2499b" event={"ID":"05a77c1b-59aa-4d46-8bb1-2bd3027ea3e9","Type":"ContainerStarted","Data":"b7e42c485150ac43fbed280d237151298f364036252a06e541f4198b79fdf37f"} Feb 16 23:03:11 crc kubenswrapper[4865]: I0216 23:03:11.398605 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-82d6-account-create-update-2499b" event={"ID":"05a77c1b-59aa-4d46-8bb1-2bd3027ea3e9","Type":"ContainerStarted","Data":"ee6451d2148154f2b3b651e7ecfb754abcbdacdac32ac32b2acf264d74bb3b9b"} Feb 16 23:03:11 crc kubenswrapper[4865]: I0216 23:03:11.409450 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-93e7-account-create-update-2v9fc" event={"ID":"49da7ade-9fd4-4cb1-a47a-4a07a038a7e8","Type":"ContainerStarted","Data":"ff9fff80eb7ff88e3db6bf26092d9fb06a9e551a2714a5f1e3fd2f61ba9aef41"} Feb 16 23:03:11 crc kubenswrapper[4865]: I0216 23:03:11.454011 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-mzw96" event={"ID":"b3421511-356b-4a61-ac23-fa0915c8a6df","Type":"ContainerStarted","Data":"4cae800704f62e9c08d6da536231ab752df90fa63f0ca29785323930658b5fa6"} Feb 16 23:03:11 crc kubenswrapper[4865]: I0216 23:03:11.454077 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-mzw96" event={"ID":"b3421511-356b-4a61-ac23-fa0915c8a6df","Type":"ContainerStarted","Data":"7ee16d18a69f64c5b93e72d8fde7ecc7db14bbe51767672f95336c7f38761ccb"} Feb 16 23:03:11 crc kubenswrapper[4865]: I0216 23:03:11.478593 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-82d6-account-create-update-2499b" podStartSLOduration=1.478559687 podStartE2EDuration="1.478559687s" podCreationTimestamp="2026-02-16 23:03:10 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 23:03:11.435108702 +0000 UTC m=+1031.758815663" watchObservedRunningTime="2026-02-16 23:03:11.478559687 +0000 UTC m=+1031.802266648" Feb 16 23:03:11 crc kubenswrapper[4865]: I0216 23:03:11.523894 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-2287x" event={"ID":"6c7cbf3b-ff30-4ffa-a6ce-1df0143bdade","Type":"ContainerStarted","Data":"d984368f85c6b1a069862f46133da4addc17253548488e07fb5f6f11a31d3021"} Feb 16 23:03:11 crc kubenswrapper[4865]: I0216 23:03:11.523950 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-2287x" event={"ID":"6c7cbf3b-ff30-4ffa-a6ce-1df0143bdade","Type":"ContainerStarted","Data":"57a53803ea92021278727c0aaf00247e4413c216c200fff4c233ee37a90f47e7"} Feb 16 23:03:11 crc kubenswrapper[4865]: I0216 23:03:11.550623 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-w2npd" Feb 16 23:03:11 crc kubenswrapper[4865]: I0216 23:03:11.577285 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-w2npd"] Feb 16 23:03:11 crc kubenswrapper[4865]: I0216 23:03:11.605128 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-2287x" podStartSLOduration=1.6050922650000001 podStartE2EDuration="1.605092265s" podCreationTimestamp="2026-02-16 23:03:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 23:03:11.568454662 +0000 UTC m=+1031.892161623" watchObservedRunningTime="2026-02-16 23:03:11.605092265 +0000 UTC m=+1031.928799226" Feb 16 23:03:11 crc kubenswrapper[4865]: I0216 23:03:11.690429 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-2c6pd"] Feb 16 23:03:11 crc 
kubenswrapper[4865]: I0216 23:03:11.692444 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-2c6pd" Feb 16 23:03:11 crc kubenswrapper[4865]: I0216 23:03:11.711950 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-2c6pd"] Feb 16 23:03:11 crc kubenswrapper[4865]: I0216 23:03:11.716950 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqgcq\" (UniqueName: \"kubernetes.io/projected/354e44fb-0dd9-4935-b7b6-da13d52fb91c-kube-api-access-rqgcq\") pod \"dnsmasq-dns-698758b865-2c6pd\" (UID: \"354e44fb-0dd9-4935-b7b6-da13d52fb91c\") " pod="openstack/dnsmasq-dns-698758b865-2c6pd" Feb 16 23:03:11 crc kubenswrapper[4865]: I0216 23:03:11.717071 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/354e44fb-0dd9-4935-b7b6-da13d52fb91c-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-2c6pd\" (UID: \"354e44fb-0dd9-4935-b7b6-da13d52fb91c\") " pod="openstack/dnsmasq-dns-698758b865-2c6pd" Feb 16 23:03:11 crc kubenswrapper[4865]: I0216 23:03:11.717173 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/354e44fb-0dd9-4935-b7b6-da13d52fb91c-dns-svc\") pod \"dnsmasq-dns-698758b865-2c6pd\" (UID: \"354e44fb-0dd9-4935-b7b6-da13d52fb91c\") " pod="openstack/dnsmasq-dns-698758b865-2c6pd" Feb 16 23:03:11 crc kubenswrapper[4865]: I0216 23:03:11.717318 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/354e44fb-0dd9-4935-b7b6-da13d52fb91c-config\") pod \"dnsmasq-dns-698758b865-2c6pd\" (UID: \"354e44fb-0dd9-4935-b7b6-da13d52fb91c\") " pod="openstack/dnsmasq-dns-698758b865-2c6pd" Feb 16 23:03:11 crc kubenswrapper[4865]: I0216 
23:03:11.717442 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/354e44fb-0dd9-4935-b7b6-da13d52fb91c-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-2c6pd\" (UID: \"354e44fb-0dd9-4935-b7b6-da13d52fb91c\") " pod="openstack/dnsmasq-dns-698758b865-2c6pd" Feb 16 23:03:11 crc kubenswrapper[4865]: I0216 23:03:11.822662 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/354e44fb-0dd9-4935-b7b6-da13d52fb91c-config\") pod \"dnsmasq-dns-698758b865-2c6pd\" (UID: \"354e44fb-0dd9-4935-b7b6-da13d52fb91c\") " pod="openstack/dnsmasq-dns-698758b865-2c6pd" Feb 16 23:03:11 crc kubenswrapper[4865]: I0216 23:03:11.823021 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/354e44fb-0dd9-4935-b7b6-da13d52fb91c-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-2c6pd\" (UID: \"354e44fb-0dd9-4935-b7b6-da13d52fb91c\") " pod="openstack/dnsmasq-dns-698758b865-2c6pd" Feb 16 23:03:11 crc kubenswrapper[4865]: I0216 23:03:11.823059 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/354e44fb-0dd9-4935-b7b6-da13d52fb91c-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-2c6pd\" (UID: \"354e44fb-0dd9-4935-b7b6-da13d52fb91c\") " pod="openstack/dnsmasq-dns-698758b865-2c6pd" Feb 16 23:03:11 crc kubenswrapper[4865]: I0216 23:03:11.823076 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqgcq\" (UniqueName: \"kubernetes.io/projected/354e44fb-0dd9-4935-b7b6-da13d52fb91c-kube-api-access-rqgcq\") pod \"dnsmasq-dns-698758b865-2c6pd\" (UID: \"354e44fb-0dd9-4935-b7b6-da13d52fb91c\") " pod="openstack/dnsmasq-dns-698758b865-2c6pd" Feb 16 23:03:11 crc kubenswrapper[4865]: I0216 23:03:11.823095 4865 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/354e44fb-0dd9-4935-b7b6-da13d52fb91c-dns-svc\") pod \"dnsmasq-dns-698758b865-2c6pd\" (UID: \"354e44fb-0dd9-4935-b7b6-da13d52fb91c\") " pod="openstack/dnsmasq-dns-698758b865-2c6pd" Feb 16 23:03:11 crc kubenswrapper[4865]: I0216 23:03:11.823976 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/354e44fb-0dd9-4935-b7b6-da13d52fb91c-config\") pod \"dnsmasq-dns-698758b865-2c6pd\" (UID: \"354e44fb-0dd9-4935-b7b6-da13d52fb91c\") " pod="openstack/dnsmasq-dns-698758b865-2c6pd" Feb 16 23:03:11 crc kubenswrapper[4865]: I0216 23:03:11.824047 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/354e44fb-0dd9-4935-b7b6-da13d52fb91c-dns-svc\") pod \"dnsmasq-dns-698758b865-2c6pd\" (UID: \"354e44fb-0dd9-4935-b7b6-da13d52fb91c\") " pod="openstack/dnsmasq-dns-698758b865-2c6pd" Feb 16 23:03:11 crc kubenswrapper[4865]: I0216 23:03:11.824390 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/354e44fb-0dd9-4935-b7b6-da13d52fb91c-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-2c6pd\" (UID: \"354e44fb-0dd9-4935-b7b6-da13d52fb91c\") " pod="openstack/dnsmasq-dns-698758b865-2c6pd" Feb 16 23:03:11 crc kubenswrapper[4865]: I0216 23:03:11.824869 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/354e44fb-0dd9-4935-b7b6-da13d52fb91c-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-2c6pd\" (UID: \"354e44fb-0dd9-4935-b7b6-da13d52fb91c\") " pod="openstack/dnsmasq-dns-698758b865-2c6pd" Feb 16 23:03:11 crc kubenswrapper[4865]: I0216 23:03:11.864181 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqgcq\" (UniqueName: 
\"kubernetes.io/projected/354e44fb-0dd9-4935-b7b6-da13d52fb91c-kube-api-access-rqgcq\") pod \"dnsmasq-dns-698758b865-2c6pd\" (UID: \"354e44fb-0dd9-4935-b7b6-da13d52fb91c\") " pod="openstack/dnsmasq-dns-698758b865-2c6pd" Feb 16 23:03:12 crc kubenswrapper[4865]: I0216 23:03:12.118583 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-2c6pd" Feb 16 23:03:12 crc kubenswrapper[4865]: I0216 23:03:12.448393 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-f9dhc"] Feb 16 23:03:12 crc kubenswrapper[4865]: I0216 23:03:12.455200 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-f9dhc"] Feb 16 23:03:12 crc kubenswrapper[4865]: I0216 23:03:12.536713 4865 generic.go:334] "Generic (PLEG): container finished" podID="b3421511-356b-4a61-ac23-fa0915c8a6df" containerID="4cae800704f62e9c08d6da536231ab752df90fa63f0ca29785323930658b5fa6" exitCode=0 Feb 16 23:03:12 crc kubenswrapper[4865]: I0216 23:03:12.537140 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-mzw96" event={"ID":"b3421511-356b-4a61-ac23-fa0915c8a6df","Type":"ContainerDied","Data":"4cae800704f62e9c08d6da536231ab752df90fa63f0ca29785323930658b5fa6"} Feb 16 23:03:12 crc kubenswrapper[4865]: I0216 23:03:12.538648 4865 generic.go:334] "Generic (PLEG): container finished" podID="6c7cbf3b-ff30-4ffa-a6ce-1df0143bdade" containerID="d984368f85c6b1a069862f46133da4addc17253548488e07fb5f6f11a31d3021" exitCode=0 Feb 16 23:03:12 crc kubenswrapper[4865]: I0216 23:03:12.538679 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-2287x" event={"ID":"6c7cbf3b-ff30-4ffa-a6ce-1df0143bdade","Type":"ContainerDied","Data":"d984368f85c6b1a069862f46133da4addc17253548488e07fb5f6f11a31d3021"} Feb 16 23:03:12 crc kubenswrapper[4865]: I0216 23:03:12.540843 4865 generic.go:334] "Generic (PLEG): container finished" 
podID="05a77c1b-59aa-4d46-8bb1-2bd3027ea3e9" containerID="b7e42c485150ac43fbed280d237151298f364036252a06e541f4198b79fdf37f" exitCode=0 Feb 16 23:03:12 crc kubenswrapper[4865]: I0216 23:03:12.540898 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-82d6-account-create-update-2499b" event={"ID":"05a77c1b-59aa-4d46-8bb1-2bd3027ea3e9","Type":"ContainerDied","Data":"b7e42c485150ac43fbed280d237151298f364036252a06e541f4198b79fdf37f"} Feb 16 23:03:12 crc kubenswrapper[4865]: I0216 23:03:12.543495 4865 generic.go:334] "Generic (PLEG): container finished" podID="49da7ade-9fd4-4cb1-a47a-4a07a038a7e8" containerID="d1795669094c5dddb6f15dff9d5695e6bdf3dc959b6848bde525cbd6406b7037" exitCode=0 Feb 16 23:03:12 crc kubenswrapper[4865]: I0216 23:03:12.543725 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-w2npd" podUID="8980296a-e6bd-4a93-a4e1-658639922b93" containerName="dnsmasq-dns" containerID="cri-o://b07e7b4aa2fe26ad87bd5ed0910221547a23d19b6dcd8c28783b11807da8a275" gracePeriod=10 Feb 16 23:03:12 crc kubenswrapper[4865]: I0216 23:03:12.544067 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-93e7-account-create-update-2v9fc" event={"ID":"49da7ade-9fd4-4cb1-a47a-4a07a038a7e8","Type":"ContainerDied","Data":"d1795669094c5dddb6f15dff9d5695e6bdf3dc959b6848bde525cbd6406b7037"} Feb 16 23:03:12 crc kubenswrapper[4865]: I0216 23:03:12.637380 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-2c6pd"] Feb 16 23:03:12 crc kubenswrapper[4865]: W0216 23:03:12.680458 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod354e44fb_0dd9_4935_b7b6_da13d52fb91c.slice/crio-6f58a2ff101d0d85b8223d4268a045568b756837809f7708e3c1d1a4f9ff662d WatchSource:0}: Error finding container 6f58a2ff101d0d85b8223d4268a045568b756837809f7708e3c1d1a4f9ff662d: Status 404 
returned error can't find the container with id 6f58a2ff101d0d85b8223d4268a045568b756837809f7708e3c1d1a4f9ff662d Feb 16 23:03:12 crc kubenswrapper[4865]: I0216 23:03:12.758033 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 16 23:03:12 crc kubenswrapper[4865]: I0216 23:03:12.765279 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 16 23:03:12 crc kubenswrapper[4865]: I0216 23:03:12.768337 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 16 23:03:12 crc kubenswrapper[4865]: I0216 23:03:12.769895 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 16 23:03:12 crc kubenswrapper[4865]: I0216 23:03:12.770777 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 16 23:03:12 crc kubenswrapper[4865]: I0216 23:03:12.770963 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-6zk9p" Feb 16 23:03:12 crc kubenswrapper[4865]: I0216 23:03:12.771615 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 16 23:03:12 crc kubenswrapper[4865]: I0216 23:03:12.941114 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/34486574-e35d-4674-a0b3-57d122050e66-etc-swift\") pod \"swift-storage-0\" (UID: \"34486574-e35d-4674-a0b3-57d122050e66\") " pod="openstack/swift-storage-0" Feb 16 23:03:12 crc kubenswrapper[4865]: I0216 23:03:12.941173 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4p5c\" (UniqueName: \"kubernetes.io/projected/34486574-e35d-4674-a0b3-57d122050e66-kube-api-access-m4p5c\") pod \"swift-storage-0\" (UID: \"34486574-e35d-4674-a0b3-57d122050e66\") " 
pod="openstack/swift-storage-0" Feb 16 23:03:12 crc kubenswrapper[4865]: I0216 23:03:12.941238 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/34486574-e35d-4674-a0b3-57d122050e66-lock\") pod \"swift-storage-0\" (UID: \"34486574-e35d-4674-a0b3-57d122050e66\") " pod="openstack/swift-storage-0" Feb 16 23:03:12 crc kubenswrapper[4865]: I0216 23:03:12.941302 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"34486574-e35d-4674-a0b3-57d122050e66\") " pod="openstack/swift-storage-0" Feb 16 23:03:12 crc kubenswrapper[4865]: I0216 23:03:12.941336 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34486574-e35d-4674-a0b3-57d122050e66-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"34486574-e35d-4674-a0b3-57d122050e66\") " pod="openstack/swift-storage-0" Feb 16 23:03:12 crc kubenswrapper[4865]: I0216 23:03:12.941406 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/34486574-e35d-4674-a0b3-57d122050e66-cache\") pod \"swift-storage-0\" (UID: \"34486574-e35d-4674-a0b3-57d122050e66\") " pod="openstack/swift-storage-0" Feb 16 23:03:12 crc kubenswrapper[4865]: I0216 23:03:12.963248 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-mzw96" Feb 16 23:03:12 crc kubenswrapper[4865]: I0216 23:03:12.968946 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-w2npd" Feb 16 23:03:13 crc kubenswrapper[4865]: I0216 23:03:13.043363 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/34486574-e35d-4674-a0b3-57d122050e66-cache\") pod \"swift-storage-0\" (UID: \"34486574-e35d-4674-a0b3-57d122050e66\") " pod="openstack/swift-storage-0" Feb 16 23:03:13 crc kubenswrapper[4865]: I0216 23:03:13.043462 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/34486574-e35d-4674-a0b3-57d122050e66-etc-swift\") pod \"swift-storage-0\" (UID: \"34486574-e35d-4674-a0b3-57d122050e66\") " pod="openstack/swift-storage-0" Feb 16 23:03:13 crc kubenswrapper[4865]: I0216 23:03:13.043483 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4p5c\" (UniqueName: \"kubernetes.io/projected/34486574-e35d-4674-a0b3-57d122050e66-kube-api-access-m4p5c\") pod \"swift-storage-0\" (UID: \"34486574-e35d-4674-a0b3-57d122050e66\") " pod="openstack/swift-storage-0" Feb 16 23:03:13 crc kubenswrapper[4865]: I0216 23:03:13.043525 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/34486574-e35d-4674-a0b3-57d122050e66-lock\") pod \"swift-storage-0\" (UID: \"34486574-e35d-4674-a0b3-57d122050e66\") " pod="openstack/swift-storage-0" Feb 16 23:03:13 crc kubenswrapper[4865]: I0216 23:03:13.043556 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"34486574-e35d-4674-a0b3-57d122050e66\") " pod="openstack/swift-storage-0" Feb 16 23:03:13 crc kubenswrapper[4865]: I0216 23:03:13.043574 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/34486574-e35d-4674-a0b3-57d122050e66-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"34486574-e35d-4674-a0b3-57d122050e66\") " pod="openstack/swift-storage-0" Feb 16 23:03:13 crc kubenswrapper[4865]: E0216 23:03:13.044629 4865 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 16 23:03:13 crc kubenswrapper[4865]: E0216 23:03:13.044666 4865 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 16 23:03:13 crc kubenswrapper[4865]: I0216 23:03:13.044697 4865 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"34486574-e35d-4674-a0b3-57d122050e66\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/swift-storage-0" Feb 16 23:03:13 crc kubenswrapper[4865]: E0216 23:03:13.044732 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34486574-e35d-4674-a0b3-57d122050e66-etc-swift podName:34486574-e35d-4674-a0b3-57d122050e66 nodeName:}" failed. No retries permitted until 2026-02-16 23:03:13.544710383 +0000 UTC m=+1033.868417344 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/34486574-e35d-4674-a0b3-57d122050e66-etc-swift") pod "swift-storage-0" (UID: "34486574-e35d-4674-a0b3-57d122050e66") : configmap "swift-ring-files" not found Feb 16 23:03:13 crc kubenswrapper[4865]: I0216 23:03:13.045441 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/34486574-e35d-4674-a0b3-57d122050e66-lock\") pod \"swift-storage-0\" (UID: \"34486574-e35d-4674-a0b3-57d122050e66\") " pod="openstack/swift-storage-0" Feb 16 23:03:13 crc kubenswrapper[4865]: I0216 23:03:13.045638 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/34486574-e35d-4674-a0b3-57d122050e66-cache\") pod \"swift-storage-0\" (UID: \"34486574-e35d-4674-a0b3-57d122050e66\") " pod="openstack/swift-storage-0" Feb 16 23:03:13 crc kubenswrapper[4865]: I0216 23:03:13.054019 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34486574-e35d-4674-a0b3-57d122050e66-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"34486574-e35d-4674-a0b3-57d122050e66\") " pod="openstack/swift-storage-0" Feb 16 23:03:13 crc kubenswrapper[4865]: I0216 23:03:13.061163 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4p5c\" (UniqueName: \"kubernetes.io/projected/34486574-e35d-4674-a0b3-57d122050e66-kube-api-access-m4p5c\") pod \"swift-storage-0\" (UID: \"34486574-e35d-4674-a0b3-57d122050e66\") " pod="openstack/swift-storage-0" Feb 16 23:03:13 crc kubenswrapper[4865]: I0216 23:03:13.077386 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"34486574-e35d-4674-a0b3-57d122050e66\") " pod="openstack/swift-storage-0" Feb 16 23:03:13 
crc kubenswrapper[4865]: I0216 23:03:13.145266 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8980296a-e6bd-4a93-a4e1-658639922b93-config\") pod \"8980296a-e6bd-4a93-a4e1-658639922b93\" (UID: \"8980296a-e6bd-4a93-a4e1-658639922b93\") " Feb 16 23:03:13 crc kubenswrapper[4865]: I0216 23:03:13.145364 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghgqd\" (UniqueName: \"kubernetes.io/projected/8980296a-e6bd-4a93-a4e1-658639922b93-kube-api-access-ghgqd\") pod \"8980296a-e6bd-4a93-a4e1-658639922b93\" (UID: \"8980296a-e6bd-4a93-a4e1-658639922b93\") " Feb 16 23:03:13 crc kubenswrapper[4865]: I0216 23:03:13.145404 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9978\" (UniqueName: \"kubernetes.io/projected/b3421511-356b-4a61-ac23-fa0915c8a6df-kube-api-access-t9978\") pod \"b3421511-356b-4a61-ac23-fa0915c8a6df\" (UID: \"b3421511-356b-4a61-ac23-fa0915c8a6df\") " Feb 16 23:03:13 crc kubenswrapper[4865]: I0216 23:03:13.145463 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8980296a-e6bd-4a93-a4e1-658639922b93-ovsdbserver-nb\") pod \"8980296a-e6bd-4a93-a4e1-658639922b93\" (UID: \"8980296a-e6bd-4a93-a4e1-658639922b93\") " Feb 16 23:03:13 crc kubenswrapper[4865]: I0216 23:03:13.145500 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8980296a-e6bd-4a93-a4e1-658639922b93-ovsdbserver-sb\") pod \"8980296a-e6bd-4a93-a4e1-658639922b93\" (UID: \"8980296a-e6bd-4a93-a4e1-658639922b93\") " Feb 16 23:03:13 crc kubenswrapper[4865]: I0216 23:03:13.145547 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/8980296a-e6bd-4a93-a4e1-658639922b93-dns-svc\") pod \"8980296a-e6bd-4a93-a4e1-658639922b93\" (UID: \"8980296a-e6bd-4a93-a4e1-658639922b93\") " Feb 16 23:03:13 crc kubenswrapper[4865]: I0216 23:03:13.145587 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3421511-356b-4a61-ac23-fa0915c8a6df-operator-scripts\") pod \"b3421511-356b-4a61-ac23-fa0915c8a6df\" (UID: \"b3421511-356b-4a61-ac23-fa0915c8a6df\") " Feb 16 23:03:13 crc kubenswrapper[4865]: I0216 23:03:13.147101 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3421511-356b-4a61-ac23-fa0915c8a6df-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b3421511-356b-4a61-ac23-fa0915c8a6df" (UID: "b3421511-356b-4a61-ac23-fa0915c8a6df"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:03:13 crc kubenswrapper[4865]: I0216 23:03:13.150614 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3421511-356b-4a61-ac23-fa0915c8a6df-kube-api-access-t9978" (OuterVolumeSpecName: "kube-api-access-t9978") pod "b3421511-356b-4a61-ac23-fa0915c8a6df" (UID: "b3421511-356b-4a61-ac23-fa0915c8a6df"). InnerVolumeSpecName "kube-api-access-t9978". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:03:13 crc kubenswrapper[4865]: I0216 23:03:13.151358 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8980296a-e6bd-4a93-a4e1-658639922b93-kube-api-access-ghgqd" (OuterVolumeSpecName: "kube-api-access-ghgqd") pod "8980296a-e6bd-4a93-a4e1-658639922b93" (UID: "8980296a-e6bd-4a93-a4e1-658639922b93"). InnerVolumeSpecName "kube-api-access-ghgqd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:03:13 crc kubenswrapper[4865]: I0216 23:03:13.190300 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8980296a-e6bd-4a93-a4e1-658639922b93-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8980296a-e6bd-4a93-a4e1-658639922b93" (UID: "8980296a-e6bd-4a93-a4e1-658639922b93"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:03:13 crc kubenswrapper[4865]: I0216 23:03:13.192252 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8980296a-e6bd-4a93-a4e1-658639922b93-config" (OuterVolumeSpecName: "config") pod "8980296a-e6bd-4a93-a4e1-658639922b93" (UID: "8980296a-e6bd-4a93-a4e1-658639922b93"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:03:13 crc kubenswrapper[4865]: I0216 23:03:13.202092 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8980296a-e6bd-4a93-a4e1-658639922b93-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8980296a-e6bd-4a93-a4e1-658639922b93" (UID: "8980296a-e6bd-4a93-a4e1-658639922b93"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:03:13 crc kubenswrapper[4865]: I0216 23:03:13.230915 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8980296a-e6bd-4a93-a4e1-658639922b93-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8980296a-e6bd-4a93-a4e1-658639922b93" (UID: "8980296a-e6bd-4a93-a4e1-658639922b93"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:03:13 crc kubenswrapper[4865]: I0216 23:03:13.248874 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8980296a-e6bd-4a93-a4e1-658639922b93-config\") on node \"crc\" DevicePath \"\"" Feb 16 23:03:13 crc kubenswrapper[4865]: I0216 23:03:13.248914 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ghgqd\" (UniqueName: \"kubernetes.io/projected/8980296a-e6bd-4a93-a4e1-658639922b93-kube-api-access-ghgqd\") on node \"crc\" DevicePath \"\"" Feb 16 23:03:13 crc kubenswrapper[4865]: I0216 23:03:13.248927 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9978\" (UniqueName: \"kubernetes.io/projected/b3421511-356b-4a61-ac23-fa0915c8a6df-kube-api-access-t9978\") on node \"crc\" DevicePath \"\"" Feb 16 23:03:13 crc kubenswrapper[4865]: I0216 23:03:13.248937 4865 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8980296a-e6bd-4a93-a4e1-658639922b93-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 23:03:13 crc kubenswrapper[4865]: I0216 23:03:13.248946 4865 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8980296a-e6bd-4a93-a4e1-658639922b93-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 23:03:13 crc kubenswrapper[4865]: I0216 23:03:13.248955 4865 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8980296a-e6bd-4a93-a4e1-658639922b93-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 23:03:13 crc kubenswrapper[4865]: I0216 23:03:13.248964 4865 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3421511-356b-4a61-ac23-fa0915c8a6df-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 23:03:13 crc kubenswrapper[4865]: I0216 23:03:13.562120 
4865 generic.go:334] "Generic (PLEG): container finished" podID="8980296a-e6bd-4a93-a4e1-658639922b93" containerID="b07e7b4aa2fe26ad87bd5ed0910221547a23d19b6dcd8c28783b11807da8a275" exitCode=0 Feb 16 23:03:13 crc kubenswrapper[4865]: I0216 23:03:13.562227 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-w2npd" event={"ID":"8980296a-e6bd-4a93-a4e1-658639922b93","Type":"ContainerDied","Data":"b07e7b4aa2fe26ad87bd5ed0910221547a23d19b6dcd8c28783b11807da8a275"} Feb 16 23:03:13 crc kubenswrapper[4865]: I0216 23:03:13.562335 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-w2npd" Feb 16 23:03:13 crc kubenswrapper[4865]: I0216 23:03:13.562425 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-w2npd" event={"ID":"8980296a-e6bd-4a93-a4e1-658639922b93","Type":"ContainerDied","Data":"27e98ddb7eed3ab8c858d1453f7530c404e4d126246da01bf4dff9cb2d4b0564"} Feb 16 23:03:13 crc kubenswrapper[4865]: I0216 23:03:13.562479 4865 scope.go:117] "RemoveContainer" containerID="b07e7b4aa2fe26ad87bd5ed0910221547a23d19b6dcd8c28783b11807da8a275" Feb 16 23:03:13 crc kubenswrapper[4865]: I0216 23:03:13.569701 4865 generic.go:334] "Generic (PLEG): container finished" podID="354e44fb-0dd9-4935-b7b6-da13d52fb91c" containerID="4b41d04745647e56c67ca04a28da2cb9f11bdd2ffd14976f34616991d1668c63" exitCode=0 Feb 16 23:03:13 crc kubenswrapper[4865]: I0216 23:03:13.569787 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-2c6pd" event={"ID":"354e44fb-0dd9-4935-b7b6-da13d52fb91c","Type":"ContainerDied","Data":"4b41d04745647e56c67ca04a28da2cb9f11bdd2ffd14976f34616991d1668c63"} Feb 16 23:03:13 crc kubenswrapper[4865]: I0216 23:03:13.569826 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-2c6pd" 
event={"ID":"354e44fb-0dd9-4935-b7b6-da13d52fb91c","Type":"ContainerStarted","Data":"6f58a2ff101d0d85b8223d4268a045568b756837809f7708e3c1d1a4f9ff662d"} Feb 16 23:03:13 crc kubenswrapper[4865]: I0216 23:03:13.574349 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/34486574-e35d-4674-a0b3-57d122050e66-etc-swift\") pod \"swift-storage-0\" (UID: \"34486574-e35d-4674-a0b3-57d122050e66\") " pod="openstack/swift-storage-0" Feb 16 23:03:13 crc kubenswrapper[4865]: E0216 23:03:13.575229 4865 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 16 23:03:13 crc kubenswrapper[4865]: E0216 23:03:13.575255 4865 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 16 23:03:13 crc kubenswrapper[4865]: E0216 23:03:13.575386 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34486574-e35d-4674-a0b3-57d122050e66-etc-swift podName:34486574-e35d-4674-a0b3-57d122050e66 nodeName:}" failed. No retries permitted until 2026-02-16 23:03:14.575302832 +0000 UTC m=+1034.899009893 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/34486574-e35d-4674-a0b3-57d122050e66-etc-swift") pod "swift-storage-0" (UID: "34486574-e35d-4674-a0b3-57d122050e66") : configmap "swift-ring-files" not found Feb 16 23:03:13 crc kubenswrapper[4865]: I0216 23:03:13.576850 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-mzw96" Feb 16 23:03:13 crc kubenswrapper[4865]: I0216 23:03:13.576895 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-mzw96" event={"ID":"b3421511-356b-4a61-ac23-fa0915c8a6df","Type":"ContainerDied","Data":"7ee16d18a69f64c5b93e72d8fde7ecc7db14bbe51767672f95336c7f38761ccb"} Feb 16 23:03:13 crc kubenswrapper[4865]: I0216 23:03:13.576927 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ee16d18a69f64c5b93e72d8fde7ecc7db14bbe51767672f95336c7f38761ccb" Feb 16 23:03:13 crc kubenswrapper[4865]: I0216 23:03:13.595117 4865 scope.go:117] "RemoveContainer" containerID="7fcb5c53469d9721661ffc038efccf6b527739b2450f6e86b787d1f820837c6e" Feb 16 23:03:13 crc kubenswrapper[4865]: I0216 23:03:13.616975 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-w2npd"] Feb 16 23:03:13 crc kubenswrapper[4865]: I0216 23:03:13.628008 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-w2npd"] Feb 16 23:03:13 crc kubenswrapper[4865]: I0216 23:03:13.644352 4865 scope.go:117] "RemoveContainer" containerID="b07e7b4aa2fe26ad87bd5ed0910221547a23d19b6dcd8c28783b11807da8a275" Feb 16 23:03:13 crc kubenswrapper[4865]: E0216 23:03:13.644912 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b07e7b4aa2fe26ad87bd5ed0910221547a23d19b6dcd8c28783b11807da8a275\": container with ID starting with b07e7b4aa2fe26ad87bd5ed0910221547a23d19b6dcd8c28783b11807da8a275 not found: ID does not exist" containerID="b07e7b4aa2fe26ad87bd5ed0910221547a23d19b6dcd8c28783b11807da8a275" Feb 16 23:03:13 crc kubenswrapper[4865]: I0216 23:03:13.644959 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b07e7b4aa2fe26ad87bd5ed0910221547a23d19b6dcd8c28783b11807da8a275"} err="failed to get 
container status \"b07e7b4aa2fe26ad87bd5ed0910221547a23d19b6dcd8c28783b11807da8a275\": rpc error: code = NotFound desc = could not find container \"b07e7b4aa2fe26ad87bd5ed0910221547a23d19b6dcd8c28783b11807da8a275\": container with ID starting with b07e7b4aa2fe26ad87bd5ed0910221547a23d19b6dcd8c28783b11807da8a275 not found: ID does not exist" Feb 16 23:03:13 crc kubenswrapper[4865]: I0216 23:03:13.644980 4865 scope.go:117] "RemoveContainer" containerID="7fcb5c53469d9721661ffc038efccf6b527739b2450f6e86b787d1f820837c6e" Feb 16 23:03:13 crc kubenswrapper[4865]: E0216 23:03:13.645382 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7fcb5c53469d9721661ffc038efccf6b527739b2450f6e86b787d1f820837c6e\": container with ID starting with 7fcb5c53469d9721661ffc038efccf6b527739b2450f6e86b787d1f820837c6e not found: ID does not exist" containerID="7fcb5c53469d9721661ffc038efccf6b527739b2450f6e86b787d1f820837c6e" Feb 16 23:03:13 crc kubenswrapper[4865]: I0216 23:03:13.645408 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fcb5c53469d9721661ffc038efccf6b527739b2450f6e86b787d1f820837c6e"} err="failed to get container status \"7fcb5c53469d9721661ffc038efccf6b527739b2450f6e86b787d1f820837c6e\": rpc error: code = NotFound desc = could not find container \"7fcb5c53469d9721661ffc038efccf6b527739b2450f6e86b787d1f820837c6e\": container with ID starting with 7fcb5c53469d9721661ffc038efccf6b527739b2450f6e86b787d1f820837c6e not found: ID does not exist" Feb 16 23:03:14 crc kubenswrapper[4865]: I0216 23:03:14.083653 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-2287x" Feb 16 23:03:14 crc kubenswrapper[4865]: I0216 23:03:14.136722 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-93e7-account-create-update-2v9fc" Feb 16 23:03:14 crc kubenswrapper[4865]: I0216 23:03:14.145987 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-82d6-account-create-update-2499b" Feb 16 23:03:14 crc kubenswrapper[4865]: I0216 23:03:14.186792 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c7cbf3b-ff30-4ffa-a6ce-1df0143bdade-operator-scripts\") pod \"6c7cbf3b-ff30-4ffa-a6ce-1df0143bdade\" (UID: \"6c7cbf3b-ff30-4ffa-a6ce-1df0143bdade\") " Feb 16 23:03:14 crc kubenswrapper[4865]: I0216 23:03:14.186934 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7699f\" (UniqueName: \"kubernetes.io/projected/6c7cbf3b-ff30-4ffa-a6ce-1df0143bdade-kube-api-access-7699f\") pod \"6c7cbf3b-ff30-4ffa-a6ce-1df0143bdade\" (UID: \"6c7cbf3b-ff30-4ffa-a6ce-1df0143bdade\") " Feb 16 23:03:14 crc kubenswrapper[4865]: I0216 23:03:14.188466 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c7cbf3b-ff30-4ffa-a6ce-1df0143bdade-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6c7cbf3b-ff30-4ffa-a6ce-1df0143bdade" (UID: "6c7cbf3b-ff30-4ffa-a6ce-1df0143bdade"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:03:14 crc kubenswrapper[4865]: I0216 23:03:14.193134 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c7cbf3b-ff30-4ffa-a6ce-1df0143bdade-kube-api-access-7699f" (OuterVolumeSpecName: "kube-api-access-7699f") pod "6c7cbf3b-ff30-4ffa-a6ce-1df0143bdade" (UID: "6c7cbf3b-ff30-4ffa-a6ce-1df0143bdade"). InnerVolumeSpecName "kube-api-access-7699f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:03:14 crc kubenswrapper[4865]: I0216 23:03:14.288583 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/05a77c1b-59aa-4d46-8bb1-2bd3027ea3e9-operator-scripts\") pod \"05a77c1b-59aa-4d46-8bb1-2bd3027ea3e9\" (UID: \"05a77c1b-59aa-4d46-8bb1-2bd3027ea3e9\") " Feb 16 23:03:14 crc kubenswrapper[4865]: I0216 23:03:14.288646 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49da7ade-9fd4-4cb1-a47a-4a07a038a7e8-operator-scripts\") pod \"49da7ade-9fd4-4cb1-a47a-4a07a038a7e8\" (UID: \"49da7ade-9fd4-4cb1-a47a-4a07a038a7e8\") " Feb 16 23:03:14 crc kubenswrapper[4865]: I0216 23:03:14.288744 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7dwr\" (UniqueName: \"kubernetes.io/projected/49da7ade-9fd4-4cb1-a47a-4a07a038a7e8-kube-api-access-z7dwr\") pod \"49da7ade-9fd4-4cb1-a47a-4a07a038a7e8\" (UID: \"49da7ade-9fd4-4cb1-a47a-4a07a038a7e8\") " Feb 16 23:03:14 crc kubenswrapper[4865]: I0216 23:03:14.288785 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjj6m\" (UniqueName: \"kubernetes.io/projected/05a77c1b-59aa-4d46-8bb1-2bd3027ea3e9-kube-api-access-cjj6m\") pod \"05a77c1b-59aa-4d46-8bb1-2bd3027ea3e9\" (UID: \"05a77c1b-59aa-4d46-8bb1-2bd3027ea3e9\") " Feb 16 23:03:14 crc kubenswrapper[4865]: I0216 23:03:14.289212 4865 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c7cbf3b-ff30-4ffa-a6ce-1df0143bdade-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 23:03:14 crc kubenswrapper[4865]: I0216 23:03:14.289230 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7699f\" (UniqueName: 
\"kubernetes.io/projected/6c7cbf3b-ff30-4ffa-a6ce-1df0143bdade-kube-api-access-7699f\") on node \"crc\" DevicePath \"\"" Feb 16 23:03:14 crc kubenswrapper[4865]: I0216 23:03:14.289332 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05a77c1b-59aa-4d46-8bb1-2bd3027ea3e9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "05a77c1b-59aa-4d46-8bb1-2bd3027ea3e9" (UID: "05a77c1b-59aa-4d46-8bb1-2bd3027ea3e9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:03:14 crc kubenswrapper[4865]: I0216 23:03:14.289438 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49da7ade-9fd4-4cb1-a47a-4a07a038a7e8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "49da7ade-9fd4-4cb1-a47a-4a07a038a7e8" (UID: "49da7ade-9fd4-4cb1-a47a-4a07a038a7e8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:03:14 crc kubenswrapper[4865]: I0216 23:03:14.292800 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05a77c1b-59aa-4d46-8bb1-2bd3027ea3e9-kube-api-access-cjj6m" (OuterVolumeSpecName: "kube-api-access-cjj6m") pod "05a77c1b-59aa-4d46-8bb1-2bd3027ea3e9" (UID: "05a77c1b-59aa-4d46-8bb1-2bd3027ea3e9"). InnerVolumeSpecName "kube-api-access-cjj6m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:03:14 crc kubenswrapper[4865]: I0216 23:03:14.293233 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49da7ade-9fd4-4cb1-a47a-4a07a038a7e8-kube-api-access-z7dwr" (OuterVolumeSpecName: "kube-api-access-z7dwr") pod "49da7ade-9fd4-4cb1-a47a-4a07a038a7e8" (UID: "49da7ade-9fd4-4cb1-a47a-4a07a038a7e8"). InnerVolumeSpecName "kube-api-access-z7dwr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:03:14 crc kubenswrapper[4865]: I0216 23:03:14.390466 4865 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/05a77c1b-59aa-4d46-8bb1-2bd3027ea3e9-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 23:03:14 crc kubenswrapper[4865]: I0216 23:03:14.390497 4865 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49da7ade-9fd4-4cb1-a47a-4a07a038a7e8-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 23:03:14 crc kubenswrapper[4865]: I0216 23:03:14.390506 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7dwr\" (UniqueName: \"kubernetes.io/projected/49da7ade-9fd4-4cb1-a47a-4a07a038a7e8-kube-api-access-z7dwr\") on node \"crc\" DevicePath \"\"" Feb 16 23:03:14 crc kubenswrapper[4865]: I0216 23:03:14.390517 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjj6m\" (UniqueName: \"kubernetes.io/projected/05a77c1b-59aa-4d46-8bb1-2bd3027ea3e9-kube-api-access-cjj6m\") on node \"crc\" DevicePath \"\"" Feb 16 23:03:14 crc kubenswrapper[4865]: I0216 23:03:14.436855 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="314370d0-0193-485a-8bed-dd37e1535b65" path="/var/lib/kubelet/pods/314370d0-0193-485a-8bed-dd37e1535b65/volumes" Feb 16 23:03:14 crc kubenswrapper[4865]: I0216 23:03:14.437623 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8980296a-e6bd-4a93-a4e1-658639922b93" path="/var/lib/kubelet/pods/8980296a-e6bd-4a93-a4e1-658639922b93/volumes" Feb 16 23:03:14 crc kubenswrapper[4865]: I0216 23:03:14.589215 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-82d6-account-create-update-2499b" Feb 16 23:03:14 crc kubenswrapper[4865]: I0216 23:03:14.589219 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-82d6-account-create-update-2499b" event={"ID":"05a77c1b-59aa-4d46-8bb1-2bd3027ea3e9","Type":"ContainerDied","Data":"ee6451d2148154f2b3b651e7ecfb754abcbdacdac32ac32b2acf264d74bb3b9b"} Feb 16 23:03:14 crc kubenswrapper[4865]: I0216 23:03:14.589393 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee6451d2148154f2b3b651e7ecfb754abcbdacdac32ac32b2acf264d74bb3b9b" Feb 16 23:03:14 crc kubenswrapper[4865]: I0216 23:03:14.593955 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/34486574-e35d-4674-a0b3-57d122050e66-etc-swift\") pod \"swift-storage-0\" (UID: \"34486574-e35d-4674-a0b3-57d122050e66\") " pod="openstack/swift-storage-0" Feb 16 23:03:14 crc kubenswrapper[4865]: I0216 23:03:14.594730 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-93e7-account-create-update-2v9fc" event={"ID":"49da7ade-9fd4-4cb1-a47a-4a07a038a7e8","Type":"ContainerDied","Data":"ff9fff80eb7ff88e3db6bf26092d9fb06a9e551a2714a5f1e3fd2f61ba9aef41"} Feb 16 23:03:14 crc kubenswrapper[4865]: I0216 23:03:14.595086 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff9fff80eb7ff88e3db6bf26092d9fb06a9e551a2714a5f1e3fd2f61ba9aef41" Feb 16 23:03:14 crc kubenswrapper[4865]: I0216 23:03:14.595155 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-93e7-account-create-update-2v9fc" Feb 16 23:03:14 crc kubenswrapper[4865]: E0216 23:03:14.594179 4865 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 16 23:03:14 crc kubenswrapper[4865]: E0216 23:03:14.595423 4865 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 16 23:03:14 crc kubenswrapper[4865]: E0216 23:03:14.595656 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34486574-e35d-4674-a0b3-57d122050e66-etc-swift podName:34486574-e35d-4674-a0b3-57d122050e66 nodeName:}" failed. No retries permitted until 2026-02-16 23:03:16.595627369 +0000 UTC m=+1036.919334330 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/34486574-e35d-4674-a0b3-57d122050e66-etc-swift") pod "swift-storage-0" (UID: "34486574-e35d-4674-a0b3-57d122050e66") : configmap "swift-ring-files" not found Feb 16 23:03:14 crc kubenswrapper[4865]: I0216 23:03:14.601557 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-2287x" event={"ID":"6c7cbf3b-ff30-4ffa-a6ce-1df0143bdade","Type":"ContainerDied","Data":"57a53803ea92021278727c0aaf00247e4413c216c200fff4c233ee37a90f47e7"} Feb 16 23:03:14 crc kubenswrapper[4865]: I0216 23:03:14.601611 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57a53803ea92021278727c0aaf00247e4413c216c200fff4c233ee37a90f47e7" Feb 16 23:03:14 crc kubenswrapper[4865]: I0216 23:03:14.601630 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-2287x" Feb 16 23:03:15 crc kubenswrapper[4865]: I0216 23:03:15.613046 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-2c6pd" event={"ID":"354e44fb-0dd9-4935-b7b6-da13d52fb91c","Type":"ContainerStarted","Data":"666baa91ced7aab1f1c04a9b58f8a20da2d8fc4a4f0ee80ff47a3c1c55c016ff"} Feb 16 23:03:15 crc kubenswrapper[4865]: I0216 23:03:15.613659 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-2c6pd" Feb 16 23:03:15 crc kubenswrapper[4865]: I0216 23:03:15.637939 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-2c6pd" podStartSLOduration=4.6379191859999995 podStartE2EDuration="4.637919186s" podCreationTimestamp="2026-02-16 23:03:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 23:03:15.632240236 +0000 UTC m=+1035.955947197" watchObservedRunningTime="2026-02-16 23:03:15.637919186 +0000 UTC m=+1035.961626147" Feb 16 23:03:16 crc kubenswrapper[4865]: I0216 23:03:16.591271 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-gq4bx"] Feb 16 23:03:16 crc kubenswrapper[4865]: E0216 23:03:16.594042 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05a77c1b-59aa-4d46-8bb1-2bd3027ea3e9" containerName="mariadb-account-create-update" Feb 16 23:03:16 crc kubenswrapper[4865]: I0216 23:03:16.594058 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="05a77c1b-59aa-4d46-8bb1-2bd3027ea3e9" containerName="mariadb-account-create-update" Feb 16 23:03:16 crc kubenswrapper[4865]: E0216 23:03:16.594075 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c7cbf3b-ff30-4ffa-a6ce-1df0143bdade" containerName="mariadb-database-create" Feb 16 23:03:16 crc kubenswrapper[4865]: I0216 23:03:16.594082 
4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c7cbf3b-ff30-4ffa-a6ce-1df0143bdade" containerName="mariadb-database-create" Feb 16 23:03:16 crc kubenswrapper[4865]: E0216 23:03:16.594100 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8980296a-e6bd-4a93-a4e1-658639922b93" containerName="init" Feb 16 23:03:16 crc kubenswrapper[4865]: I0216 23:03:16.594106 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="8980296a-e6bd-4a93-a4e1-658639922b93" containerName="init" Feb 16 23:03:16 crc kubenswrapper[4865]: E0216 23:03:16.594118 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3421511-356b-4a61-ac23-fa0915c8a6df" containerName="mariadb-database-create" Feb 16 23:03:16 crc kubenswrapper[4865]: I0216 23:03:16.594123 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3421511-356b-4a61-ac23-fa0915c8a6df" containerName="mariadb-database-create" Feb 16 23:03:16 crc kubenswrapper[4865]: E0216 23:03:16.594133 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8980296a-e6bd-4a93-a4e1-658639922b93" containerName="dnsmasq-dns" Feb 16 23:03:16 crc kubenswrapper[4865]: I0216 23:03:16.594140 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="8980296a-e6bd-4a93-a4e1-658639922b93" containerName="dnsmasq-dns" Feb 16 23:03:16 crc kubenswrapper[4865]: E0216 23:03:16.594154 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49da7ade-9fd4-4cb1-a47a-4a07a038a7e8" containerName="mariadb-account-create-update" Feb 16 23:03:16 crc kubenswrapper[4865]: I0216 23:03:16.594160 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="49da7ade-9fd4-4cb1-a47a-4a07a038a7e8" containerName="mariadb-account-create-update" Feb 16 23:03:16 crc kubenswrapper[4865]: I0216 23:03:16.594330 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3421511-356b-4a61-ac23-fa0915c8a6df" containerName="mariadb-database-create" Feb 16 23:03:16 crc kubenswrapper[4865]: I0216 
23:03:16.594342 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c7cbf3b-ff30-4ffa-a6ce-1df0143bdade" containerName="mariadb-database-create" Feb 16 23:03:16 crc kubenswrapper[4865]: I0216 23:03:16.594356 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="8980296a-e6bd-4a93-a4e1-658639922b93" containerName="dnsmasq-dns" Feb 16 23:03:16 crc kubenswrapper[4865]: I0216 23:03:16.594363 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="05a77c1b-59aa-4d46-8bb1-2bd3027ea3e9" containerName="mariadb-account-create-update" Feb 16 23:03:16 crc kubenswrapper[4865]: I0216 23:03:16.594373 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="49da7ade-9fd4-4cb1-a47a-4a07a038a7e8" containerName="mariadb-account-create-update" Feb 16 23:03:16 crc kubenswrapper[4865]: I0216 23:03:16.594891 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-gq4bx" Feb 16 23:03:16 crc kubenswrapper[4865]: I0216 23:03:16.597654 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 16 23:03:16 crc kubenswrapper[4865]: I0216 23:03:16.597769 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 16 23:03:16 crc kubenswrapper[4865]: I0216 23:03:16.598861 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 16 23:03:16 crc kubenswrapper[4865]: I0216 23:03:16.613019 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-gq4bx"] Feb 16 23:03:16 crc kubenswrapper[4865]: I0216 23:03:16.643320 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/34486574-e35d-4674-a0b3-57d122050e66-etc-swift\") pod \"swift-storage-0\" (UID: \"34486574-e35d-4674-a0b3-57d122050e66\") " pod="openstack/swift-storage-0" 
Feb 16 23:03:16 crc kubenswrapper[4865]: E0216 23:03:16.643476 4865 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 16 23:03:16 crc kubenswrapper[4865]: E0216 23:03:16.643496 4865 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 16 23:03:16 crc kubenswrapper[4865]: E0216 23:03:16.643542 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34486574-e35d-4674-a0b3-57d122050e66-etc-swift podName:34486574-e35d-4674-a0b3-57d122050e66 nodeName:}" failed. No retries permitted until 2026-02-16 23:03:20.643524237 +0000 UTC m=+1040.967231198 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/34486574-e35d-4674-a0b3-57d122050e66-etc-swift") pod "swift-storage-0" (UID: "34486574-e35d-4674-a0b3-57d122050e66") : configmap "swift-ring-files" not found Feb 16 23:03:16 crc kubenswrapper[4865]: I0216 23:03:16.744633 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3-dispersionconf\") pod \"swift-ring-rebalance-gq4bx\" (UID: \"342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3\") " pod="openstack/swift-ring-rebalance-gq4bx" Feb 16 23:03:16 crc kubenswrapper[4865]: I0216 23:03:16.745548 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3-ring-data-devices\") pod \"swift-ring-rebalance-gq4bx\" (UID: \"342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3\") " pod="openstack/swift-ring-rebalance-gq4bx" Feb 16 23:03:16 crc kubenswrapper[4865]: I0216 23:03:16.745585 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"swiftconf\" (UniqueName: \"kubernetes.io/secret/342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3-swiftconf\") pod \"swift-ring-rebalance-gq4bx\" (UID: \"342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3\") " pod="openstack/swift-ring-rebalance-gq4bx" Feb 16 23:03:16 crc kubenswrapper[4865]: I0216 23:03:16.745642 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9945b\" (UniqueName: \"kubernetes.io/projected/342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3-kube-api-access-9945b\") pod \"swift-ring-rebalance-gq4bx\" (UID: \"342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3\") " pod="openstack/swift-ring-rebalance-gq4bx" Feb 16 23:03:16 crc kubenswrapper[4865]: I0216 23:03:16.745722 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3-scripts\") pod \"swift-ring-rebalance-gq4bx\" (UID: \"342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3\") " pod="openstack/swift-ring-rebalance-gq4bx" Feb 16 23:03:16 crc kubenswrapper[4865]: I0216 23:03:16.745825 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3-combined-ca-bundle\") pod \"swift-ring-rebalance-gq4bx\" (UID: \"342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3\") " pod="openstack/swift-ring-rebalance-gq4bx" Feb 16 23:03:16 crc kubenswrapper[4865]: I0216 23:03:16.745847 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3-etc-swift\") pod \"swift-ring-rebalance-gq4bx\" (UID: \"342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3\") " pod="openstack/swift-ring-rebalance-gq4bx" Feb 16 23:03:16 crc kubenswrapper[4865]: I0216 23:03:16.846904 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3-combined-ca-bundle\") pod \"swift-ring-rebalance-gq4bx\" (UID: \"342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3\") " pod="openstack/swift-ring-rebalance-gq4bx" Feb 16 23:03:16 crc kubenswrapper[4865]: I0216 23:03:16.846947 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3-etc-swift\") pod \"swift-ring-rebalance-gq4bx\" (UID: \"342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3\") " pod="openstack/swift-ring-rebalance-gq4bx" Feb 16 23:03:16 crc kubenswrapper[4865]: I0216 23:03:16.846985 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3-dispersionconf\") pod \"swift-ring-rebalance-gq4bx\" (UID: \"342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3\") " pod="openstack/swift-ring-rebalance-gq4bx" Feb 16 23:03:16 crc kubenswrapper[4865]: I0216 23:03:16.847013 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3-ring-data-devices\") pod \"swift-ring-rebalance-gq4bx\" (UID: \"342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3\") " pod="openstack/swift-ring-rebalance-gq4bx" Feb 16 23:03:16 crc kubenswrapper[4865]: I0216 23:03:16.847032 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3-swiftconf\") pod \"swift-ring-rebalance-gq4bx\" (UID: \"342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3\") " pod="openstack/swift-ring-rebalance-gq4bx" Feb 16 23:03:16 crc kubenswrapper[4865]: I0216 23:03:16.847057 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9945b\" (UniqueName: 
\"kubernetes.io/projected/342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3-kube-api-access-9945b\") pod \"swift-ring-rebalance-gq4bx\" (UID: \"342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3\") " pod="openstack/swift-ring-rebalance-gq4bx" Feb 16 23:03:16 crc kubenswrapper[4865]: I0216 23:03:16.847107 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3-scripts\") pod \"swift-ring-rebalance-gq4bx\" (UID: \"342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3\") " pod="openstack/swift-ring-rebalance-gq4bx" Feb 16 23:03:16 crc kubenswrapper[4865]: I0216 23:03:16.848204 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3-scripts\") pod \"swift-ring-rebalance-gq4bx\" (UID: \"342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3\") " pod="openstack/swift-ring-rebalance-gq4bx" Feb 16 23:03:16 crc kubenswrapper[4865]: I0216 23:03:16.849556 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3-ring-data-devices\") pod \"swift-ring-rebalance-gq4bx\" (UID: \"342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3\") " pod="openstack/swift-ring-rebalance-gq4bx" Feb 16 23:03:16 crc kubenswrapper[4865]: I0216 23:03:16.849716 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3-etc-swift\") pod \"swift-ring-rebalance-gq4bx\" (UID: \"342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3\") " pod="openstack/swift-ring-rebalance-gq4bx" Feb 16 23:03:16 crc kubenswrapper[4865]: I0216 23:03:16.859917 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3-swiftconf\") pod \"swift-ring-rebalance-gq4bx\" (UID: 
\"342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3\") " pod="openstack/swift-ring-rebalance-gq4bx" Feb 16 23:03:16 crc kubenswrapper[4865]: I0216 23:03:16.865687 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3-combined-ca-bundle\") pod \"swift-ring-rebalance-gq4bx\" (UID: \"342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3\") " pod="openstack/swift-ring-rebalance-gq4bx" Feb 16 23:03:16 crc kubenswrapper[4865]: I0216 23:03:16.881671 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3-dispersionconf\") pod \"swift-ring-rebalance-gq4bx\" (UID: \"342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3\") " pod="openstack/swift-ring-rebalance-gq4bx" Feb 16 23:03:16 crc kubenswrapper[4865]: I0216 23:03:16.887034 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9945b\" (UniqueName: \"kubernetes.io/projected/342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3-kube-api-access-9945b\") pod \"swift-ring-rebalance-gq4bx\" (UID: \"342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3\") " pod="openstack/swift-ring-rebalance-gq4bx" Feb 16 23:03:16 crc kubenswrapper[4865]: I0216 23:03:16.923862 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-gq4bx" Feb 16 23:03:17 crc kubenswrapper[4865]: I0216 23:03:17.371720 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-gq4bx"] Feb 16 23:03:17 crc kubenswrapper[4865]: W0216 23:03:17.392269 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod342162e1_dfde_4ad8_b1e6_0a4afc9dbdf3.slice/crio-445b50f98011a612ae6e36d85846fb14fd852c598c6f08df13fb67ccf859c307 WatchSource:0}: Error finding container 445b50f98011a612ae6e36d85846fb14fd852c598c6f08df13fb67ccf859c307: Status 404 returned error can't find the container with id 445b50f98011a612ae6e36d85846fb14fd852c598c6f08df13fb67ccf859c307 Feb 16 23:03:17 crc kubenswrapper[4865]: I0216 23:03:17.444321 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-j2d77"] Feb 16 23:03:17 crc kubenswrapper[4865]: I0216 23:03:17.445343 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-j2d77" Feb 16 23:03:17 crc kubenswrapper[4865]: I0216 23:03:17.448195 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 16 23:03:17 crc kubenswrapper[4865]: I0216 23:03:17.460186 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-j2d77"] Feb 16 23:03:17 crc kubenswrapper[4865]: I0216 23:03:17.573117 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwc58\" (UniqueName: \"kubernetes.io/projected/ce6284b9-07ef-4e41-b832-48c1addaf092-kube-api-access-nwc58\") pod \"root-account-create-update-j2d77\" (UID: \"ce6284b9-07ef-4e41-b832-48c1addaf092\") " pod="openstack/root-account-create-update-j2d77" Feb 16 23:03:17 crc kubenswrapper[4865]: I0216 23:03:17.573227 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce6284b9-07ef-4e41-b832-48c1addaf092-operator-scripts\") pod \"root-account-create-update-j2d77\" (UID: \"ce6284b9-07ef-4e41-b832-48c1addaf092\") " pod="openstack/root-account-create-update-j2d77" Feb 16 23:03:17 crc kubenswrapper[4865]: I0216 23:03:17.661177 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-gq4bx" event={"ID":"342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3","Type":"ContainerStarted","Data":"445b50f98011a612ae6e36d85846fb14fd852c598c6f08df13fb67ccf859c307"} Feb 16 23:03:17 crc kubenswrapper[4865]: I0216 23:03:17.674632 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce6284b9-07ef-4e41-b832-48c1addaf092-operator-scripts\") pod \"root-account-create-update-j2d77\" (UID: \"ce6284b9-07ef-4e41-b832-48c1addaf092\") " pod="openstack/root-account-create-update-j2d77" Feb 16 
23:03:17 crc kubenswrapper[4865]: I0216 23:03:17.674776 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwc58\" (UniqueName: \"kubernetes.io/projected/ce6284b9-07ef-4e41-b832-48c1addaf092-kube-api-access-nwc58\") pod \"root-account-create-update-j2d77\" (UID: \"ce6284b9-07ef-4e41-b832-48c1addaf092\") " pod="openstack/root-account-create-update-j2d77" Feb 16 23:03:17 crc kubenswrapper[4865]: I0216 23:03:17.676194 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce6284b9-07ef-4e41-b832-48c1addaf092-operator-scripts\") pod \"root-account-create-update-j2d77\" (UID: \"ce6284b9-07ef-4e41-b832-48c1addaf092\") " pod="openstack/root-account-create-update-j2d77" Feb 16 23:03:17 crc kubenswrapper[4865]: I0216 23:03:17.703413 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwc58\" (UniqueName: \"kubernetes.io/projected/ce6284b9-07ef-4e41-b832-48c1addaf092-kube-api-access-nwc58\") pod \"root-account-create-update-j2d77\" (UID: \"ce6284b9-07ef-4e41-b832-48c1addaf092\") " pod="openstack/root-account-create-update-j2d77" Feb 16 23:03:17 crc kubenswrapper[4865]: I0216 23:03:17.776783 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-j2d77" Feb 16 23:03:18 crc kubenswrapper[4865]: I0216 23:03:18.284029 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-j2d77"] Feb 16 23:03:20 crc kubenswrapper[4865]: I0216 23:03:20.733588 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/34486574-e35d-4674-a0b3-57d122050e66-etc-swift\") pod \"swift-storage-0\" (UID: \"34486574-e35d-4674-a0b3-57d122050e66\") " pod="openstack/swift-storage-0" Feb 16 23:03:20 crc kubenswrapper[4865]: E0216 23:03:20.733849 4865 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 16 23:03:20 crc kubenswrapper[4865]: E0216 23:03:20.733882 4865 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 16 23:03:20 crc kubenswrapper[4865]: E0216 23:03:20.733961 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34486574-e35d-4674-a0b3-57d122050e66-etc-swift podName:34486574-e35d-4674-a0b3-57d122050e66 nodeName:}" failed. No retries permitted until 2026-02-16 23:03:28.733939272 +0000 UTC m=+1049.057646233 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/34486574-e35d-4674-a0b3-57d122050e66-etc-swift") pod "swift-storage-0" (UID: "34486574-e35d-4674-a0b3-57d122050e66") : configmap "swift-ring-files" not found Feb 16 23:03:21 crc kubenswrapper[4865]: I0216 23:03:21.701736 4865 generic.go:334] "Generic (PLEG): container finished" podID="9f530b91-ceff-467a-a146-60716412bbeb" containerID="8963e92573acdbc904854f796c85e27fe20f916a5436ff813ca16b10b69afaa3" exitCode=0 Feb 16 23:03:21 crc kubenswrapper[4865]: I0216 23:03:21.701802 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9f530b91-ceff-467a-a146-60716412bbeb","Type":"ContainerDied","Data":"8963e92573acdbc904854f796c85e27fe20f916a5436ff813ca16b10b69afaa3"} Feb 16 23:03:21 crc kubenswrapper[4865]: I0216 23:03:21.704538 4865 generic.go:334] "Generic (PLEG): container finished" podID="17869fd2-4bd3-490c-be91-857d7cab1e73" containerID="ce23f91df3b645e3c619579ce46fec34711e865e6983a7989c7f9f247f88c0f1" exitCode=0 Feb 16 23:03:21 crc kubenswrapper[4865]: I0216 23:03:21.704583 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"17869fd2-4bd3-490c-be91-857d7cab1e73","Type":"ContainerDied","Data":"ce23f91df3b645e3c619579ce46fec34711e865e6983a7989c7f9f247f88c0f1"} Feb 16 23:03:21 crc kubenswrapper[4865]: I0216 23:03:21.800957 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 16 23:03:22 crc kubenswrapper[4865]: I0216 23:03:22.120419 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-2c6pd" Feb 16 23:03:22 crc kubenswrapper[4865]: I0216 23:03:22.169492 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-w8bqw"] Feb 16 23:03:22 crc kubenswrapper[4865]: I0216 23:03:22.170106 4865 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openstack/dnsmasq-dns-7fd796d7df-w8bqw" podUID="2512ebf7-5fe8-410c-80bd-2568c4c54572" containerName="dnsmasq-dns" containerID="cri-o://ee64484ff28a5c729602cd4a9f740fc4d9dfa390c637a980edec168ee974f85f" gracePeriod=10 Feb 16 23:03:22 crc kubenswrapper[4865]: I0216 23:03:22.738055 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-w8bqw" event={"ID":"2512ebf7-5fe8-410c-80bd-2568c4c54572","Type":"ContainerDied","Data":"ee64484ff28a5c729602cd4a9f740fc4d9dfa390c637a980edec168ee974f85f"} Feb 16 23:03:22 crc kubenswrapper[4865]: I0216 23:03:22.738428 4865 generic.go:334] "Generic (PLEG): container finished" podID="2512ebf7-5fe8-410c-80bd-2568c4c54572" containerID="ee64484ff28a5c729602cd4a9f740fc4d9dfa390c637a980edec168ee974f85f" exitCode=0 Feb 16 23:03:26 crc kubenswrapper[4865]: I0216 23:03:26.132861 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7fd796d7df-w8bqw" podUID="2512ebf7-5fe8-410c-80bd-2568c4c54572" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.109:5353: connect: connection refused" Feb 16 23:03:26 crc kubenswrapper[4865]: I0216 23:03:26.348143 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-w8bqw" Feb 16 23:03:26 crc kubenswrapper[4865]: I0216 23:03:26.479715 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mf5fj\" (UniqueName: \"kubernetes.io/projected/2512ebf7-5fe8-410c-80bd-2568c4c54572-kube-api-access-mf5fj\") pod \"2512ebf7-5fe8-410c-80bd-2568c4c54572\" (UID: \"2512ebf7-5fe8-410c-80bd-2568c4c54572\") " Feb 16 23:03:26 crc kubenswrapper[4865]: I0216 23:03:26.479849 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2512ebf7-5fe8-410c-80bd-2568c4c54572-ovsdbserver-nb\") pod \"2512ebf7-5fe8-410c-80bd-2568c4c54572\" (UID: \"2512ebf7-5fe8-410c-80bd-2568c4c54572\") " Feb 16 23:03:26 crc kubenswrapper[4865]: I0216 23:03:26.479868 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2512ebf7-5fe8-410c-80bd-2568c4c54572-dns-svc\") pod \"2512ebf7-5fe8-410c-80bd-2568c4c54572\" (UID: \"2512ebf7-5fe8-410c-80bd-2568c4c54572\") " Feb 16 23:03:26 crc kubenswrapper[4865]: I0216 23:03:26.479890 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2512ebf7-5fe8-410c-80bd-2568c4c54572-config\") pod \"2512ebf7-5fe8-410c-80bd-2568c4c54572\" (UID: \"2512ebf7-5fe8-410c-80bd-2568c4c54572\") " Feb 16 23:03:26 crc kubenswrapper[4865]: I0216 23:03:26.484269 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2512ebf7-5fe8-410c-80bd-2568c4c54572-kube-api-access-mf5fj" (OuterVolumeSpecName: "kube-api-access-mf5fj") pod "2512ebf7-5fe8-410c-80bd-2568c4c54572" (UID: "2512ebf7-5fe8-410c-80bd-2568c4c54572"). InnerVolumeSpecName "kube-api-access-mf5fj". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 23:03:26 crc kubenswrapper[4865]: I0216 23:03:26.520628 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2512ebf7-5fe8-410c-80bd-2568c4c54572-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2512ebf7-5fe8-410c-80bd-2568c4c54572" (UID: "2512ebf7-5fe8-410c-80bd-2568c4c54572"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 23:03:26 crc kubenswrapper[4865]: I0216 23:03:26.542562 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2512ebf7-5fe8-410c-80bd-2568c4c54572-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2512ebf7-5fe8-410c-80bd-2568c4c54572" (UID: "2512ebf7-5fe8-410c-80bd-2568c4c54572"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 23:03:26 crc kubenswrapper[4865]: I0216 23:03:26.552189 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2512ebf7-5fe8-410c-80bd-2568c4c54572-config" (OuterVolumeSpecName: "config") pod "2512ebf7-5fe8-410c-80bd-2568c4c54572" (UID: "2512ebf7-5fe8-410c-80bd-2568c4c54572"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 23:03:26 crc kubenswrapper[4865]: I0216 23:03:26.581732 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mf5fj\" (UniqueName: \"kubernetes.io/projected/2512ebf7-5fe8-410c-80bd-2568c4c54572-kube-api-access-mf5fj\") on node \"crc\" DevicePath \"\""
Feb 16 23:03:26 crc kubenswrapper[4865]: I0216 23:03:26.581755 4865 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2512ebf7-5fe8-410c-80bd-2568c4c54572-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 16 23:03:26 crc kubenswrapper[4865]: I0216 23:03:26.581764 4865 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2512ebf7-5fe8-410c-80bd-2568c4c54572-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 16 23:03:26 crc kubenswrapper[4865]: I0216 23:03:26.581773 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2512ebf7-5fe8-410c-80bd-2568c4c54572-config\") on node \"crc\" DevicePath \"\""
Feb 16 23:03:26 crc kubenswrapper[4865]: I0216 23:03:26.787903 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-2dd9q" event={"ID":"38c44ae1-9f36-4df8-ba88-443ca78fe47a","Type":"ContainerStarted","Data":"041b610c14d73a77e0e73b4c24896939b5d7abc09da5dafb68e4ce47448e797e"}
Feb 16 23:03:26 crc kubenswrapper[4865]: I0216 23:03:26.795295 4865 generic.go:334] "Generic (PLEG): container finished" podID="ce6284b9-07ef-4e41-b832-48c1addaf092" containerID="e068f2a49367efebfcb339f4649dd487ab7cfb5f3dacc179a05e14c59a051bbd" exitCode=0
Feb 16 23:03:26 crc kubenswrapper[4865]: I0216 23:03:26.795368 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-j2d77" event={"ID":"ce6284b9-07ef-4e41-b832-48c1addaf092","Type":"ContainerDied","Data":"e068f2a49367efebfcb339f4649dd487ab7cfb5f3dacc179a05e14c59a051bbd"}
Feb 16 23:03:26 crc kubenswrapper[4865]: I0216 23:03:26.795396 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-j2d77" event={"ID":"ce6284b9-07ef-4e41-b832-48c1addaf092","Type":"ContainerStarted","Data":"cfe69eab3482e5478c7cf8a57185733e9b9c79ca0b2c4faecf22f06dfeb15c1e"}
Feb 16 23:03:26 crc kubenswrapper[4865]: I0216 23:03:26.799901 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9f530b91-ceff-467a-a146-60716412bbeb","Type":"ContainerStarted","Data":"413dc52c8c576618c925ec5ea07b1236688bc25d7fa9b26feb155c851f33afc6"}
Feb 16 23:03:26 crc kubenswrapper[4865]: I0216 23:03:26.800084 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Feb 16 23:03:26 crc kubenswrapper[4865]: I0216 23:03:26.805416 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-w8bqw" event={"ID":"2512ebf7-5fe8-410c-80bd-2568c4c54572","Type":"ContainerDied","Data":"3a5b98a51be1833429e45eb1c9d48baa3da448ed071f5ddbaa11108a86383cc0"}
Feb 16 23:03:26 crc kubenswrapper[4865]: I0216 23:03:26.805596 4865 scope.go:117] "RemoveContainer" containerID="ee64484ff28a5c729602cd4a9f740fc4d9dfa390c637a980edec168ee974f85f"
Feb 16 23:03:26 crc kubenswrapper[4865]: I0216 23:03:26.805704 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-w8bqw"
Feb 16 23:03:26 crc kubenswrapper[4865]: I0216 23:03:26.812173 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"17869fd2-4bd3-490c-be91-857d7cab1e73","Type":"ContainerStarted","Data":"f5d2ae589030f7114001a723de2dbeed973554698c74b500878309864eb70800"}
Feb 16 23:03:26 crc kubenswrapper[4865]: I0216 23:03:26.812423 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Feb 16 23:03:26 crc kubenswrapper[4865]: I0216 23:03:26.819257 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-2dd9q" podStartSLOduration=1.794920203 podStartE2EDuration="17.819240591s" podCreationTimestamp="2026-02-16 23:03:09 +0000 UTC" firstStartedPulling="2026-02-16 23:03:10.149445364 +0000 UTC m=+1030.473152325" lastFinishedPulling="2026-02-16 23:03:26.173765752 +0000 UTC m=+1046.497472713" observedRunningTime="2026-02-16 23:03:26.801902002 +0000 UTC m=+1047.125608983" watchObservedRunningTime="2026-02-16 23:03:26.819240591 +0000 UTC m=+1047.142947542"
Feb 16 23:03:26 crc kubenswrapper[4865]: I0216 23:03:26.841780 4865 scope.go:117] "RemoveContainer" containerID="250d2f085398f0ca0109cca71a34507fe8e3330d79325865b9268233d3232be8"
Feb 16 23:03:26 crc kubenswrapper[4865]: I0216 23:03:26.901155 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=44.792616052 podStartE2EDuration="1m2.90114192s" podCreationTimestamp="2026-02-16 23:02:24 +0000 UTC" firstStartedPulling="2026-02-16 23:02:27.025201737 +0000 UTC m=+987.348908698" lastFinishedPulling="2026-02-16 23:02:45.133727605 +0000 UTC m=+1005.457434566" observedRunningTime="2026-02-16 23:03:26.854564527 +0000 UTC m=+1047.178271488" watchObservedRunningTime="2026-02-16 23:03:26.90114192 +0000 UTC m=+1047.224848881"
Feb 16 23:03:26 crc kubenswrapper[4865]: I0216 23:03:26.902061 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=41.718879331 podStartE2EDuration="1m2.902056526s" podCreationTimestamp="2026-02-16 23:02:24 +0000 UTC" firstStartedPulling="2026-02-16 23:02:26.681532607 +0000 UTC m=+987.005239568" lastFinishedPulling="2026-02-16 23:02:47.864709792 +0000 UTC m=+1008.188416763" observedRunningTime="2026-02-16 23:03:26.899263657 +0000 UTC m=+1047.222970618" watchObservedRunningTime="2026-02-16 23:03:26.902056526 +0000 UTC m=+1047.225763477"
Feb 16 23:03:26 crc kubenswrapper[4865]: I0216 23:03:26.924731 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-w8bqw"]
Feb 16 23:03:26 crc kubenswrapper[4865]: I0216 23:03:26.929757 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-w8bqw"]
Feb 16 23:03:28 crc kubenswrapper[4865]: I0216 23:03:28.424265 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2512ebf7-5fe8-410c-80bd-2568c4c54572" path="/var/lib/kubelet/pods/2512ebf7-5fe8-410c-80bd-2568c4c54572/volumes"
Feb 16 23:03:28 crc kubenswrapper[4865]: I0216 23:03:28.820002 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/34486574-e35d-4674-a0b3-57d122050e66-etc-swift\") pod \"swift-storage-0\" (UID: \"34486574-e35d-4674-a0b3-57d122050e66\") " pod="openstack/swift-storage-0"
Feb 16 23:03:28 crc kubenswrapper[4865]: E0216 23:03:28.820245 4865 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 16 23:03:28 crc kubenswrapper[4865]: E0216 23:03:28.820259 4865 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 16 23:03:28 crc kubenswrapper[4865]: E0216 23:03:28.820330 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34486574-e35d-4674-a0b3-57d122050e66-etc-swift podName:34486574-e35d-4674-a0b3-57d122050e66 nodeName:}" failed. No retries permitted until 2026-02-16 23:03:44.820310339 +0000 UTC m=+1065.144017300 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/34486574-e35d-4674-a0b3-57d122050e66-etc-swift") pod "swift-storage-0" (UID: "34486574-e35d-4674-a0b3-57d122050e66") : configmap "swift-ring-files" not found
Feb 16 23:03:29 crc kubenswrapper[4865]: I0216 23:03:29.339496 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-j2d77"
Feb 16 23:03:29 crc kubenswrapper[4865]: I0216 23:03:29.430657 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce6284b9-07ef-4e41-b832-48c1addaf092-operator-scripts\") pod \"ce6284b9-07ef-4e41-b832-48c1addaf092\" (UID: \"ce6284b9-07ef-4e41-b832-48c1addaf092\") "
Feb 16 23:03:29 crc kubenswrapper[4865]: I0216 23:03:29.430905 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwc58\" (UniqueName: \"kubernetes.io/projected/ce6284b9-07ef-4e41-b832-48c1addaf092-kube-api-access-nwc58\") pod \"ce6284b9-07ef-4e41-b832-48c1addaf092\" (UID: \"ce6284b9-07ef-4e41-b832-48c1addaf092\") "
Feb 16 23:03:29 crc kubenswrapper[4865]: I0216 23:03:29.431573 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce6284b9-07ef-4e41-b832-48c1addaf092-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ce6284b9-07ef-4e41-b832-48c1addaf092" (UID: "ce6284b9-07ef-4e41-b832-48c1addaf092"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 23:03:29 crc kubenswrapper[4865]: I0216 23:03:29.436427 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce6284b9-07ef-4e41-b832-48c1addaf092-kube-api-access-nwc58" (OuterVolumeSpecName: "kube-api-access-nwc58") pod "ce6284b9-07ef-4e41-b832-48c1addaf092" (UID: "ce6284b9-07ef-4e41-b832-48c1addaf092"). InnerVolumeSpecName "kube-api-access-nwc58". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 23:03:29 crc kubenswrapper[4865]: I0216 23:03:29.443834 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-plt5q" podUID="abf5edf2-8442-4aca-b35b-051b9f366b9a" containerName="ovn-controller" probeResult="failure" output=<
Feb 16 23:03:29 crc kubenswrapper[4865]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Feb 16 23:03:29 crc kubenswrapper[4865]: >
Feb 16 23:03:29 crc kubenswrapper[4865]: I0216 23:03:29.478916 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-vmd6x"
Feb 16 23:03:29 crc kubenswrapper[4865]: I0216 23:03:29.488832 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-vmd6x"
Feb 16 23:03:29 crc kubenswrapper[4865]: I0216 23:03:29.532882 4865 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce6284b9-07ef-4e41-b832-48c1addaf092-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 23:03:29 crc kubenswrapper[4865]: I0216 23:03:29.532923 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwc58\" (UniqueName: \"kubernetes.io/projected/ce6284b9-07ef-4e41-b832-48c1addaf092-kube-api-access-nwc58\") on node \"crc\" DevicePath \"\""
Feb 16 23:03:29 crc kubenswrapper[4865]: I0216 23:03:29.740403 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-plt5q-config-tv5cf"]
Feb 16 23:03:29 crc kubenswrapper[4865]: E0216 23:03:29.741480 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2512ebf7-5fe8-410c-80bd-2568c4c54572" containerName="init"
Feb 16 23:03:29 crc kubenswrapper[4865]: I0216 23:03:29.741500 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="2512ebf7-5fe8-410c-80bd-2568c4c54572" containerName="init"
Feb 16 23:03:29 crc kubenswrapper[4865]: E0216 23:03:29.741540 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2512ebf7-5fe8-410c-80bd-2568c4c54572" containerName="dnsmasq-dns"
Feb 16 23:03:29 crc kubenswrapper[4865]: I0216 23:03:29.741548 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="2512ebf7-5fe8-410c-80bd-2568c4c54572" containerName="dnsmasq-dns"
Feb 16 23:03:29 crc kubenswrapper[4865]: E0216 23:03:29.741565 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce6284b9-07ef-4e41-b832-48c1addaf092" containerName="mariadb-account-create-update"
Feb 16 23:03:29 crc kubenswrapper[4865]: I0216 23:03:29.741578 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce6284b9-07ef-4e41-b832-48c1addaf092" containerName="mariadb-account-create-update"
Feb 16 23:03:29 crc kubenswrapper[4865]: I0216 23:03:29.741994 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="2512ebf7-5fe8-410c-80bd-2568c4c54572" containerName="dnsmasq-dns"
Feb 16 23:03:29 crc kubenswrapper[4865]: I0216 23:03:29.742022 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce6284b9-07ef-4e41-b832-48c1addaf092" containerName="mariadb-account-create-update"
Feb 16 23:03:29 crc kubenswrapper[4865]: I0216 23:03:29.742973 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-plt5q-config-tv5cf"
Feb 16 23:03:29 crc kubenswrapper[4865]: I0216 23:03:29.751197 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Feb 16 23:03:29 crc kubenswrapper[4865]: I0216 23:03:29.765495 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-plt5q-config-tv5cf"]
Feb 16 23:03:29 crc kubenswrapper[4865]: I0216 23:03:29.838384 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8dc60335-3815-4700-a38d-b3e8ffa463f0-scripts\") pod \"ovn-controller-plt5q-config-tv5cf\" (UID: \"8dc60335-3815-4700-a38d-b3e8ffa463f0\") " pod="openstack/ovn-controller-plt5q-config-tv5cf"
Feb 16 23:03:29 crc kubenswrapper[4865]: I0216 23:03:29.838461 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8dc60335-3815-4700-a38d-b3e8ffa463f0-var-log-ovn\") pod \"ovn-controller-plt5q-config-tv5cf\" (UID: \"8dc60335-3815-4700-a38d-b3e8ffa463f0\") " pod="openstack/ovn-controller-plt5q-config-tv5cf"
Feb 16 23:03:29 crc kubenswrapper[4865]: I0216 23:03:29.838568 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vt8p6\" (UniqueName: \"kubernetes.io/projected/8dc60335-3815-4700-a38d-b3e8ffa463f0-kube-api-access-vt8p6\") pod \"ovn-controller-plt5q-config-tv5cf\" (UID: \"8dc60335-3815-4700-a38d-b3e8ffa463f0\") " pod="openstack/ovn-controller-plt5q-config-tv5cf"
Feb 16 23:03:29 crc kubenswrapper[4865]: I0216 23:03:29.838657 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-j2d77" event={"ID":"ce6284b9-07ef-4e41-b832-48c1addaf092","Type":"ContainerDied","Data":"cfe69eab3482e5478c7cf8a57185733e9b9c79ca0b2c4faecf22f06dfeb15c1e"}
Feb 16 23:03:29 crc kubenswrapper[4865]: I0216 23:03:29.838685 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cfe69eab3482e5478c7cf8a57185733e9b9c79ca0b2c4faecf22f06dfeb15c1e"
Feb 16 23:03:29 crc kubenswrapper[4865]: I0216 23:03:29.838756 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8dc60335-3815-4700-a38d-b3e8ffa463f0-var-run\") pod \"ovn-controller-plt5q-config-tv5cf\" (UID: \"8dc60335-3815-4700-a38d-b3e8ffa463f0\") " pod="openstack/ovn-controller-plt5q-config-tv5cf"
Feb 16 23:03:29 crc kubenswrapper[4865]: I0216 23:03:29.838766 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-j2d77"
Feb 16 23:03:29 crc kubenswrapper[4865]: I0216 23:03:29.838808 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/8dc60335-3815-4700-a38d-b3e8ffa463f0-additional-scripts\") pod \"ovn-controller-plt5q-config-tv5cf\" (UID: \"8dc60335-3815-4700-a38d-b3e8ffa463f0\") " pod="openstack/ovn-controller-plt5q-config-tv5cf"
Feb 16 23:03:29 crc kubenswrapper[4865]: I0216 23:03:29.838873 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8dc60335-3815-4700-a38d-b3e8ffa463f0-var-run-ovn\") pod \"ovn-controller-plt5q-config-tv5cf\" (UID: \"8dc60335-3815-4700-a38d-b3e8ffa463f0\") " pod="openstack/ovn-controller-plt5q-config-tv5cf"
Feb 16 23:03:29 crc kubenswrapper[4865]: I0216 23:03:29.841944 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-gq4bx" event={"ID":"342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3","Type":"ContainerStarted","Data":"2c3e0620dbbf1b5e01df5ec7d001d89005fe7dbcc8c4660f23e4144c0ed4f8a7"}
Feb 16 23:03:29 crc kubenswrapper[4865]: I0216 23:03:29.871919 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-gq4bx" podStartSLOduration=2.160042274 podStartE2EDuration="13.871903787s" podCreationTimestamp="2026-02-16 23:03:16 +0000 UTC" firstStartedPulling="2026-02-16 23:03:17.396541498 +0000 UTC m=+1037.720248459" lastFinishedPulling="2026-02-16 23:03:29.108403011 +0000 UTC m=+1049.432109972" observedRunningTime="2026-02-16 23:03:29.870455146 +0000 UTC m=+1050.194162107" watchObservedRunningTime="2026-02-16 23:03:29.871903787 +0000 UTC m=+1050.195610748"
Feb 16 23:03:29 crc kubenswrapper[4865]: I0216 23:03:29.941105 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8dc60335-3815-4700-a38d-b3e8ffa463f0-var-run\") pod \"ovn-controller-plt5q-config-tv5cf\" (UID: \"8dc60335-3815-4700-a38d-b3e8ffa463f0\") " pod="openstack/ovn-controller-plt5q-config-tv5cf"
Feb 16 23:03:29 crc kubenswrapper[4865]: I0216 23:03:29.941170 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/8dc60335-3815-4700-a38d-b3e8ffa463f0-additional-scripts\") pod \"ovn-controller-plt5q-config-tv5cf\" (UID: \"8dc60335-3815-4700-a38d-b3e8ffa463f0\") " pod="openstack/ovn-controller-plt5q-config-tv5cf"
Feb 16 23:03:29 crc kubenswrapper[4865]: I0216 23:03:29.941226 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8dc60335-3815-4700-a38d-b3e8ffa463f0-var-run-ovn\") pod \"ovn-controller-plt5q-config-tv5cf\" (UID: \"8dc60335-3815-4700-a38d-b3e8ffa463f0\") " pod="openstack/ovn-controller-plt5q-config-tv5cf"
Feb 16 23:03:29 crc kubenswrapper[4865]: I0216 23:03:29.941379 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8dc60335-3815-4700-a38d-b3e8ffa463f0-scripts\") pod \"ovn-controller-plt5q-config-tv5cf\" (UID: \"8dc60335-3815-4700-a38d-b3e8ffa463f0\") " pod="openstack/ovn-controller-plt5q-config-tv5cf"
Feb 16 23:03:29 crc kubenswrapper[4865]: I0216 23:03:29.941436 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8dc60335-3815-4700-a38d-b3e8ffa463f0-var-log-ovn\") pod \"ovn-controller-plt5q-config-tv5cf\" (UID: \"8dc60335-3815-4700-a38d-b3e8ffa463f0\") " pod="openstack/ovn-controller-plt5q-config-tv5cf"
Feb 16 23:03:29 crc kubenswrapper[4865]: I0216 23:03:29.941466 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vt8p6\" (UniqueName: \"kubernetes.io/projected/8dc60335-3815-4700-a38d-b3e8ffa463f0-kube-api-access-vt8p6\") pod \"ovn-controller-plt5q-config-tv5cf\" (UID: \"8dc60335-3815-4700-a38d-b3e8ffa463f0\") " pod="openstack/ovn-controller-plt5q-config-tv5cf"
Feb 16 23:03:29 crc kubenswrapper[4865]: I0216 23:03:29.942028 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8dc60335-3815-4700-a38d-b3e8ffa463f0-var-run-ovn\") pod \"ovn-controller-plt5q-config-tv5cf\" (UID: \"8dc60335-3815-4700-a38d-b3e8ffa463f0\") " pod="openstack/ovn-controller-plt5q-config-tv5cf"
Feb 16 23:03:29 crc kubenswrapper[4865]: I0216 23:03:29.942102 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8dc60335-3815-4700-a38d-b3e8ffa463f0-var-run\") pod \"ovn-controller-plt5q-config-tv5cf\" (UID: \"8dc60335-3815-4700-a38d-b3e8ffa463f0\") " pod="openstack/ovn-controller-plt5q-config-tv5cf"
Feb 16 23:03:29 crc kubenswrapper[4865]: I0216 23:03:29.942322 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8dc60335-3815-4700-a38d-b3e8ffa463f0-var-log-ovn\") pod \"ovn-controller-plt5q-config-tv5cf\" (UID: \"8dc60335-3815-4700-a38d-b3e8ffa463f0\") " pod="openstack/ovn-controller-plt5q-config-tv5cf"
Feb 16 23:03:29 crc kubenswrapper[4865]: I0216 23:03:29.942676 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/8dc60335-3815-4700-a38d-b3e8ffa463f0-additional-scripts\") pod \"ovn-controller-plt5q-config-tv5cf\" (UID: \"8dc60335-3815-4700-a38d-b3e8ffa463f0\") " pod="openstack/ovn-controller-plt5q-config-tv5cf"
Feb 16 23:03:29 crc kubenswrapper[4865]: I0216 23:03:29.944824 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8dc60335-3815-4700-a38d-b3e8ffa463f0-scripts\") pod \"ovn-controller-plt5q-config-tv5cf\" (UID: \"8dc60335-3815-4700-a38d-b3e8ffa463f0\") " pod="openstack/ovn-controller-plt5q-config-tv5cf"
Feb 16 23:03:29 crc kubenswrapper[4865]: I0216 23:03:29.958844 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vt8p6\" (UniqueName: \"kubernetes.io/projected/8dc60335-3815-4700-a38d-b3e8ffa463f0-kube-api-access-vt8p6\") pod \"ovn-controller-plt5q-config-tv5cf\" (UID: \"8dc60335-3815-4700-a38d-b3e8ffa463f0\") " pod="openstack/ovn-controller-plt5q-config-tv5cf"
Feb 16 23:03:30 crc kubenswrapper[4865]: I0216 23:03:30.073788 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-plt5q-config-tv5cf"
Feb 16 23:03:30 crc kubenswrapper[4865]: I0216 23:03:30.541565 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-plt5q-config-tv5cf"]
Feb 16 23:03:30 crc kubenswrapper[4865]: I0216 23:03:30.849869 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-plt5q-config-tv5cf" event={"ID":"8dc60335-3815-4700-a38d-b3e8ffa463f0","Type":"ContainerStarted","Data":"2f12214fa3dbb0a0eb40c49f66e2714d74b0a8cc82e85fb91dc638a5e51aec89"}
Feb 16 23:03:30 crc kubenswrapper[4865]: I0216 23:03:30.850526 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-plt5q-config-tv5cf" event={"ID":"8dc60335-3815-4700-a38d-b3e8ffa463f0","Type":"ContainerStarted","Data":"78df49e96270b832f4bbc070d3e544fb793e28c5b5ef80f54627f1452c25a5a8"}
Feb 16 23:03:30 crc kubenswrapper[4865]: I0216 23:03:30.865527 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-plt5q-config-tv5cf" podStartSLOduration=1.86550941 podStartE2EDuration="1.86550941s" podCreationTimestamp="2026-02-16 23:03:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 23:03:30.863612247 +0000 UTC m=+1051.187319208" watchObservedRunningTime="2026-02-16 23:03:30.86550941 +0000 UTC m=+1051.189216371"
Feb 16 23:03:31 crc kubenswrapper[4865]: I0216 23:03:31.855830 4865 generic.go:334] "Generic (PLEG): container finished" podID="8dc60335-3815-4700-a38d-b3e8ffa463f0" containerID="2f12214fa3dbb0a0eb40c49f66e2714d74b0a8cc82e85fb91dc638a5e51aec89" exitCode=0
Feb 16 23:03:31 crc kubenswrapper[4865]: I0216 23:03:31.856054 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-plt5q-config-tv5cf" event={"ID":"8dc60335-3815-4700-a38d-b3e8ffa463f0","Type":"ContainerDied","Data":"2f12214fa3dbb0a0eb40c49f66e2714d74b0a8cc82e85fb91dc638a5e51aec89"}
Feb 16 23:03:33 crc kubenswrapper[4865]: I0216 23:03:33.234204 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-plt5q-config-tv5cf"
Feb 16 23:03:33 crc kubenswrapper[4865]: I0216 23:03:33.403202 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt8p6\" (UniqueName: \"kubernetes.io/projected/8dc60335-3815-4700-a38d-b3e8ffa463f0-kube-api-access-vt8p6\") pod \"8dc60335-3815-4700-a38d-b3e8ffa463f0\" (UID: \"8dc60335-3815-4700-a38d-b3e8ffa463f0\") "
Feb 16 23:03:33 crc kubenswrapper[4865]: I0216 23:03:33.403335 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8dc60335-3815-4700-a38d-b3e8ffa463f0-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "8dc60335-3815-4700-a38d-b3e8ffa463f0" (UID: "8dc60335-3815-4700-a38d-b3e8ffa463f0"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 23:03:33 crc kubenswrapper[4865]: I0216 23:03:33.405357 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8dc60335-3815-4700-a38d-b3e8ffa463f0-var-log-ovn\") pod \"8dc60335-3815-4700-a38d-b3e8ffa463f0\" (UID: \"8dc60335-3815-4700-a38d-b3e8ffa463f0\") "
Feb 16 23:03:33 crc kubenswrapper[4865]: I0216 23:03:33.405457 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8dc60335-3815-4700-a38d-b3e8ffa463f0-var-run-ovn\") pod \"8dc60335-3815-4700-a38d-b3e8ffa463f0\" (UID: \"8dc60335-3815-4700-a38d-b3e8ffa463f0\") "
Feb 16 23:03:33 crc kubenswrapper[4865]: I0216 23:03:33.405519 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8dc60335-3815-4700-a38d-b3e8ffa463f0-var-run\") pod \"8dc60335-3815-4700-a38d-b3e8ffa463f0\" (UID: \"8dc60335-3815-4700-a38d-b3e8ffa463f0\") "
Feb 16 23:03:33 crc kubenswrapper[4865]: I0216 23:03:33.405603 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8dc60335-3815-4700-a38d-b3e8ffa463f0-scripts\") pod \"8dc60335-3815-4700-a38d-b3e8ffa463f0\" (UID: \"8dc60335-3815-4700-a38d-b3e8ffa463f0\") "
Feb 16 23:03:33 crc kubenswrapper[4865]: I0216 23:03:33.405754 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/8dc60335-3815-4700-a38d-b3e8ffa463f0-additional-scripts\") pod \"8dc60335-3815-4700-a38d-b3e8ffa463f0\" (UID: \"8dc60335-3815-4700-a38d-b3e8ffa463f0\") "
Feb 16 23:03:33 crc kubenswrapper[4865]: I0216 23:03:33.407469 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8dc60335-3815-4700-a38d-b3e8ffa463f0-var-run" (OuterVolumeSpecName: "var-run") pod "8dc60335-3815-4700-a38d-b3e8ffa463f0" (UID: "8dc60335-3815-4700-a38d-b3e8ffa463f0"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 23:03:33 crc kubenswrapper[4865]: I0216 23:03:33.407499 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8dc60335-3815-4700-a38d-b3e8ffa463f0-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "8dc60335-3815-4700-a38d-b3e8ffa463f0" (UID: "8dc60335-3815-4700-a38d-b3e8ffa463f0"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 23:03:33 crc kubenswrapper[4865]: I0216 23:03:33.408011 4865 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8dc60335-3815-4700-a38d-b3e8ffa463f0-var-log-ovn\") on node \"crc\" DevicePath \"\""
Feb 16 23:03:33 crc kubenswrapper[4865]: I0216 23:03:33.408056 4865 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8dc60335-3815-4700-a38d-b3e8ffa463f0-var-run-ovn\") on node \"crc\" DevicePath \"\""
Feb 16 23:03:33 crc kubenswrapper[4865]: I0216 23:03:33.408084 4865 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8dc60335-3815-4700-a38d-b3e8ffa463f0-var-run\") on node \"crc\" DevicePath \"\""
Feb 16 23:03:33 crc kubenswrapper[4865]: I0216 23:03:33.408606 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8dc60335-3815-4700-a38d-b3e8ffa463f0-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "8dc60335-3815-4700-a38d-b3e8ffa463f0" (UID: "8dc60335-3815-4700-a38d-b3e8ffa463f0"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 23:03:33 crc kubenswrapper[4865]: I0216 23:03:33.408900 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8dc60335-3815-4700-a38d-b3e8ffa463f0-scripts" (OuterVolumeSpecName: "scripts") pod "8dc60335-3815-4700-a38d-b3e8ffa463f0" (UID: "8dc60335-3815-4700-a38d-b3e8ffa463f0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 23:03:33 crc kubenswrapper[4865]: I0216 23:03:33.413161 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8dc60335-3815-4700-a38d-b3e8ffa463f0-kube-api-access-vt8p6" (OuterVolumeSpecName: "kube-api-access-vt8p6") pod "8dc60335-3815-4700-a38d-b3e8ffa463f0" (UID: "8dc60335-3815-4700-a38d-b3e8ffa463f0"). InnerVolumeSpecName "kube-api-access-vt8p6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 23:03:33 crc kubenswrapper[4865]: I0216 23:03:33.511227 4865 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/8dc60335-3815-4700-a38d-b3e8ffa463f0-additional-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 23:03:33 crc kubenswrapper[4865]: I0216 23:03:33.511256 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt8p6\" (UniqueName: \"kubernetes.io/projected/8dc60335-3815-4700-a38d-b3e8ffa463f0-kube-api-access-vt8p6\") on node \"crc\" DevicePath \"\""
Feb 16 23:03:33 crc kubenswrapper[4865]: I0216 23:03:33.511270 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8dc60335-3815-4700-a38d-b3e8ffa463f0-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 23:03:33 crc kubenswrapper[4865]: I0216 23:03:33.880478 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-plt5q-config-tv5cf" event={"ID":"8dc60335-3815-4700-a38d-b3e8ffa463f0","Type":"ContainerDied","Data":"78df49e96270b832f4bbc070d3e544fb793e28c5b5ef80f54627f1452c25a5a8"}
Feb 16 23:03:33 crc kubenswrapper[4865]: I0216 23:03:33.880556 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78df49e96270b832f4bbc070d3e544fb793e28c5b5ef80f54627f1452c25a5a8"
Feb 16 23:03:33 crc kubenswrapper[4865]: I0216 23:03:33.880669 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-plt5q-config-tv5cf"
Feb 16 23:03:33 crc kubenswrapper[4865]: I0216 23:03:33.887260 4865 generic.go:334] "Generic (PLEG): container finished" podID="38c44ae1-9f36-4df8-ba88-443ca78fe47a" containerID="041b610c14d73a77e0e73b4c24896939b5d7abc09da5dafb68e4ce47448e797e" exitCode=0
Feb 16 23:03:33 crc kubenswrapper[4865]: I0216 23:03:33.887366 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-2dd9q" event={"ID":"38c44ae1-9f36-4df8-ba88-443ca78fe47a","Type":"ContainerDied","Data":"041b610c14d73a77e0e73b4c24896939b5d7abc09da5dafb68e4ce47448e797e"}
Feb 16 23:03:34 crc kubenswrapper[4865]: I0216 23:03:34.014587 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-plt5q-config-tv5cf"]
Feb 16 23:03:34 crc kubenswrapper[4865]: I0216 23:03:34.020794 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-plt5q-config-tv5cf"]
Feb 16 23:03:34 crc kubenswrapper[4865]: I0216 23:03:34.425939 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8dc60335-3815-4700-a38d-b3e8ffa463f0" path="/var/lib/kubelet/pods/8dc60335-3815-4700-a38d-b3e8ffa463f0/volumes"
Feb 16 23:03:34 crc kubenswrapper[4865]: I0216 23:03:34.428323 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-plt5q"
Feb 16 23:03:35 crc kubenswrapper[4865]: I0216 23:03:35.397564 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-2dd9q"
Feb 16 23:03:35 crc kubenswrapper[4865]: I0216 23:03:35.548206 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/38c44ae1-9f36-4df8-ba88-443ca78fe47a-db-sync-config-data\") pod \"38c44ae1-9f36-4df8-ba88-443ca78fe47a\" (UID: \"38c44ae1-9f36-4df8-ba88-443ca78fe47a\") "
Feb 16 23:03:35 crc kubenswrapper[4865]: I0216 23:03:35.548271 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38c44ae1-9f36-4df8-ba88-443ca78fe47a-config-data\") pod \"38c44ae1-9f36-4df8-ba88-443ca78fe47a\" (UID: \"38c44ae1-9f36-4df8-ba88-443ca78fe47a\") "
Feb 16 23:03:35 crc kubenswrapper[4865]: I0216 23:03:35.548432 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knl4z\" (UniqueName: \"kubernetes.io/projected/38c44ae1-9f36-4df8-ba88-443ca78fe47a-kube-api-access-knl4z\") pod \"38c44ae1-9f36-4df8-ba88-443ca78fe47a\" (UID: \"38c44ae1-9f36-4df8-ba88-443ca78fe47a\") "
Feb 16 23:03:35 crc kubenswrapper[4865]: I0216 23:03:35.548534 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38c44ae1-9f36-4df8-ba88-443ca78fe47a-combined-ca-bundle\") pod \"38c44ae1-9f36-4df8-ba88-443ca78fe47a\" (UID: \"38c44ae1-9f36-4df8-ba88-443ca78fe47a\") "
Feb 16 23:03:35 crc kubenswrapper[4865]: I0216 23:03:35.553732 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38c44ae1-9f36-4df8-ba88-443ca78fe47a-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "38c44ae1-9f36-4df8-ba88-443ca78fe47a" (UID: "38c44ae1-9f36-4df8-ba88-443ca78fe47a"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 23:03:35 crc kubenswrapper[4865]: I0216 23:03:35.561032 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38c44ae1-9f36-4df8-ba88-443ca78fe47a-kube-api-access-knl4z" (OuterVolumeSpecName: "kube-api-access-knl4z") pod "38c44ae1-9f36-4df8-ba88-443ca78fe47a" (UID: "38c44ae1-9f36-4df8-ba88-443ca78fe47a"). InnerVolumeSpecName "kube-api-access-knl4z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 23:03:35 crc kubenswrapper[4865]: I0216 23:03:35.574248 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38c44ae1-9f36-4df8-ba88-443ca78fe47a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "38c44ae1-9f36-4df8-ba88-443ca78fe47a" (UID: "38c44ae1-9f36-4df8-ba88-443ca78fe47a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 23:03:35 crc kubenswrapper[4865]: I0216 23:03:35.592579 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38c44ae1-9f36-4df8-ba88-443ca78fe47a-config-data" (OuterVolumeSpecName: "config-data") pod "38c44ae1-9f36-4df8-ba88-443ca78fe47a" (UID: "38c44ae1-9f36-4df8-ba88-443ca78fe47a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 23:03:35 crc kubenswrapper[4865]: I0216 23:03:35.650694 4865 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/38c44ae1-9f36-4df8-ba88-443ca78fe47a-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 23:03:35 crc kubenswrapper[4865]: I0216 23:03:35.650735 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38c44ae1-9f36-4df8-ba88-443ca78fe47a-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 23:03:35 crc kubenswrapper[4865]: I0216 23:03:35.650748 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-knl4z\" (UniqueName: \"kubernetes.io/projected/38c44ae1-9f36-4df8-ba88-443ca78fe47a-kube-api-access-knl4z\") on node \"crc\" DevicePath \"\""
Feb 16 23:03:35 crc kubenswrapper[4865]: I0216 23:03:35.650758 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38c44ae1-9f36-4df8-ba88-443ca78fe47a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 23:03:35 crc kubenswrapper[4865]: I0216 23:03:35.908647 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-2dd9q" event={"ID":"38c44ae1-9f36-4df8-ba88-443ca78fe47a","Type":"ContainerDied","Data":"f6cc21af8cd22f5cb2d3e34e4384cd138a674768417468d9b4f7e80123a41285"}
Feb 16 23:03:35 crc kubenswrapper[4865]: I0216 23:03:35.908693 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6cc21af8cd22f5cb2d3e34e4384cd138a674768417468d9b4f7e80123a41285"
Feb 16 23:03:35 crc kubenswrapper[4865]: I0216 23:03:35.908728 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-2dd9q" Feb 16 23:03:36 crc kubenswrapper[4865]: I0216 23:03:36.281438 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 16 23:03:36 crc kubenswrapper[4865]: I0216 23:03:36.379194 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-cmrhq"] Feb 16 23:03:36 crc kubenswrapper[4865]: E0216 23:03:36.379579 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dc60335-3815-4700-a38d-b3e8ffa463f0" containerName="ovn-config" Feb 16 23:03:36 crc kubenswrapper[4865]: I0216 23:03:36.379597 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dc60335-3815-4700-a38d-b3e8ffa463f0" containerName="ovn-config" Feb 16 23:03:36 crc kubenswrapper[4865]: E0216 23:03:36.379624 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38c44ae1-9f36-4df8-ba88-443ca78fe47a" containerName="glance-db-sync" Feb 16 23:03:36 crc kubenswrapper[4865]: I0216 23:03:36.379631 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="38c44ae1-9f36-4df8-ba88-443ca78fe47a" containerName="glance-db-sync" Feb 16 23:03:36 crc kubenswrapper[4865]: I0216 23:03:36.379773 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="8dc60335-3815-4700-a38d-b3e8ffa463f0" containerName="ovn-config" Feb 16 23:03:36 crc kubenswrapper[4865]: I0216 23:03:36.379791 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="38c44ae1-9f36-4df8-ba88-443ca78fe47a" containerName="glance-db-sync" Feb 16 23:03:36 crc kubenswrapper[4865]: I0216 23:03:36.381363 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-cmrhq" Feb 16 23:03:36 crc kubenswrapper[4865]: I0216 23:03:36.393500 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-cmrhq"] Feb 16 23:03:36 crc kubenswrapper[4865]: I0216 23:03:36.464437 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8dcb5fa-0ce4-4337-99f4-71317a6cb50a-dns-svc\") pod \"dnsmasq-dns-5b946c75cc-cmrhq\" (UID: \"c8dcb5fa-0ce4-4337-99f4-71317a6cb50a\") " pod="openstack/dnsmasq-dns-5b946c75cc-cmrhq" Feb 16 23:03:36 crc kubenswrapper[4865]: I0216 23:03:36.464485 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c8dcb5fa-0ce4-4337-99f4-71317a6cb50a-ovsdbserver-nb\") pod \"dnsmasq-dns-5b946c75cc-cmrhq\" (UID: \"c8dcb5fa-0ce4-4337-99f4-71317a6cb50a\") " pod="openstack/dnsmasq-dns-5b946c75cc-cmrhq" Feb 16 23:03:36 crc kubenswrapper[4865]: I0216 23:03:36.464532 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8dcb5fa-0ce4-4337-99f4-71317a6cb50a-config\") pod \"dnsmasq-dns-5b946c75cc-cmrhq\" (UID: \"c8dcb5fa-0ce4-4337-99f4-71317a6cb50a\") " pod="openstack/dnsmasq-dns-5b946c75cc-cmrhq" Feb 16 23:03:36 crc kubenswrapper[4865]: I0216 23:03:36.464711 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rb47\" (UniqueName: \"kubernetes.io/projected/c8dcb5fa-0ce4-4337-99f4-71317a6cb50a-kube-api-access-8rb47\") pod \"dnsmasq-dns-5b946c75cc-cmrhq\" (UID: \"c8dcb5fa-0ce4-4337-99f4-71317a6cb50a\") " pod="openstack/dnsmasq-dns-5b946c75cc-cmrhq" Feb 16 23:03:36 crc kubenswrapper[4865]: I0216 23:03:36.464884 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c8dcb5fa-0ce4-4337-99f4-71317a6cb50a-ovsdbserver-sb\") pod \"dnsmasq-dns-5b946c75cc-cmrhq\" (UID: \"c8dcb5fa-0ce4-4337-99f4-71317a6cb50a\") " pod="openstack/dnsmasq-dns-5b946c75cc-cmrhq" Feb 16 23:03:36 crc kubenswrapper[4865]: I0216 23:03:36.566722 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8dcb5fa-0ce4-4337-99f4-71317a6cb50a-config\") pod \"dnsmasq-dns-5b946c75cc-cmrhq\" (UID: \"c8dcb5fa-0ce4-4337-99f4-71317a6cb50a\") " pod="openstack/dnsmasq-dns-5b946c75cc-cmrhq" Feb 16 23:03:36 crc kubenswrapper[4865]: I0216 23:03:36.567119 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rb47\" (UniqueName: \"kubernetes.io/projected/c8dcb5fa-0ce4-4337-99f4-71317a6cb50a-kube-api-access-8rb47\") pod \"dnsmasq-dns-5b946c75cc-cmrhq\" (UID: \"c8dcb5fa-0ce4-4337-99f4-71317a6cb50a\") " pod="openstack/dnsmasq-dns-5b946c75cc-cmrhq" Feb 16 23:03:36 crc kubenswrapper[4865]: I0216 23:03:36.567171 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c8dcb5fa-0ce4-4337-99f4-71317a6cb50a-ovsdbserver-sb\") pod \"dnsmasq-dns-5b946c75cc-cmrhq\" (UID: \"c8dcb5fa-0ce4-4337-99f4-71317a6cb50a\") " pod="openstack/dnsmasq-dns-5b946c75cc-cmrhq" Feb 16 23:03:36 crc kubenswrapper[4865]: I0216 23:03:36.567240 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8dcb5fa-0ce4-4337-99f4-71317a6cb50a-dns-svc\") pod \"dnsmasq-dns-5b946c75cc-cmrhq\" (UID: \"c8dcb5fa-0ce4-4337-99f4-71317a6cb50a\") " pod="openstack/dnsmasq-dns-5b946c75cc-cmrhq" Feb 16 23:03:36 crc kubenswrapper[4865]: I0216 23:03:36.567264 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/c8dcb5fa-0ce4-4337-99f4-71317a6cb50a-ovsdbserver-nb\") pod \"dnsmasq-dns-5b946c75cc-cmrhq\" (UID: \"c8dcb5fa-0ce4-4337-99f4-71317a6cb50a\") " pod="openstack/dnsmasq-dns-5b946c75cc-cmrhq" Feb 16 23:03:36 crc kubenswrapper[4865]: I0216 23:03:36.567587 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8dcb5fa-0ce4-4337-99f4-71317a6cb50a-config\") pod \"dnsmasq-dns-5b946c75cc-cmrhq\" (UID: \"c8dcb5fa-0ce4-4337-99f4-71317a6cb50a\") " pod="openstack/dnsmasq-dns-5b946c75cc-cmrhq" Feb 16 23:03:36 crc kubenswrapper[4865]: I0216 23:03:36.568020 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c8dcb5fa-0ce4-4337-99f4-71317a6cb50a-ovsdbserver-nb\") pod \"dnsmasq-dns-5b946c75cc-cmrhq\" (UID: \"c8dcb5fa-0ce4-4337-99f4-71317a6cb50a\") " pod="openstack/dnsmasq-dns-5b946c75cc-cmrhq" Feb 16 23:03:36 crc kubenswrapper[4865]: I0216 23:03:36.568144 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c8dcb5fa-0ce4-4337-99f4-71317a6cb50a-ovsdbserver-sb\") pod \"dnsmasq-dns-5b946c75cc-cmrhq\" (UID: \"c8dcb5fa-0ce4-4337-99f4-71317a6cb50a\") " pod="openstack/dnsmasq-dns-5b946c75cc-cmrhq" Feb 16 23:03:36 crc kubenswrapper[4865]: I0216 23:03:36.568592 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8dcb5fa-0ce4-4337-99f4-71317a6cb50a-dns-svc\") pod \"dnsmasq-dns-5b946c75cc-cmrhq\" (UID: \"c8dcb5fa-0ce4-4337-99f4-71317a6cb50a\") " pod="openstack/dnsmasq-dns-5b946c75cc-cmrhq" Feb 16 23:03:36 crc kubenswrapper[4865]: I0216 23:03:36.586614 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rb47\" (UniqueName: \"kubernetes.io/projected/c8dcb5fa-0ce4-4337-99f4-71317a6cb50a-kube-api-access-8rb47\") pod 
\"dnsmasq-dns-5b946c75cc-cmrhq\" (UID: \"c8dcb5fa-0ce4-4337-99f4-71317a6cb50a\") " pod="openstack/dnsmasq-dns-5b946c75cc-cmrhq" Feb 16 23:03:36 crc kubenswrapper[4865]: I0216 23:03:36.709878 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-cmrhq" Feb 16 23:03:36 crc kubenswrapper[4865]: I0216 23:03:36.966826 4865 generic.go:334] "Generic (PLEG): container finished" podID="342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3" containerID="2c3e0620dbbf1b5e01df5ec7d001d89005fe7dbcc8c4660f23e4144c0ed4f8a7" exitCode=0 Feb 16 23:03:36 crc kubenswrapper[4865]: I0216 23:03:36.967043 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-gq4bx" event={"ID":"342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3","Type":"ContainerDied","Data":"2c3e0620dbbf1b5e01df5ec7d001d89005fe7dbcc8c4660f23e4144c0ed4f8a7"} Feb 16 23:03:37 crc kubenswrapper[4865]: I0216 23:03:37.229737 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-cmrhq"] Feb 16 23:03:37 crc kubenswrapper[4865]: I0216 23:03:37.980090 4865 generic.go:334] "Generic (PLEG): container finished" podID="c8dcb5fa-0ce4-4337-99f4-71317a6cb50a" containerID="8c65117d8ae27e4e3bee9b5d12360857b6b599279c1359b964481e9cfbb96d61" exitCode=0 Feb 16 23:03:37 crc kubenswrapper[4865]: I0216 23:03:37.980148 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-cmrhq" event={"ID":"c8dcb5fa-0ce4-4337-99f4-71317a6cb50a","Type":"ContainerDied","Data":"8c65117d8ae27e4e3bee9b5d12360857b6b599279c1359b964481e9cfbb96d61"} Feb 16 23:03:37 crc kubenswrapper[4865]: I0216 23:03:37.980489 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-cmrhq" event={"ID":"c8dcb5fa-0ce4-4337-99f4-71317a6cb50a","Type":"ContainerStarted","Data":"de1e22a2dc252adf49e4e9307030567673c3248fb9a352dc0541ac7098627010"} Feb 16 23:03:38 crc kubenswrapper[4865]: I0216 23:03:38.367791 4865 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-gq4bx" Feb 16 23:03:38 crc kubenswrapper[4865]: I0216 23:03:38.498126 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3-ring-data-devices\") pod \"342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3\" (UID: \"342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3\") " Feb 16 23:03:38 crc kubenswrapper[4865]: I0216 23:03:38.498251 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9945b\" (UniqueName: \"kubernetes.io/projected/342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3-kube-api-access-9945b\") pod \"342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3\" (UID: \"342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3\") " Feb 16 23:03:38 crc kubenswrapper[4865]: I0216 23:03:38.498320 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3-dispersionconf\") pod \"342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3\" (UID: \"342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3\") " Feb 16 23:03:38 crc kubenswrapper[4865]: I0216 23:03:38.498382 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3-combined-ca-bundle\") pod \"342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3\" (UID: \"342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3\") " Feb 16 23:03:38 crc kubenswrapper[4865]: I0216 23:03:38.498459 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3-swiftconf\") pod \"342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3\" (UID: \"342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3\") " Feb 16 23:03:38 crc kubenswrapper[4865]: I0216 23:03:38.498494 4865 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3-etc-swift\") pod \"342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3\" (UID: \"342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3\") " Feb 16 23:03:38 crc kubenswrapper[4865]: I0216 23:03:38.498557 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3-scripts\") pod \"342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3\" (UID: \"342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3\") " Feb 16 23:03:38 crc kubenswrapper[4865]: I0216 23:03:38.499474 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3" (UID: "342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:03:38 crc kubenswrapper[4865]: I0216 23:03:38.500098 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3" (UID: "342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:03:38 crc kubenswrapper[4865]: I0216 23:03:38.504812 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3-kube-api-access-9945b" (OuterVolumeSpecName: "kube-api-access-9945b") pod "342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3" (UID: "342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3"). InnerVolumeSpecName "kube-api-access-9945b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:03:38 crc kubenswrapper[4865]: I0216 23:03:38.506407 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3" (UID: "342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:03:38 crc kubenswrapper[4865]: I0216 23:03:38.525687 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3" (UID: "342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:03:38 crc kubenswrapper[4865]: I0216 23:03:38.534689 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3-scripts" (OuterVolumeSpecName: "scripts") pod "342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3" (UID: "342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:03:38 crc kubenswrapper[4865]: I0216 23:03:38.536430 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3" (UID: "342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:03:38 crc kubenswrapper[4865]: I0216 23:03:38.600741 4865 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 16 23:03:38 crc kubenswrapper[4865]: I0216 23:03:38.601001 4865 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 16 23:03:38 crc kubenswrapper[4865]: I0216 23:03:38.601010 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 23:03:38 crc kubenswrapper[4865]: I0216 23:03:38.601031 4865 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 16 23:03:38 crc kubenswrapper[4865]: I0216 23:03:38.601043 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9945b\" (UniqueName: \"kubernetes.io/projected/342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3-kube-api-access-9945b\") on node \"crc\" DevicePath \"\"" Feb 16 23:03:38 crc kubenswrapper[4865]: I0216 23:03:38.601053 4865 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 16 23:03:38 crc kubenswrapper[4865]: I0216 23:03:38.601064 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 23:03:38 crc kubenswrapper[4865]: I0216 23:03:38.994200 4865 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-cmrhq" event={"ID":"c8dcb5fa-0ce4-4337-99f4-71317a6cb50a","Type":"ContainerStarted","Data":"723cf77b5d28b547b58bce73488a13ab6681ba903e51af345e10635b9efe79c0"} Feb 16 23:03:38 crc kubenswrapper[4865]: I0216 23:03:38.994550 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b946c75cc-cmrhq" Feb 16 23:03:38 crc kubenswrapper[4865]: I0216 23:03:38.997553 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-gq4bx" event={"ID":"342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3","Type":"ContainerDied","Data":"445b50f98011a612ae6e36d85846fb14fd852c598c6f08df13fb67ccf859c307"} Feb 16 23:03:38 crc kubenswrapper[4865]: I0216 23:03:38.997624 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="445b50f98011a612ae6e36d85846fb14fd852c598c6f08df13fb67ccf859c307" Feb 16 23:03:38 crc kubenswrapper[4865]: I0216 23:03:38.998641 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-gq4bx" Feb 16 23:03:39 crc kubenswrapper[4865]: I0216 23:03:39.035602 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b946c75cc-cmrhq" podStartSLOduration=3.035568346 podStartE2EDuration="3.035568346s" podCreationTimestamp="2026-02-16 23:03:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 23:03:39.026160351 +0000 UTC m=+1059.349867352" watchObservedRunningTime="2026-02-16 23:03:39.035568346 +0000 UTC m=+1059.359275347" Feb 16 23:03:44 crc kubenswrapper[4865]: I0216 23:03:44.822061 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/34486574-e35d-4674-a0b3-57d122050e66-etc-swift\") pod \"swift-storage-0\" (UID: \"34486574-e35d-4674-a0b3-57d122050e66\") " pod="openstack/swift-storage-0" Feb 16 23:03:44 crc kubenswrapper[4865]: I0216 23:03:44.832179 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/34486574-e35d-4674-a0b3-57d122050e66-etc-swift\") pod \"swift-storage-0\" (UID: \"34486574-e35d-4674-a0b3-57d122050e66\") " pod="openstack/swift-storage-0" Feb 16 23:03:44 crc kubenswrapper[4865]: I0216 23:03:44.963515 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 16 23:03:45 crc kubenswrapper[4865]: I0216 23:03:45.636955 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 16 23:03:45 crc kubenswrapper[4865]: I0216 23:03:45.955646 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 16 23:03:46 crc kubenswrapper[4865]: I0216 23:03:46.067849 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"34486574-e35d-4674-a0b3-57d122050e66","Type":"ContainerStarted","Data":"6538f1ee05965eca87ea191a23c78fdce4139ffb90e81758c7b2753feed1b9a4"} Feb 16 23:03:46 crc kubenswrapper[4865]: I0216 23:03:46.335568 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-bb7dg"] Feb 16 23:03:46 crc kubenswrapper[4865]: E0216 23:03:46.336077 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3" containerName="swift-ring-rebalance" Feb 16 23:03:46 crc kubenswrapper[4865]: I0216 23:03:46.336108 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3" containerName="swift-ring-rebalance" Feb 16 23:03:46 crc kubenswrapper[4865]: I0216 23:03:46.336327 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3" containerName="swift-ring-rebalance" Feb 16 23:03:46 crc kubenswrapper[4865]: I0216 23:03:46.336991 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-bb7dg" Feb 16 23:03:46 crc kubenswrapper[4865]: I0216 23:03:46.358245 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-bb7dg"] Feb 16 23:03:46 crc kubenswrapper[4865]: I0216 23:03:46.436456 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-z9hq9"] Feb 16 23:03:46 crc kubenswrapper[4865]: I0216 23:03:46.438901 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-z9hq9" Feb 16 23:03:46 crc kubenswrapper[4865]: I0216 23:03:46.452778 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f977bdae-bdb8-4a49-83e1-55e7264f274b-operator-scripts\") pod \"cinder-db-create-bb7dg\" (UID: \"f977bdae-bdb8-4a49-83e1-55e7264f274b\") " pod="openstack/cinder-db-create-bb7dg" Feb 16 23:03:46 crc kubenswrapper[4865]: I0216 23:03:46.452948 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6w86d\" (UniqueName: \"kubernetes.io/projected/f977bdae-bdb8-4a49-83e1-55e7264f274b-kube-api-access-6w86d\") pod \"cinder-db-create-bb7dg\" (UID: \"f977bdae-bdb8-4a49-83e1-55e7264f274b\") " pod="openstack/cinder-db-create-bb7dg" Feb 16 23:03:46 crc kubenswrapper[4865]: I0216 23:03:46.460655 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-d54b-account-create-update-zmm8f"] Feb 16 23:03:46 crc kubenswrapper[4865]: I0216 23:03:46.462028 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-d54b-account-create-update-zmm8f" Feb 16 23:03:46 crc kubenswrapper[4865]: I0216 23:03:46.469693 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 16 23:03:46 crc kubenswrapper[4865]: I0216 23:03:46.507271 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-d54b-account-create-update-zmm8f"] Feb 16 23:03:46 crc kubenswrapper[4865]: I0216 23:03:46.554310 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6w86d\" (UniqueName: \"kubernetes.io/projected/f977bdae-bdb8-4a49-83e1-55e7264f274b-kube-api-access-6w86d\") pod \"cinder-db-create-bb7dg\" (UID: \"f977bdae-bdb8-4a49-83e1-55e7264f274b\") " pod="openstack/cinder-db-create-bb7dg" Feb 16 23:03:46 crc kubenswrapper[4865]: I0216 23:03:46.554417 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7m78\" (UniqueName: \"kubernetes.io/projected/f5828317-93f7-47f3-9769-8dae9b438530-kube-api-access-f7m78\") pod \"barbican-db-create-z9hq9\" (UID: \"f5828317-93f7-47f3-9769-8dae9b438530\") " pod="openstack/barbican-db-create-z9hq9" Feb 16 23:03:46 crc kubenswrapper[4865]: I0216 23:03:46.554462 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f977bdae-bdb8-4a49-83e1-55e7264f274b-operator-scripts\") pod \"cinder-db-create-bb7dg\" (UID: \"f977bdae-bdb8-4a49-83e1-55e7264f274b\") " pod="openstack/cinder-db-create-bb7dg" Feb 16 23:03:46 crc kubenswrapper[4865]: I0216 23:03:46.554504 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d9bf5a97-df75-436c-b2bf-6b64b55a071e-operator-scripts\") pod \"cinder-d54b-account-create-update-zmm8f\" (UID: \"d9bf5a97-df75-436c-b2bf-6b64b55a071e\") " 
pod="openstack/cinder-d54b-account-create-update-zmm8f" Feb 16 23:03:46 crc kubenswrapper[4865]: I0216 23:03:46.554561 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrmdt\" (UniqueName: \"kubernetes.io/projected/d9bf5a97-df75-436c-b2bf-6b64b55a071e-kube-api-access-nrmdt\") pod \"cinder-d54b-account-create-update-zmm8f\" (UID: \"d9bf5a97-df75-436c-b2bf-6b64b55a071e\") " pod="openstack/cinder-d54b-account-create-update-zmm8f" Feb 16 23:03:46 crc kubenswrapper[4865]: I0216 23:03:46.554603 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5828317-93f7-47f3-9769-8dae9b438530-operator-scripts\") pod \"barbican-db-create-z9hq9\" (UID: \"f5828317-93f7-47f3-9769-8dae9b438530\") " pod="openstack/barbican-db-create-z9hq9" Feb 16 23:03:46 crc kubenswrapper[4865]: I0216 23:03:46.556647 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f977bdae-bdb8-4a49-83e1-55e7264f274b-operator-scripts\") pod \"cinder-db-create-bb7dg\" (UID: \"f977bdae-bdb8-4a49-83e1-55e7264f274b\") " pod="openstack/cinder-db-create-bb7dg" Feb 16 23:03:46 crc kubenswrapper[4865]: I0216 23:03:46.565630 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-z9hq9"] Feb 16 23:03:46 crc kubenswrapper[4865]: I0216 23:03:46.609892 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6w86d\" (UniqueName: \"kubernetes.io/projected/f977bdae-bdb8-4a49-83e1-55e7264f274b-kube-api-access-6w86d\") pod \"cinder-db-create-bb7dg\" (UID: \"f977bdae-bdb8-4a49-83e1-55e7264f274b\") " pod="openstack/cinder-db-create-bb7dg" Feb 16 23:03:46 crc kubenswrapper[4865]: I0216 23:03:46.635840 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-h7whk"] Feb 16 23:03:46 crc 
kubenswrapper[4865]: I0216 23:03:46.637584 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-h7whk" Feb 16 23:03:46 crc kubenswrapper[4865]: I0216 23:03:46.651444 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-fd20-account-create-update-glcp6"] Feb 16 23:03:46 crc kubenswrapper[4865]: I0216 23:03:46.653020 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-fd20-account-create-update-glcp6" Feb 16 23:03:46 crc kubenswrapper[4865]: I0216 23:03:46.655501 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 16 23:03:46 crc kubenswrapper[4865]: I0216 23:03:46.656672 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrmdt\" (UniqueName: \"kubernetes.io/projected/d9bf5a97-df75-436c-b2bf-6b64b55a071e-kube-api-access-nrmdt\") pod \"cinder-d54b-account-create-update-zmm8f\" (UID: \"d9bf5a97-df75-436c-b2bf-6b64b55a071e\") " pod="openstack/cinder-d54b-account-create-update-zmm8f" Feb 16 23:03:46 crc kubenswrapper[4865]: I0216 23:03:46.656719 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5828317-93f7-47f3-9769-8dae9b438530-operator-scripts\") pod \"barbican-db-create-z9hq9\" (UID: \"f5828317-93f7-47f3-9769-8dae9b438530\") " pod="openstack/barbican-db-create-z9hq9" Feb 16 23:03:46 crc kubenswrapper[4865]: I0216 23:03:46.656773 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7m78\" (UniqueName: \"kubernetes.io/projected/f5828317-93f7-47f3-9769-8dae9b438530-kube-api-access-f7m78\") pod \"barbican-db-create-z9hq9\" (UID: \"f5828317-93f7-47f3-9769-8dae9b438530\") " pod="openstack/barbican-db-create-z9hq9" Feb 16 23:03:46 crc kubenswrapper[4865]: I0216 23:03:46.656821 4865 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d9bf5a97-df75-436c-b2bf-6b64b55a071e-operator-scripts\") pod \"cinder-d54b-account-create-update-zmm8f\" (UID: \"d9bf5a97-df75-436c-b2bf-6b64b55a071e\") " pod="openstack/cinder-d54b-account-create-update-zmm8f" Feb 16 23:03:46 crc kubenswrapper[4865]: I0216 23:03:46.657941 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d9bf5a97-df75-436c-b2bf-6b64b55a071e-operator-scripts\") pod \"cinder-d54b-account-create-update-zmm8f\" (UID: \"d9bf5a97-df75-436c-b2bf-6b64b55a071e\") " pod="openstack/cinder-d54b-account-create-update-zmm8f" Feb 16 23:03:46 crc kubenswrapper[4865]: I0216 23:03:46.658743 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5828317-93f7-47f3-9769-8dae9b438530-operator-scripts\") pod \"barbican-db-create-z9hq9\" (UID: \"f5828317-93f7-47f3-9769-8dae9b438530\") " pod="openstack/barbican-db-create-z9hq9" Feb 16 23:03:46 crc kubenswrapper[4865]: I0216 23:03:46.659363 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-bb7dg" Feb 16 23:03:46 crc kubenswrapper[4865]: I0216 23:03:46.659702 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-h7whk"] Feb 16 23:03:46 crc kubenswrapper[4865]: I0216 23:03:46.665192 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-fd20-account-create-update-glcp6"] Feb 16 23:03:46 crc kubenswrapper[4865]: I0216 23:03:46.697351 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7m78\" (UniqueName: \"kubernetes.io/projected/f5828317-93f7-47f3-9769-8dae9b438530-kube-api-access-f7m78\") pod \"barbican-db-create-z9hq9\" (UID: \"f5828317-93f7-47f3-9769-8dae9b438530\") " pod="openstack/barbican-db-create-z9hq9" Feb 16 23:03:46 crc kubenswrapper[4865]: I0216 23:03:46.700321 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrmdt\" (UniqueName: \"kubernetes.io/projected/d9bf5a97-df75-436c-b2bf-6b64b55a071e-kube-api-access-nrmdt\") pod \"cinder-d54b-account-create-update-zmm8f\" (UID: \"d9bf5a97-df75-436c-b2bf-6b64b55a071e\") " pod="openstack/cinder-d54b-account-create-update-zmm8f" Feb 16 23:03:46 crc kubenswrapper[4865]: I0216 23:03:46.712408 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b946c75cc-cmrhq" Feb 16 23:03:46 crc kubenswrapper[4865]: I0216 23:03:46.758597 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1df01bc8-2180-4799-b5bc-786690440fca-operator-scripts\") pod \"barbican-fd20-account-create-update-glcp6\" (UID: \"1df01bc8-2180-4799-b5bc-786690440fca\") " pod="openstack/barbican-fd20-account-create-update-glcp6" Feb 16 23:03:46 crc kubenswrapper[4865]: I0216 23:03:46.758658 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-24qv4\" (UniqueName: \"kubernetes.io/projected/d7bfd6f5-8048-45ee-a942-dd66d72bcf0e-kube-api-access-24qv4\") pod \"neutron-db-create-h7whk\" (UID: \"d7bfd6f5-8048-45ee-a942-dd66d72bcf0e\") " pod="openstack/neutron-db-create-h7whk" Feb 16 23:03:46 crc kubenswrapper[4865]: I0216 23:03:46.758741 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7bfd6f5-8048-45ee-a942-dd66d72bcf0e-operator-scripts\") pod \"neutron-db-create-h7whk\" (UID: \"d7bfd6f5-8048-45ee-a942-dd66d72bcf0e\") " pod="openstack/neutron-db-create-h7whk" Feb 16 23:03:46 crc kubenswrapper[4865]: I0216 23:03:46.758805 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxl2v\" (UniqueName: \"kubernetes.io/projected/1df01bc8-2180-4799-b5bc-786690440fca-kube-api-access-hxl2v\") pod \"barbican-fd20-account-create-update-glcp6\" (UID: \"1df01bc8-2180-4799-b5bc-786690440fca\") " pod="openstack/barbican-fd20-account-create-update-glcp6" Feb 16 23:03:46 crc kubenswrapper[4865]: I0216 23:03:46.766699 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-z9hq9" Feb 16 23:03:46 crc kubenswrapper[4865]: I0216 23:03:46.777978 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-4mdbk"] Feb 16 23:03:46 crc kubenswrapper[4865]: I0216 23:03:46.779057 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-4mdbk" Feb 16 23:03:46 crc kubenswrapper[4865]: I0216 23:03:46.782457 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 16 23:03:46 crc kubenswrapper[4865]: I0216 23:03:46.782808 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 16 23:03:46 crc kubenswrapper[4865]: I0216 23:03:46.783032 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 16 23:03:46 crc kubenswrapper[4865]: I0216 23:03:46.784124 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-sj5dj" Feb 16 23:03:46 crc kubenswrapper[4865]: I0216 23:03:46.788822 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-4mdbk"] Feb 16 23:03:46 crc kubenswrapper[4865]: I0216 23:03:46.795863 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-d54b-account-create-update-zmm8f" Feb 16 23:03:46 crc kubenswrapper[4865]: I0216 23:03:46.870692 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1df01bc8-2180-4799-b5bc-786690440fca-operator-scripts\") pod \"barbican-fd20-account-create-update-glcp6\" (UID: \"1df01bc8-2180-4799-b5bc-786690440fca\") " pod="openstack/barbican-fd20-account-create-update-glcp6" Feb 16 23:03:46 crc kubenswrapper[4865]: I0216 23:03:46.870788 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkm7j\" (UniqueName: \"kubernetes.io/projected/9477b3a0-b4e2-4315-ba8a-37d389880da9-kube-api-access-mkm7j\") pod \"keystone-db-sync-4mdbk\" (UID: \"9477b3a0-b4e2-4315-ba8a-37d389880da9\") " pod="openstack/keystone-db-sync-4mdbk" Feb 16 23:03:46 crc kubenswrapper[4865]: I0216 23:03:46.872269 4865 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-24qv4\" (UniqueName: \"kubernetes.io/projected/d7bfd6f5-8048-45ee-a942-dd66d72bcf0e-kube-api-access-24qv4\") pod \"neutron-db-create-h7whk\" (UID: \"d7bfd6f5-8048-45ee-a942-dd66d72bcf0e\") " pod="openstack/neutron-db-create-h7whk" Feb 16 23:03:46 crc kubenswrapper[4865]: I0216 23:03:46.872653 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7bfd6f5-8048-45ee-a942-dd66d72bcf0e-operator-scripts\") pod \"neutron-db-create-h7whk\" (UID: \"d7bfd6f5-8048-45ee-a942-dd66d72bcf0e\") " pod="openstack/neutron-db-create-h7whk" Feb 16 23:03:46 crc kubenswrapper[4865]: I0216 23:03:46.874156 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-2c6pd"] Feb 16 23:03:46 crc kubenswrapper[4865]: I0216 23:03:46.874504 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9477b3a0-b4e2-4315-ba8a-37d389880da9-combined-ca-bundle\") pod \"keystone-db-sync-4mdbk\" (UID: \"9477b3a0-b4e2-4315-ba8a-37d389880da9\") " pod="openstack/keystone-db-sync-4mdbk" Feb 16 23:03:46 crc kubenswrapper[4865]: I0216 23:03:46.874669 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1df01bc8-2180-4799-b5bc-786690440fca-operator-scripts\") pod \"barbican-fd20-account-create-update-glcp6\" (UID: \"1df01bc8-2180-4799-b5bc-786690440fca\") " pod="openstack/barbican-fd20-account-create-update-glcp6" Feb 16 23:03:46 crc kubenswrapper[4865]: I0216 23:03:46.874818 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9477b3a0-b4e2-4315-ba8a-37d389880da9-config-data\") pod \"keystone-db-sync-4mdbk\" (UID: 
\"9477b3a0-b4e2-4315-ba8a-37d389880da9\") " pod="openstack/keystone-db-sync-4mdbk" Feb 16 23:03:46 crc kubenswrapper[4865]: I0216 23:03:46.874881 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxl2v\" (UniqueName: \"kubernetes.io/projected/1df01bc8-2180-4799-b5bc-786690440fca-kube-api-access-hxl2v\") pod \"barbican-fd20-account-create-update-glcp6\" (UID: \"1df01bc8-2180-4799-b5bc-786690440fca\") " pod="openstack/barbican-fd20-account-create-update-glcp6" Feb 16 23:03:46 crc kubenswrapper[4865]: I0216 23:03:46.874882 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-2c6pd" podUID="354e44fb-0dd9-4935-b7b6-da13d52fb91c" containerName="dnsmasq-dns" containerID="cri-o://666baa91ced7aab1f1c04a9b58f8a20da2d8fc4a4f0ee80ff47a3c1c55c016ff" gracePeriod=10 Feb 16 23:03:46 crc kubenswrapper[4865]: I0216 23:03:46.875463 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7bfd6f5-8048-45ee-a942-dd66d72bcf0e-operator-scripts\") pod \"neutron-db-create-h7whk\" (UID: \"d7bfd6f5-8048-45ee-a942-dd66d72bcf0e\") " pod="openstack/neutron-db-create-h7whk" Feb 16 23:03:46 crc kubenswrapper[4865]: I0216 23:03:46.890508 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-a307-account-create-update-fccrh"] Feb 16 23:03:46 crc kubenswrapper[4865]: I0216 23:03:46.898451 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-a307-account-create-update-fccrh" Feb 16 23:03:46 crc kubenswrapper[4865]: I0216 23:03:46.909472 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24qv4\" (UniqueName: \"kubernetes.io/projected/d7bfd6f5-8048-45ee-a942-dd66d72bcf0e-kube-api-access-24qv4\") pod \"neutron-db-create-h7whk\" (UID: \"d7bfd6f5-8048-45ee-a942-dd66d72bcf0e\") " pod="openstack/neutron-db-create-h7whk" Feb 16 23:03:46 crc kubenswrapper[4865]: I0216 23:03:46.915542 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 16 23:03:46 crc kubenswrapper[4865]: I0216 23:03:46.918997 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-a307-account-create-update-fccrh"] Feb 16 23:03:46 crc kubenswrapper[4865]: I0216 23:03:46.927871 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxl2v\" (UniqueName: \"kubernetes.io/projected/1df01bc8-2180-4799-b5bc-786690440fca-kube-api-access-hxl2v\") pod \"barbican-fd20-account-create-update-glcp6\" (UID: \"1df01bc8-2180-4799-b5bc-786690440fca\") " pod="openstack/barbican-fd20-account-create-update-glcp6" Feb 16 23:03:46 crc kubenswrapper[4865]: I0216 23:03:46.965236 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-h7whk" Feb 16 23:03:46 crc kubenswrapper[4865]: I0216 23:03:46.976556 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9477b3a0-b4e2-4315-ba8a-37d389880da9-combined-ca-bundle\") pod \"keystone-db-sync-4mdbk\" (UID: \"9477b3a0-b4e2-4315-ba8a-37d389880da9\") " pod="openstack/keystone-db-sync-4mdbk" Feb 16 23:03:46 crc kubenswrapper[4865]: I0216 23:03:46.976629 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ffv8\" (UniqueName: \"kubernetes.io/projected/277094e2-0b6c-42e2-bcc6-d6afebb1bec1-kube-api-access-7ffv8\") pod \"neutron-a307-account-create-update-fccrh\" (UID: \"277094e2-0b6c-42e2-bcc6-d6afebb1bec1\") " pod="openstack/neutron-a307-account-create-update-fccrh" Feb 16 23:03:46 crc kubenswrapper[4865]: I0216 23:03:46.976659 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9477b3a0-b4e2-4315-ba8a-37d389880da9-config-data\") pod \"keystone-db-sync-4mdbk\" (UID: \"9477b3a0-b4e2-4315-ba8a-37d389880da9\") " pod="openstack/keystone-db-sync-4mdbk" Feb 16 23:03:46 crc kubenswrapper[4865]: I0216 23:03:46.976690 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/277094e2-0b6c-42e2-bcc6-d6afebb1bec1-operator-scripts\") pod \"neutron-a307-account-create-update-fccrh\" (UID: \"277094e2-0b6c-42e2-bcc6-d6afebb1bec1\") " pod="openstack/neutron-a307-account-create-update-fccrh" Feb 16 23:03:46 crc kubenswrapper[4865]: I0216 23:03:46.976822 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkm7j\" (UniqueName: \"kubernetes.io/projected/9477b3a0-b4e2-4315-ba8a-37d389880da9-kube-api-access-mkm7j\") pod \"keystone-db-sync-4mdbk\" 
(UID: \"9477b3a0-b4e2-4315-ba8a-37d389880da9\") " pod="openstack/keystone-db-sync-4mdbk" Feb 16 23:03:46 crc kubenswrapper[4865]: I0216 23:03:46.980500 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9477b3a0-b4e2-4315-ba8a-37d389880da9-combined-ca-bundle\") pod \"keystone-db-sync-4mdbk\" (UID: \"9477b3a0-b4e2-4315-ba8a-37d389880da9\") " pod="openstack/keystone-db-sync-4mdbk" Feb 16 23:03:46 crc kubenswrapper[4865]: I0216 23:03:46.981030 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9477b3a0-b4e2-4315-ba8a-37d389880da9-config-data\") pod \"keystone-db-sync-4mdbk\" (UID: \"9477b3a0-b4e2-4315-ba8a-37d389880da9\") " pod="openstack/keystone-db-sync-4mdbk" Feb 16 23:03:46 crc kubenswrapper[4865]: I0216 23:03:46.989418 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-fd20-account-create-update-glcp6" Feb 16 23:03:46 crc kubenswrapper[4865]: I0216 23:03:46.997687 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkm7j\" (UniqueName: \"kubernetes.io/projected/9477b3a0-b4e2-4315-ba8a-37d389880da9-kube-api-access-mkm7j\") pod \"keystone-db-sync-4mdbk\" (UID: \"9477b3a0-b4e2-4315-ba8a-37d389880da9\") " pod="openstack/keystone-db-sync-4mdbk" Feb 16 23:03:47 crc kubenswrapper[4865]: I0216 23:03:47.079078 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ffv8\" (UniqueName: \"kubernetes.io/projected/277094e2-0b6c-42e2-bcc6-d6afebb1bec1-kube-api-access-7ffv8\") pod \"neutron-a307-account-create-update-fccrh\" (UID: \"277094e2-0b6c-42e2-bcc6-d6afebb1bec1\") " pod="openstack/neutron-a307-account-create-update-fccrh" Feb 16 23:03:47 crc kubenswrapper[4865]: I0216 23:03:47.079159 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/277094e2-0b6c-42e2-bcc6-d6afebb1bec1-operator-scripts\") pod \"neutron-a307-account-create-update-fccrh\" (UID: \"277094e2-0b6c-42e2-bcc6-d6afebb1bec1\") " pod="openstack/neutron-a307-account-create-update-fccrh" Feb 16 23:03:47 crc kubenswrapper[4865]: I0216 23:03:47.080369 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/277094e2-0b6c-42e2-bcc6-d6afebb1bec1-operator-scripts\") pod \"neutron-a307-account-create-update-fccrh\" (UID: \"277094e2-0b6c-42e2-bcc6-d6afebb1bec1\") " pod="openstack/neutron-a307-account-create-update-fccrh" Feb 16 23:03:47 crc kubenswrapper[4865]: I0216 23:03:47.081674 4865 generic.go:334] "Generic (PLEG): container finished" podID="354e44fb-0dd9-4935-b7b6-da13d52fb91c" containerID="666baa91ced7aab1f1c04a9b58f8a20da2d8fc4a4f0ee80ff47a3c1c55c016ff" exitCode=0 Feb 16 23:03:47 crc kubenswrapper[4865]: I0216 23:03:47.081750 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-2c6pd" event={"ID":"354e44fb-0dd9-4935-b7b6-da13d52fb91c","Type":"ContainerDied","Data":"666baa91ced7aab1f1c04a9b58f8a20da2d8fc4a4f0ee80ff47a3c1c55c016ff"} Feb 16 23:03:47 crc kubenswrapper[4865]: I0216 23:03:47.095882 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ffv8\" (UniqueName: \"kubernetes.io/projected/277094e2-0b6c-42e2-bcc6-d6afebb1bec1-kube-api-access-7ffv8\") pod \"neutron-a307-account-create-update-fccrh\" (UID: \"277094e2-0b6c-42e2-bcc6-d6afebb1bec1\") " pod="openstack/neutron-a307-account-create-update-fccrh" Feb 16 23:03:47 crc kubenswrapper[4865]: I0216 23:03:47.120016 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-2c6pd" podUID="354e44fb-0dd9-4935-b7b6-da13d52fb91c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.121:5353: connect: connection refused" Feb 16 23:03:47 crc 
kubenswrapper[4865]: I0216 23:03:47.134834 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-4mdbk" Feb 16 23:03:47 crc kubenswrapper[4865]: I0216 23:03:47.313768 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-a307-account-create-update-fccrh" Feb 16 23:03:47 crc kubenswrapper[4865]: I0216 23:03:47.316550 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-bb7dg"] Feb 16 23:03:47 crc kubenswrapper[4865]: W0216 23:03:47.349867 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf977bdae_bdb8_4a49_83e1_55e7264f274b.slice/crio-53637a47ea4355093d40bef42e38a1adf24358510321a87cdbaac8a6bc994f18 WatchSource:0}: Error finding container 53637a47ea4355093d40bef42e38a1adf24358510321a87cdbaac8a6bc994f18: Status 404 returned error can't find the container with id 53637a47ea4355093d40bef42e38a1adf24358510321a87cdbaac8a6bc994f18 Feb 16 23:03:47 crc kubenswrapper[4865]: I0216 23:03:47.746712 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-2c6pd" Feb 16 23:03:47 crc kubenswrapper[4865]: I0216 23:03:47.801554 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/354e44fb-0dd9-4935-b7b6-da13d52fb91c-ovsdbserver-nb\") pod \"354e44fb-0dd9-4935-b7b6-da13d52fb91c\" (UID: \"354e44fb-0dd9-4935-b7b6-da13d52fb91c\") " Feb 16 23:03:47 crc kubenswrapper[4865]: I0216 23:03:47.801625 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqgcq\" (UniqueName: \"kubernetes.io/projected/354e44fb-0dd9-4935-b7b6-da13d52fb91c-kube-api-access-rqgcq\") pod \"354e44fb-0dd9-4935-b7b6-da13d52fb91c\" (UID: \"354e44fb-0dd9-4935-b7b6-da13d52fb91c\") " Feb 16 23:03:47 crc kubenswrapper[4865]: I0216 23:03:47.801813 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/354e44fb-0dd9-4935-b7b6-da13d52fb91c-config\") pod \"354e44fb-0dd9-4935-b7b6-da13d52fb91c\" (UID: \"354e44fb-0dd9-4935-b7b6-da13d52fb91c\") " Feb 16 23:03:47 crc kubenswrapper[4865]: I0216 23:03:47.801949 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/354e44fb-0dd9-4935-b7b6-da13d52fb91c-ovsdbserver-sb\") pod \"354e44fb-0dd9-4935-b7b6-da13d52fb91c\" (UID: \"354e44fb-0dd9-4935-b7b6-da13d52fb91c\") " Feb 16 23:03:47 crc kubenswrapper[4865]: I0216 23:03:47.802025 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/354e44fb-0dd9-4935-b7b6-da13d52fb91c-dns-svc\") pod \"354e44fb-0dd9-4935-b7b6-da13d52fb91c\" (UID: \"354e44fb-0dd9-4935-b7b6-da13d52fb91c\") " Feb 16 23:03:47 crc kubenswrapper[4865]: I0216 23:03:47.839195 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/354e44fb-0dd9-4935-b7b6-da13d52fb91c-kube-api-access-rqgcq" (OuterVolumeSpecName: "kube-api-access-rqgcq") pod "354e44fb-0dd9-4935-b7b6-da13d52fb91c" (UID: "354e44fb-0dd9-4935-b7b6-da13d52fb91c"). InnerVolumeSpecName "kube-api-access-rqgcq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:03:47 crc kubenswrapper[4865]: I0216 23:03:47.860211 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/354e44fb-0dd9-4935-b7b6-da13d52fb91c-config" (OuterVolumeSpecName: "config") pod "354e44fb-0dd9-4935-b7b6-da13d52fb91c" (UID: "354e44fb-0dd9-4935-b7b6-da13d52fb91c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:03:47 crc kubenswrapper[4865]: I0216 23:03:47.867641 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/354e44fb-0dd9-4935-b7b6-da13d52fb91c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "354e44fb-0dd9-4935-b7b6-da13d52fb91c" (UID: "354e44fb-0dd9-4935-b7b6-da13d52fb91c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:03:47 crc kubenswrapper[4865]: I0216 23:03:47.868031 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/354e44fb-0dd9-4935-b7b6-da13d52fb91c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "354e44fb-0dd9-4935-b7b6-da13d52fb91c" (UID: "354e44fb-0dd9-4935-b7b6-da13d52fb91c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:03:47 crc kubenswrapper[4865]: I0216 23:03:47.883946 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/354e44fb-0dd9-4935-b7b6-da13d52fb91c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "354e44fb-0dd9-4935-b7b6-da13d52fb91c" (UID: "354e44fb-0dd9-4935-b7b6-da13d52fb91c"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:03:47 crc kubenswrapper[4865]: I0216 23:03:47.905585 4865 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/354e44fb-0dd9-4935-b7b6-da13d52fb91c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 23:03:47 crc kubenswrapper[4865]: I0216 23:03:47.905641 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqgcq\" (UniqueName: \"kubernetes.io/projected/354e44fb-0dd9-4935-b7b6-da13d52fb91c-kube-api-access-rqgcq\") on node \"crc\" DevicePath \"\"" Feb 16 23:03:47 crc kubenswrapper[4865]: I0216 23:03:47.905659 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/354e44fb-0dd9-4935-b7b6-da13d52fb91c-config\") on node \"crc\" DevicePath \"\"" Feb 16 23:03:47 crc kubenswrapper[4865]: I0216 23:03:47.905672 4865 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/354e44fb-0dd9-4935-b7b6-da13d52fb91c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 23:03:47 crc kubenswrapper[4865]: I0216 23:03:47.905687 4865 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/354e44fb-0dd9-4935-b7b6-da13d52fb91c-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 23:03:48 crc kubenswrapper[4865]: I0216 23:03:48.070547 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-fd20-account-create-update-glcp6"] Feb 16 23:03:48 crc kubenswrapper[4865]: W0216 23:03:48.082460 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1df01bc8_2180_4799_b5bc_786690440fca.slice/crio-992689331d28695e5ce3cbedcd55f0c7a4811e05b90abcae420919e95c137e4a WatchSource:0}: Error finding container 992689331d28695e5ce3cbedcd55f0c7a4811e05b90abcae420919e95c137e4a: Status 404 returned error 
can't find the container with id 992689331d28695e5ce3cbedcd55f0c7a4811e05b90abcae420919e95c137e4a Feb 16 23:03:48 crc kubenswrapper[4865]: I0216 23:03:48.097757 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-bb7dg" event={"ID":"f977bdae-bdb8-4a49-83e1-55e7264f274b","Type":"ContainerStarted","Data":"1fbbce23d24ee24a5d38655a3a93db576cd95ad4bb64e1fb1eadb6a57f62e920"} Feb 16 23:03:48 crc kubenswrapper[4865]: I0216 23:03:48.097836 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-bb7dg" event={"ID":"f977bdae-bdb8-4a49-83e1-55e7264f274b","Type":"ContainerStarted","Data":"53637a47ea4355093d40bef42e38a1adf24358510321a87cdbaac8a6bc994f18"} Feb 16 23:03:48 crc kubenswrapper[4865]: I0216 23:03:48.102938 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"34486574-e35d-4674-a0b3-57d122050e66","Type":"ContainerStarted","Data":"4df879a017e4b17f0d4e86a860670d21ba894e51ce6c2e4055d8b8235ef21566"} Feb 16 23:03:48 crc kubenswrapper[4865]: I0216 23:03:48.102986 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"34486574-e35d-4674-a0b3-57d122050e66","Type":"ContainerStarted","Data":"78dbed43bc15af95b97b293d5d140cc333cb133f25e4751140b4d005230f0b71"} Feb 16 23:03:48 crc kubenswrapper[4865]: I0216 23:03:48.106928 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-fd20-account-create-update-glcp6" event={"ID":"1df01bc8-2180-4799-b5bc-786690440fca","Type":"ContainerStarted","Data":"992689331d28695e5ce3cbedcd55f0c7a4811e05b90abcae420919e95c137e4a"} Feb 16 23:03:48 crc kubenswrapper[4865]: I0216 23:03:48.125089 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-h7whk"] Feb 16 23:03:48 crc kubenswrapper[4865]: I0216 23:03:48.127699 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-2c6pd" 
event={"ID":"354e44fb-0dd9-4935-b7b6-da13d52fb91c","Type":"ContainerDied","Data":"6f58a2ff101d0d85b8223d4268a045568b756837809f7708e3c1d1a4f9ff662d"} Feb 16 23:03:48 crc kubenswrapper[4865]: I0216 23:03:48.127878 4865 scope.go:117] "RemoveContainer" containerID="666baa91ced7aab1f1c04a9b58f8a20da2d8fc4a4f0ee80ff47a3c1c55c016ff" Feb 16 23:03:48 crc kubenswrapper[4865]: I0216 23:03:48.128060 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-2c6pd" Feb 16 23:03:48 crc kubenswrapper[4865]: I0216 23:03:48.134709 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-bb7dg" podStartSLOduration=2.134675296 podStartE2EDuration="2.134675296s" podCreationTimestamp="2026-02-16 23:03:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 23:03:48.123174652 +0000 UTC m=+1068.446881613" watchObservedRunningTime="2026-02-16 23:03:48.134675296 +0000 UTC m=+1068.458382247" Feb 16 23:03:48 crc kubenswrapper[4865]: I0216 23:03:48.224767 4865 scope.go:117] "RemoveContainer" containerID="4b41d04745647e56c67ca04a28da2cb9f11bdd2ffd14976f34616991d1668c63" Feb 16 23:03:48 crc kubenswrapper[4865]: I0216 23:03:48.225784 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-2c6pd"] Feb 16 23:03:48 crc kubenswrapper[4865]: I0216 23:03:48.235258 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-2c6pd"] Feb 16 23:03:48 crc kubenswrapper[4865]: I0216 23:03:48.261140 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-4mdbk"] Feb 16 23:03:48 crc kubenswrapper[4865]: I0216 23:03:48.285739 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-z9hq9"] Feb 16 23:03:48 crc kubenswrapper[4865]: I0216 23:03:48.292608 4865 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/neutron-a307-account-create-update-fccrh"] Feb 16 23:03:48 crc kubenswrapper[4865]: I0216 23:03:48.298692 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-d54b-account-create-update-zmm8f"] Feb 16 23:03:48 crc kubenswrapper[4865]: W0216 23:03:48.311832 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd9bf5a97_df75_436c_b2bf_6b64b55a071e.slice/crio-aab5c98f0b19d70018bc83f52cbc42a17623eb9a53fbcb436aa35c608ea5f65b WatchSource:0}: Error finding container aab5c98f0b19d70018bc83f52cbc42a17623eb9a53fbcb436aa35c608ea5f65b: Status 404 returned error can't find the container with id aab5c98f0b19d70018bc83f52cbc42a17623eb9a53fbcb436aa35c608ea5f65b Feb 16 23:03:48 crc kubenswrapper[4865]: I0216 23:03:48.431569 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="354e44fb-0dd9-4935-b7b6-da13d52fb91c" path="/var/lib/kubelet/pods/354e44fb-0dd9-4935-b7b6-da13d52fb91c/volumes" Feb 16 23:03:49 crc kubenswrapper[4865]: I0216 23:03:49.142181 4865 generic.go:334] "Generic (PLEG): container finished" podID="f5828317-93f7-47f3-9769-8dae9b438530" containerID="33fbbf14f436417711a6ace16dae3386e12d3e281feb2cc7fe189550261e1121" exitCode=0 Feb 16 23:03:49 crc kubenswrapper[4865]: I0216 23:03:49.142624 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-z9hq9" event={"ID":"f5828317-93f7-47f3-9769-8dae9b438530","Type":"ContainerDied","Data":"33fbbf14f436417711a6ace16dae3386e12d3e281feb2cc7fe189550261e1121"} Feb 16 23:03:49 crc kubenswrapper[4865]: I0216 23:03:49.143054 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-z9hq9" event={"ID":"f5828317-93f7-47f3-9769-8dae9b438530","Type":"ContainerStarted","Data":"a1c793481466c88e1c0d6d059254dad765d6cf5ce142e0ec58dadd365a788c74"} Feb 16 23:03:49 crc kubenswrapper[4865]: I0216 23:03:49.144710 4865 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-4mdbk" event={"ID":"9477b3a0-b4e2-4315-ba8a-37d389880da9","Type":"ContainerStarted","Data":"200463f36e585529a48b325f3843b1e1b6de1e7053ca79096190060b6c894eac"} Feb 16 23:03:49 crc kubenswrapper[4865]: I0216 23:03:49.147727 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"34486574-e35d-4674-a0b3-57d122050e66","Type":"ContainerStarted","Data":"44d68c72ccf9d07233f481ca473656b3ed06adea7d3f3c0ad2a24cfa6ba9036a"} Feb 16 23:03:49 crc kubenswrapper[4865]: I0216 23:03:49.147750 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"34486574-e35d-4674-a0b3-57d122050e66","Type":"ContainerStarted","Data":"8648310af65bef1318c81413c15f541d584950b10598bed235888074481725f0"} Feb 16 23:03:49 crc kubenswrapper[4865]: I0216 23:03:49.150363 4865 generic.go:334] "Generic (PLEG): container finished" podID="1df01bc8-2180-4799-b5bc-786690440fca" containerID="52ba73fd47f97c391b846beb424304dafec4565dd8010251cba005c761d0644d" exitCode=0 Feb 16 23:03:49 crc kubenswrapper[4865]: I0216 23:03:49.150419 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-fd20-account-create-update-glcp6" event={"ID":"1df01bc8-2180-4799-b5bc-786690440fca","Type":"ContainerDied","Data":"52ba73fd47f97c391b846beb424304dafec4565dd8010251cba005c761d0644d"} Feb 16 23:03:49 crc kubenswrapper[4865]: I0216 23:03:49.158134 4865 generic.go:334] "Generic (PLEG): container finished" podID="277094e2-0b6c-42e2-bcc6-d6afebb1bec1" containerID="3562625e809c35f46d69a71d749fa03e03709ac25ea9b3b159440c97dbb824ff" exitCode=0 Feb 16 23:03:49 crc kubenswrapper[4865]: I0216 23:03:49.158200 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-a307-account-create-update-fccrh" 
event={"ID":"277094e2-0b6c-42e2-bcc6-d6afebb1bec1","Type":"ContainerDied","Data":"3562625e809c35f46d69a71d749fa03e03709ac25ea9b3b159440c97dbb824ff"} Feb 16 23:03:49 crc kubenswrapper[4865]: I0216 23:03:49.158225 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-a307-account-create-update-fccrh" event={"ID":"277094e2-0b6c-42e2-bcc6-d6afebb1bec1","Type":"ContainerStarted","Data":"a3dc3af0257e7f3d443d1b5440223933dab1dafc06933c7db3e05f0562aef1b1"} Feb 16 23:03:49 crc kubenswrapper[4865]: I0216 23:03:49.160326 4865 generic.go:334] "Generic (PLEG): container finished" podID="d7bfd6f5-8048-45ee-a942-dd66d72bcf0e" containerID="a9c4de3fb21556c2aaa1d161168bfe835b1b22b7a92fbd9783a629a2de02b2f7" exitCode=0 Feb 16 23:03:49 crc kubenswrapper[4865]: I0216 23:03:49.160394 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-h7whk" event={"ID":"d7bfd6f5-8048-45ee-a942-dd66d72bcf0e","Type":"ContainerDied","Data":"a9c4de3fb21556c2aaa1d161168bfe835b1b22b7a92fbd9783a629a2de02b2f7"} Feb 16 23:03:49 crc kubenswrapper[4865]: I0216 23:03:49.160425 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-h7whk" event={"ID":"d7bfd6f5-8048-45ee-a942-dd66d72bcf0e","Type":"ContainerStarted","Data":"6927c67f2d6d4305d42ad03d7ef8c7246e99d5081fda9b9dbe00eaf911aa5fce"} Feb 16 23:03:49 crc kubenswrapper[4865]: I0216 23:03:49.161954 4865 generic.go:334] "Generic (PLEG): container finished" podID="d9bf5a97-df75-436c-b2bf-6b64b55a071e" containerID="cae98d5541f321e40ea17149133353d092fe67ab34a2fec1e45f3053bc089a81" exitCode=0 Feb 16 23:03:49 crc kubenswrapper[4865]: I0216 23:03:49.162047 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-d54b-account-create-update-zmm8f" event={"ID":"d9bf5a97-df75-436c-b2bf-6b64b55a071e","Type":"ContainerDied","Data":"cae98d5541f321e40ea17149133353d092fe67ab34a2fec1e45f3053bc089a81"} Feb 16 23:03:49 crc kubenswrapper[4865]: I0216 23:03:49.162078 4865 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-d54b-account-create-update-zmm8f" event={"ID":"d9bf5a97-df75-436c-b2bf-6b64b55a071e","Type":"ContainerStarted","Data":"aab5c98f0b19d70018bc83f52cbc42a17623eb9a53fbcb436aa35c608ea5f65b"} Feb 16 23:03:49 crc kubenswrapper[4865]: I0216 23:03:49.163705 4865 generic.go:334] "Generic (PLEG): container finished" podID="f977bdae-bdb8-4a49-83e1-55e7264f274b" containerID="1fbbce23d24ee24a5d38655a3a93db576cd95ad4bb64e1fb1eadb6a57f62e920" exitCode=0 Feb 16 23:03:49 crc kubenswrapper[4865]: I0216 23:03:49.163745 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-bb7dg" event={"ID":"f977bdae-bdb8-4a49-83e1-55e7264f274b","Type":"ContainerDied","Data":"1fbbce23d24ee24a5d38655a3a93db576cd95ad4bb64e1fb1eadb6a57f62e920"} Feb 16 23:03:50 crc kubenswrapper[4865]: I0216 23:03:50.204087 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"34486574-e35d-4674-a0b3-57d122050e66","Type":"ContainerStarted","Data":"b786760fca9320e6f521de431fcb6a4656d925c2994575e4d34343e36d23b304"} Feb 16 23:03:51 crc kubenswrapper[4865]: I0216 23:03:51.225458 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"34486574-e35d-4674-a0b3-57d122050e66","Type":"ContainerStarted","Data":"7048463d2fc8dd2fb6eb2d093d01ccaef79386d7ea61ab5da9affbed1d4f7301"} Feb 16 23:03:54 crc kubenswrapper[4865]: I0216 23:03:54.263778 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-a307-account-create-update-fccrh" event={"ID":"277094e2-0b6c-42e2-bcc6-d6afebb1bec1","Type":"ContainerDied","Data":"a3dc3af0257e7f3d443d1b5440223933dab1dafc06933c7db3e05f0562aef1b1"} Feb 16 23:03:54 crc kubenswrapper[4865]: I0216 23:03:54.264229 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3dc3af0257e7f3d443d1b5440223933dab1dafc06933c7db3e05f0562aef1b1" Feb 16 23:03:54 crc 
kubenswrapper[4865]: I0216 23:03:54.266065 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-h7whk" event={"ID":"d7bfd6f5-8048-45ee-a942-dd66d72bcf0e","Type":"ContainerDied","Data":"6927c67f2d6d4305d42ad03d7ef8c7246e99d5081fda9b9dbe00eaf911aa5fce"} Feb 16 23:03:54 crc kubenswrapper[4865]: I0216 23:03:54.266112 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6927c67f2d6d4305d42ad03d7ef8c7246e99d5081fda9b9dbe00eaf911aa5fce" Feb 16 23:03:54 crc kubenswrapper[4865]: I0216 23:03:54.268014 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-d54b-account-create-update-zmm8f" event={"ID":"d9bf5a97-df75-436c-b2bf-6b64b55a071e","Type":"ContainerDied","Data":"aab5c98f0b19d70018bc83f52cbc42a17623eb9a53fbcb436aa35c608ea5f65b"} Feb 16 23:03:54 crc kubenswrapper[4865]: I0216 23:03:54.268068 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aab5c98f0b19d70018bc83f52cbc42a17623eb9a53fbcb436aa35c608ea5f65b" Feb 16 23:03:54 crc kubenswrapper[4865]: I0216 23:03:54.269978 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-bb7dg" event={"ID":"f977bdae-bdb8-4a49-83e1-55e7264f274b","Type":"ContainerDied","Data":"53637a47ea4355093d40bef42e38a1adf24358510321a87cdbaac8a6bc994f18"} Feb 16 23:03:54 crc kubenswrapper[4865]: I0216 23:03:54.270136 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53637a47ea4355093d40bef42e38a1adf24358510321a87cdbaac8a6bc994f18" Feb 16 23:03:54 crc kubenswrapper[4865]: I0216 23:03:54.272127 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-z9hq9" event={"ID":"f5828317-93f7-47f3-9769-8dae9b438530","Type":"ContainerDied","Data":"a1c793481466c88e1c0d6d059254dad765d6cf5ce142e0ec58dadd365a788c74"} Feb 16 23:03:54 crc kubenswrapper[4865]: I0216 23:03:54.272191 4865 pod_container_deletor.go:80] "Container not found in 
pod's containers" containerID="a1c793481466c88e1c0d6d059254dad765d6cf5ce142e0ec58dadd365a788c74" Feb 16 23:03:54 crc kubenswrapper[4865]: I0216 23:03:54.274413 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-fd20-account-create-update-glcp6" event={"ID":"1df01bc8-2180-4799-b5bc-786690440fca","Type":"ContainerDied","Data":"992689331d28695e5ce3cbedcd55f0c7a4811e05b90abcae420919e95c137e4a"} Feb 16 23:03:54 crc kubenswrapper[4865]: I0216 23:03:54.274458 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="992689331d28695e5ce3cbedcd55f0c7a4811e05b90abcae420919e95c137e4a" Feb 16 23:03:54 crc kubenswrapper[4865]: I0216 23:03:54.310536 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-h7whk" Feb 16 23:03:54 crc kubenswrapper[4865]: I0216 23:03:54.354928 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24qv4\" (UniqueName: \"kubernetes.io/projected/d7bfd6f5-8048-45ee-a942-dd66d72bcf0e-kube-api-access-24qv4\") pod \"d7bfd6f5-8048-45ee-a942-dd66d72bcf0e\" (UID: \"d7bfd6f5-8048-45ee-a942-dd66d72bcf0e\") " Feb 16 23:03:54 crc kubenswrapper[4865]: I0216 23:03:54.355113 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7bfd6f5-8048-45ee-a942-dd66d72bcf0e-operator-scripts\") pod \"d7bfd6f5-8048-45ee-a942-dd66d72bcf0e\" (UID: \"d7bfd6f5-8048-45ee-a942-dd66d72bcf0e\") " Feb 16 23:03:54 crc kubenswrapper[4865]: I0216 23:03:54.356519 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7bfd6f5-8048-45ee-a942-dd66d72bcf0e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d7bfd6f5-8048-45ee-a942-dd66d72bcf0e" (UID: "d7bfd6f5-8048-45ee-a942-dd66d72bcf0e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:03:54 crc kubenswrapper[4865]: I0216 23:03:54.356859 4865 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7bfd6f5-8048-45ee-a942-dd66d72bcf0e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 23:03:54 crc kubenswrapper[4865]: I0216 23:03:54.356921 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-a307-account-create-update-fccrh" Feb 16 23:03:54 crc kubenswrapper[4865]: I0216 23:03:54.363594 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7bfd6f5-8048-45ee-a942-dd66d72bcf0e-kube-api-access-24qv4" (OuterVolumeSpecName: "kube-api-access-24qv4") pod "d7bfd6f5-8048-45ee-a942-dd66d72bcf0e" (UID: "d7bfd6f5-8048-45ee-a942-dd66d72bcf0e"). InnerVolumeSpecName "kube-api-access-24qv4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:03:54 crc kubenswrapper[4865]: I0216 23:03:54.413853 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-fd20-account-create-update-glcp6" Feb 16 23:03:54 crc kubenswrapper[4865]: I0216 23:03:54.447955 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-z9hq9" Feb 16 23:03:54 crc kubenswrapper[4865]: I0216 23:03:54.457677 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxl2v\" (UniqueName: \"kubernetes.io/projected/1df01bc8-2180-4799-b5bc-786690440fca-kube-api-access-hxl2v\") pod \"1df01bc8-2180-4799-b5bc-786690440fca\" (UID: \"1df01bc8-2180-4799-b5bc-786690440fca\") " Feb 16 23:03:54 crc kubenswrapper[4865]: I0216 23:03:54.457876 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/277094e2-0b6c-42e2-bcc6-d6afebb1bec1-operator-scripts\") pod \"277094e2-0b6c-42e2-bcc6-d6afebb1bec1\" (UID: \"277094e2-0b6c-42e2-bcc6-d6afebb1bec1\") " Feb 16 23:03:54 crc kubenswrapper[4865]: I0216 23:03:54.457931 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1df01bc8-2180-4799-b5bc-786690440fca-operator-scripts\") pod \"1df01bc8-2180-4799-b5bc-786690440fca\" (UID: \"1df01bc8-2180-4799-b5bc-786690440fca\") " Feb 16 23:03:54 crc kubenswrapper[4865]: I0216 23:03:54.457999 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ffv8\" (UniqueName: \"kubernetes.io/projected/277094e2-0b6c-42e2-bcc6-d6afebb1bec1-kube-api-access-7ffv8\") pod \"277094e2-0b6c-42e2-bcc6-d6afebb1bec1\" (UID: \"277094e2-0b6c-42e2-bcc6-d6afebb1bec1\") " Feb 16 23:03:54 crc kubenswrapper[4865]: I0216 23:03:54.458706 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24qv4\" (UniqueName: \"kubernetes.io/projected/d7bfd6f5-8048-45ee-a942-dd66d72bcf0e-kube-api-access-24qv4\") on node \"crc\" DevicePath \"\"" Feb 16 23:03:54 crc kubenswrapper[4865]: I0216 23:03:54.459682 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/1df01bc8-2180-4799-b5bc-786690440fca-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1df01bc8-2180-4799-b5bc-786690440fca" (UID: "1df01bc8-2180-4799-b5bc-786690440fca"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:03:54 crc kubenswrapper[4865]: I0216 23:03:54.460221 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/277094e2-0b6c-42e2-bcc6-d6afebb1bec1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "277094e2-0b6c-42e2-bcc6-d6afebb1bec1" (UID: "277094e2-0b6c-42e2-bcc6-d6afebb1bec1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:03:54 crc kubenswrapper[4865]: I0216 23:03:54.470111 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1df01bc8-2180-4799-b5bc-786690440fca-kube-api-access-hxl2v" (OuterVolumeSpecName: "kube-api-access-hxl2v") pod "1df01bc8-2180-4799-b5bc-786690440fca" (UID: "1df01bc8-2180-4799-b5bc-786690440fca"). InnerVolumeSpecName "kube-api-access-hxl2v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:03:54 crc kubenswrapper[4865]: I0216 23:03:54.472550 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-bb7dg" Feb 16 23:03:54 crc kubenswrapper[4865]: I0216 23:03:54.481188 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-d54b-account-create-update-zmm8f" Feb 16 23:03:54 crc kubenswrapper[4865]: I0216 23:03:54.481316 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/277094e2-0b6c-42e2-bcc6-d6afebb1bec1-kube-api-access-7ffv8" (OuterVolumeSpecName: "kube-api-access-7ffv8") pod "277094e2-0b6c-42e2-bcc6-d6afebb1bec1" (UID: "277094e2-0b6c-42e2-bcc6-d6afebb1bec1"). 
InnerVolumeSpecName "kube-api-access-7ffv8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:03:54 crc kubenswrapper[4865]: I0216 23:03:54.562013 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6w86d\" (UniqueName: \"kubernetes.io/projected/f977bdae-bdb8-4a49-83e1-55e7264f274b-kube-api-access-6w86d\") pod \"f977bdae-bdb8-4a49-83e1-55e7264f274b\" (UID: \"f977bdae-bdb8-4a49-83e1-55e7264f274b\") " Feb 16 23:03:54 crc kubenswrapper[4865]: I0216 23:03:54.562096 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5828317-93f7-47f3-9769-8dae9b438530-operator-scripts\") pod \"f5828317-93f7-47f3-9769-8dae9b438530\" (UID: \"f5828317-93f7-47f3-9769-8dae9b438530\") " Feb 16 23:03:54 crc kubenswrapper[4865]: I0216 23:03:54.562254 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7m78\" (UniqueName: \"kubernetes.io/projected/f5828317-93f7-47f3-9769-8dae9b438530-kube-api-access-f7m78\") pod \"f5828317-93f7-47f3-9769-8dae9b438530\" (UID: \"f5828317-93f7-47f3-9769-8dae9b438530\") " Feb 16 23:03:54 crc kubenswrapper[4865]: I0216 23:03:54.562369 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d9bf5a97-df75-436c-b2bf-6b64b55a071e-operator-scripts\") pod \"d9bf5a97-df75-436c-b2bf-6b64b55a071e\" (UID: \"d9bf5a97-df75-436c-b2bf-6b64b55a071e\") " Feb 16 23:03:54 crc kubenswrapper[4865]: I0216 23:03:54.562427 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrmdt\" (UniqueName: \"kubernetes.io/projected/d9bf5a97-df75-436c-b2bf-6b64b55a071e-kube-api-access-nrmdt\") pod \"d9bf5a97-df75-436c-b2bf-6b64b55a071e\" (UID: \"d9bf5a97-df75-436c-b2bf-6b64b55a071e\") " Feb 16 23:03:54 crc kubenswrapper[4865]: I0216 23:03:54.562525 4865 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f977bdae-bdb8-4a49-83e1-55e7264f274b-operator-scripts\") pod \"f977bdae-bdb8-4a49-83e1-55e7264f274b\" (UID: \"f977bdae-bdb8-4a49-83e1-55e7264f274b\") " Feb 16 23:03:54 crc kubenswrapper[4865]: I0216 23:03:54.562892 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxl2v\" (UniqueName: \"kubernetes.io/projected/1df01bc8-2180-4799-b5bc-786690440fca-kube-api-access-hxl2v\") on node \"crc\" DevicePath \"\"" Feb 16 23:03:54 crc kubenswrapper[4865]: I0216 23:03:54.562913 4865 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/277094e2-0b6c-42e2-bcc6-d6afebb1bec1-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 23:03:54 crc kubenswrapper[4865]: I0216 23:03:54.562924 4865 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1df01bc8-2180-4799-b5bc-786690440fca-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 23:03:54 crc kubenswrapper[4865]: I0216 23:03:54.562936 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ffv8\" (UniqueName: \"kubernetes.io/projected/277094e2-0b6c-42e2-bcc6-d6afebb1bec1-kube-api-access-7ffv8\") on node \"crc\" DevicePath \"\"" Feb 16 23:03:54 crc kubenswrapper[4865]: I0216 23:03:54.563400 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f977bdae-bdb8-4a49-83e1-55e7264f274b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f977bdae-bdb8-4a49-83e1-55e7264f274b" (UID: "f977bdae-bdb8-4a49-83e1-55e7264f274b"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:03:54 crc kubenswrapper[4865]: I0216 23:03:54.564765 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9bf5a97-df75-436c-b2bf-6b64b55a071e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d9bf5a97-df75-436c-b2bf-6b64b55a071e" (UID: "d9bf5a97-df75-436c-b2bf-6b64b55a071e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:03:54 crc kubenswrapper[4865]: I0216 23:03:54.564879 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5828317-93f7-47f3-9769-8dae9b438530-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f5828317-93f7-47f3-9769-8dae9b438530" (UID: "f5828317-93f7-47f3-9769-8dae9b438530"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:03:54 crc kubenswrapper[4865]: I0216 23:03:54.567675 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5828317-93f7-47f3-9769-8dae9b438530-kube-api-access-f7m78" (OuterVolumeSpecName: "kube-api-access-f7m78") pod "f5828317-93f7-47f3-9769-8dae9b438530" (UID: "f5828317-93f7-47f3-9769-8dae9b438530"). InnerVolumeSpecName "kube-api-access-f7m78". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:03:54 crc kubenswrapper[4865]: I0216 23:03:54.567748 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f977bdae-bdb8-4a49-83e1-55e7264f274b-kube-api-access-6w86d" (OuterVolumeSpecName: "kube-api-access-6w86d") pod "f977bdae-bdb8-4a49-83e1-55e7264f274b" (UID: "f977bdae-bdb8-4a49-83e1-55e7264f274b"). InnerVolumeSpecName "kube-api-access-6w86d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:03:54 crc kubenswrapper[4865]: I0216 23:03:54.568533 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9bf5a97-df75-436c-b2bf-6b64b55a071e-kube-api-access-nrmdt" (OuterVolumeSpecName: "kube-api-access-nrmdt") pod "d9bf5a97-df75-436c-b2bf-6b64b55a071e" (UID: "d9bf5a97-df75-436c-b2bf-6b64b55a071e"). InnerVolumeSpecName "kube-api-access-nrmdt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:03:54 crc kubenswrapper[4865]: I0216 23:03:54.665510 4865 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d9bf5a97-df75-436c-b2bf-6b64b55a071e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 23:03:54 crc kubenswrapper[4865]: I0216 23:03:54.665579 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nrmdt\" (UniqueName: \"kubernetes.io/projected/d9bf5a97-df75-436c-b2bf-6b64b55a071e-kube-api-access-nrmdt\") on node \"crc\" DevicePath \"\"" Feb 16 23:03:54 crc kubenswrapper[4865]: I0216 23:03:54.665611 4865 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f977bdae-bdb8-4a49-83e1-55e7264f274b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 23:03:54 crc kubenswrapper[4865]: I0216 23:03:54.665638 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6w86d\" (UniqueName: \"kubernetes.io/projected/f977bdae-bdb8-4a49-83e1-55e7264f274b-kube-api-access-6w86d\") on node \"crc\" DevicePath \"\"" Feb 16 23:03:54 crc kubenswrapper[4865]: I0216 23:03:54.665665 4865 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5828317-93f7-47f3-9769-8dae9b438530-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 23:03:54 crc kubenswrapper[4865]: I0216 23:03:54.665690 4865 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-f7m78\" (UniqueName: \"kubernetes.io/projected/f5828317-93f7-47f3-9769-8dae9b438530-kube-api-access-f7m78\") on node \"crc\" DevicePath \"\"" Feb 16 23:03:55 crc kubenswrapper[4865]: I0216 23:03:55.290750 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-4mdbk" event={"ID":"9477b3a0-b4e2-4315-ba8a-37d389880da9","Type":"ContainerStarted","Data":"eb2e9d72b9438742c436514ace7803be9fce59aefe308d0d4fe76fa6bac731a2"} Feb 16 23:03:55 crc kubenswrapper[4865]: I0216 23:03:55.300873 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-a307-account-create-update-fccrh" Feb 16 23:03:55 crc kubenswrapper[4865]: I0216 23:03:55.300919 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-h7whk" Feb 16 23:03:55 crc kubenswrapper[4865]: I0216 23:03:55.300955 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-bb7dg" Feb 16 23:03:55 crc kubenswrapper[4865]: I0216 23:03:55.301332 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-z9hq9" Feb 16 23:03:55 crc kubenswrapper[4865]: I0216 23:03:55.301338 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-fd20-account-create-update-glcp6" Feb 16 23:03:55 crc kubenswrapper[4865]: I0216 23:03:55.300873 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-d54b-account-create-update-zmm8f" Feb 16 23:03:55 crc kubenswrapper[4865]: I0216 23:03:55.301076 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"34486574-e35d-4674-a0b3-57d122050e66","Type":"ContainerStarted","Data":"c2b0fbcefe43328135654cb22a48dff389ee1aa369015a19e897b94e1f8d582b"} Feb 16 23:03:55 crc kubenswrapper[4865]: I0216 23:03:55.304695 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"34486574-e35d-4674-a0b3-57d122050e66","Type":"ContainerStarted","Data":"8dcb7a0f14c8d0a89695a96b468b42ec0ac325ea7ce4b6bc63c3d4be56e08d54"} Feb 16 23:03:55 crc kubenswrapper[4865]: I0216 23:03:55.330224 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-4mdbk" podStartSLOduration=3.486427407 podStartE2EDuration="9.330194266s" podCreationTimestamp="2026-02-16 23:03:46 +0000 UTC" firstStartedPulling="2026-02-16 23:03:48.284327715 +0000 UTC m=+1068.608034676" lastFinishedPulling="2026-02-16 23:03:54.128094574 +0000 UTC m=+1074.451801535" observedRunningTime="2026-02-16 23:03:55.328525199 +0000 UTC m=+1075.652232180" watchObservedRunningTime="2026-02-16 23:03:55.330194266 +0000 UTC m=+1075.653901267" Feb 16 23:03:56 crc kubenswrapper[4865]: I0216 23:03:56.337612 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"34486574-e35d-4674-a0b3-57d122050e66","Type":"ContainerStarted","Data":"d8d81c79ef5614b40733570f6a65b9bd1b82afacc763adb273d8cc1aa4c3f54f"} Feb 16 23:03:56 crc kubenswrapper[4865]: I0216 23:03:56.338002 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"34486574-e35d-4674-a0b3-57d122050e66","Type":"ContainerStarted","Data":"6d276118909341f31c48ec1acf50883261216b08db42a469a5987a62fb730c09"} Feb 16 23:03:57 crc kubenswrapper[4865]: I0216 23:03:57.354705 4865 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"34486574-e35d-4674-a0b3-57d122050e66","Type":"ContainerStarted","Data":"1375b4762e74c4c578395cd6984bb047401b0758d6c1e8fc4ef370ef25ba7ece"} Feb 16 23:03:57 crc kubenswrapper[4865]: I0216 23:03:57.354796 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"34486574-e35d-4674-a0b3-57d122050e66","Type":"ContainerStarted","Data":"1981b2a7a23ddaa6014d9fc21c2c672df8454a7ffd027b95df13deb090030bbb"} Feb 16 23:03:58 crc kubenswrapper[4865]: I0216 23:03:58.380636 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"34486574-e35d-4674-a0b3-57d122050e66","Type":"ContainerStarted","Data":"1b52ca705dc878fc71f08dc9884ee5a9396266734f51a4800c8496eb61f35fa7"} Feb 16 23:03:58 crc kubenswrapper[4865]: I0216 23:03:58.381206 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"34486574-e35d-4674-a0b3-57d122050e66","Type":"ContainerStarted","Data":"4728e548b462875033160c797e9aa6597d51ae039e3635e2fa9093ceb1266635"} Feb 16 23:03:59 crc kubenswrapper[4865]: I0216 23:03:59.396968 4865 generic.go:334] "Generic (PLEG): container finished" podID="9477b3a0-b4e2-4315-ba8a-37d389880da9" containerID="eb2e9d72b9438742c436514ace7803be9fce59aefe308d0d4fe76fa6bac731a2" exitCode=0 Feb 16 23:03:59 crc kubenswrapper[4865]: I0216 23:03:59.397085 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-4mdbk" event={"ID":"9477b3a0-b4e2-4315-ba8a-37d389880da9","Type":"ContainerDied","Data":"eb2e9d72b9438742c436514ace7803be9fce59aefe308d0d4fe76fa6bac731a2"} Feb 16 23:03:59 crc kubenswrapper[4865]: I0216 23:03:59.432695 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"34486574-e35d-4674-a0b3-57d122050e66","Type":"ContainerStarted","Data":"8b873be3062b2b4722e2d92589d6bd36f29789c5b61e1a2dffe370f46e98ec21"} Feb 16 
23:03:59 crc kubenswrapper[4865]: I0216 23:03:59.501064 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=38.276599731 podStartE2EDuration="48.501034577s" podCreationTimestamp="2026-02-16 23:03:11 +0000 UTC" firstStartedPulling="2026-02-16 23:03:45.646265858 +0000 UTC m=+1065.969972839" lastFinishedPulling="2026-02-16 23:03:55.870700724 +0000 UTC m=+1076.194407685" observedRunningTime="2026-02-16 23:03:59.496926982 +0000 UTC m=+1079.820633963" watchObservedRunningTime="2026-02-16 23:03:59.501034577 +0000 UTC m=+1079.824741548" Feb 16 23:03:59 crc kubenswrapper[4865]: I0216 23:03:59.823838 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-rlxqr"] Feb 16 23:03:59 crc kubenswrapper[4865]: E0216 23:03:59.824231 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9bf5a97-df75-436c-b2bf-6b64b55a071e" containerName="mariadb-account-create-update" Feb 16 23:03:59 crc kubenswrapper[4865]: I0216 23:03:59.824249 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9bf5a97-df75-436c-b2bf-6b64b55a071e" containerName="mariadb-account-create-update" Feb 16 23:03:59 crc kubenswrapper[4865]: E0216 23:03:59.824307 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5828317-93f7-47f3-9769-8dae9b438530" containerName="mariadb-database-create" Feb 16 23:03:59 crc kubenswrapper[4865]: I0216 23:03:59.824320 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5828317-93f7-47f3-9769-8dae9b438530" containerName="mariadb-database-create" Feb 16 23:03:59 crc kubenswrapper[4865]: E0216 23:03:59.824340 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1df01bc8-2180-4799-b5bc-786690440fca" containerName="mariadb-account-create-update" Feb 16 23:03:59 crc kubenswrapper[4865]: I0216 23:03:59.824348 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="1df01bc8-2180-4799-b5bc-786690440fca" 
containerName="mariadb-account-create-update" Feb 16 23:03:59 crc kubenswrapper[4865]: E0216 23:03:59.824361 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="277094e2-0b6c-42e2-bcc6-d6afebb1bec1" containerName="mariadb-account-create-update" Feb 16 23:03:59 crc kubenswrapper[4865]: I0216 23:03:59.824368 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="277094e2-0b6c-42e2-bcc6-d6afebb1bec1" containerName="mariadb-account-create-update" Feb 16 23:03:59 crc kubenswrapper[4865]: E0216 23:03:59.824385 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f977bdae-bdb8-4a49-83e1-55e7264f274b" containerName="mariadb-database-create" Feb 16 23:03:59 crc kubenswrapper[4865]: I0216 23:03:59.824392 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="f977bdae-bdb8-4a49-83e1-55e7264f274b" containerName="mariadb-database-create" Feb 16 23:03:59 crc kubenswrapper[4865]: E0216 23:03:59.824406 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="354e44fb-0dd9-4935-b7b6-da13d52fb91c" containerName="dnsmasq-dns" Feb 16 23:03:59 crc kubenswrapper[4865]: I0216 23:03:59.824415 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="354e44fb-0dd9-4935-b7b6-da13d52fb91c" containerName="dnsmasq-dns" Feb 16 23:03:59 crc kubenswrapper[4865]: E0216 23:03:59.824433 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="354e44fb-0dd9-4935-b7b6-da13d52fb91c" containerName="init" Feb 16 23:03:59 crc kubenswrapper[4865]: I0216 23:03:59.824441 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="354e44fb-0dd9-4935-b7b6-da13d52fb91c" containerName="init" Feb 16 23:03:59 crc kubenswrapper[4865]: E0216 23:03:59.824453 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7bfd6f5-8048-45ee-a942-dd66d72bcf0e" containerName="mariadb-database-create" Feb 16 23:03:59 crc kubenswrapper[4865]: I0216 23:03:59.824462 4865 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d7bfd6f5-8048-45ee-a942-dd66d72bcf0e" containerName="mariadb-database-create" Feb 16 23:03:59 crc kubenswrapper[4865]: I0216 23:03:59.824851 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="f977bdae-bdb8-4a49-83e1-55e7264f274b" containerName="mariadb-database-create" Feb 16 23:03:59 crc kubenswrapper[4865]: I0216 23:03:59.824862 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5828317-93f7-47f3-9769-8dae9b438530" containerName="mariadb-database-create" Feb 16 23:03:59 crc kubenswrapper[4865]: I0216 23:03:59.824879 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="1df01bc8-2180-4799-b5bc-786690440fca" containerName="mariadb-account-create-update" Feb 16 23:03:59 crc kubenswrapper[4865]: I0216 23:03:59.824891 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9bf5a97-df75-436c-b2bf-6b64b55a071e" containerName="mariadb-account-create-update" Feb 16 23:03:59 crc kubenswrapper[4865]: I0216 23:03:59.824906 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="277094e2-0b6c-42e2-bcc6-d6afebb1bec1" containerName="mariadb-account-create-update" Feb 16 23:03:59 crc kubenswrapper[4865]: I0216 23:03:59.824921 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7bfd6f5-8048-45ee-a942-dd66d72bcf0e" containerName="mariadb-database-create" Feb 16 23:03:59 crc kubenswrapper[4865]: I0216 23:03:59.824940 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="354e44fb-0dd9-4935-b7b6-da13d52fb91c" containerName="dnsmasq-dns" Feb 16 23:03:59 crc kubenswrapper[4865]: I0216 23:03:59.825988 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7ff5475cc9-rlxqr" Feb 16 23:03:59 crc kubenswrapper[4865]: I0216 23:03:59.829888 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 16 23:03:59 crc kubenswrapper[4865]: I0216 23:03:59.849977 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-rlxqr"] Feb 16 23:03:59 crc kubenswrapper[4865]: I0216 23:03:59.884391 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d48ac9fe-7ab8-4dea-adcc-92447e575076-config\") pod \"dnsmasq-dns-7ff5475cc9-rlxqr\" (UID: \"d48ac9fe-7ab8-4dea-adcc-92447e575076\") " pod="openstack/dnsmasq-dns-7ff5475cc9-rlxqr" Feb 16 23:03:59 crc kubenswrapper[4865]: I0216 23:03:59.884547 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d48ac9fe-7ab8-4dea-adcc-92447e575076-ovsdbserver-sb\") pod \"dnsmasq-dns-7ff5475cc9-rlxqr\" (UID: \"d48ac9fe-7ab8-4dea-adcc-92447e575076\") " pod="openstack/dnsmasq-dns-7ff5475cc9-rlxqr" Feb 16 23:03:59 crc kubenswrapper[4865]: I0216 23:03:59.884603 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d48ac9fe-7ab8-4dea-adcc-92447e575076-dns-swift-storage-0\") pod \"dnsmasq-dns-7ff5475cc9-rlxqr\" (UID: \"d48ac9fe-7ab8-4dea-adcc-92447e575076\") " pod="openstack/dnsmasq-dns-7ff5475cc9-rlxqr" Feb 16 23:03:59 crc kubenswrapper[4865]: I0216 23:03:59.884668 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvp22\" (UniqueName: \"kubernetes.io/projected/d48ac9fe-7ab8-4dea-adcc-92447e575076-kube-api-access-mvp22\") pod \"dnsmasq-dns-7ff5475cc9-rlxqr\" (UID: \"d48ac9fe-7ab8-4dea-adcc-92447e575076\") " 
pod="openstack/dnsmasq-dns-7ff5475cc9-rlxqr" Feb 16 23:03:59 crc kubenswrapper[4865]: I0216 23:03:59.884742 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d48ac9fe-7ab8-4dea-adcc-92447e575076-dns-svc\") pod \"dnsmasq-dns-7ff5475cc9-rlxqr\" (UID: \"d48ac9fe-7ab8-4dea-adcc-92447e575076\") " pod="openstack/dnsmasq-dns-7ff5475cc9-rlxqr" Feb 16 23:03:59 crc kubenswrapper[4865]: I0216 23:03:59.884784 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d48ac9fe-7ab8-4dea-adcc-92447e575076-ovsdbserver-nb\") pod \"dnsmasq-dns-7ff5475cc9-rlxqr\" (UID: \"d48ac9fe-7ab8-4dea-adcc-92447e575076\") " pod="openstack/dnsmasq-dns-7ff5475cc9-rlxqr" Feb 16 23:03:59 crc kubenswrapper[4865]: I0216 23:03:59.987211 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d48ac9fe-7ab8-4dea-adcc-92447e575076-ovsdbserver-sb\") pod \"dnsmasq-dns-7ff5475cc9-rlxqr\" (UID: \"d48ac9fe-7ab8-4dea-adcc-92447e575076\") " pod="openstack/dnsmasq-dns-7ff5475cc9-rlxqr" Feb 16 23:03:59 crc kubenswrapper[4865]: I0216 23:03:59.987315 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d48ac9fe-7ab8-4dea-adcc-92447e575076-dns-swift-storage-0\") pod \"dnsmasq-dns-7ff5475cc9-rlxqr\" (UID: \"d48ac9fe-7ab8-4dea-adcc-92447e575076\") " pod="openstack/dnsmasq-dns-7ff5475cc9-rlxqr" Feb 16 23:03:59 crc kubenswrapper[4865]: I0216 23:03:59.987406 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvp22\" (UniqueName: \"kubernetes.io/projected/d48ac9fe-7ab8-4dea-adcc-92447e575076-kube-api-access-mvp22\") pod \"dnsmasq-dns-7ff5475cc9-rlxqr\" (UID: \"d48ac9fe-7ab8-4dea-adcc-92447e575076\") " 
pod="openstack/dnsmasq-dns-7ff5475cc9-rlxqr" Feb 16 23:03:59 crc kubenswrapper[4865]: I0216 23:03:59.987459 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d48ac9fe-7ab8-4dea-adcc-92447e575076-dns-svc\") pod \"dnsmasq-dns-7ff5475cc9-rlxqr\" (UID: \"d48ac9fe-7ab8-4dea-adcc-92447e575076\") " pod="openstack/dnsmasq-dns-7ff5475cc9-rlxqr" Feb 16 23:03:59 crc kubenswrapper[4865]: I0216 23:03:59.987525 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d48ac9fe-7ab8-4dea-adcc-92447e575076-ovsdbserver-nb\") pod \"dnsmasq-dns-7ff5475cc9-rlxqr\" (UID: \"d48ac9fe-7ab8-4dea-adcc-92447e575076\") " pod="openstack/dnsmasq-dns-7ff5475cc9-rlxqr" Feb 16 23:03:59 crc kubenswrapper[4865]: I0216 23:03:59.987637 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d48ac9fe-7ab8-4dea-adcc-92447e575076-config\") pod \"dnsmasq-dns-7ff5475cc9-rlxqr\" (UID: \"d48ac9fe-7ab8-4dea-adcc-92447e575076\") " pod="openstack/dnsmasq-dns-7ff5475cc9-rlxqr" Feb 16 23:03:59 crc kubenswrapper[4865]: I0216 23:03:59.988303 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d48ac9fe-7ab8-4dea-adcc-92447e575076-dns-swift-storage-0\") pod \"dnsmasq-dns-7ff5475cc9-rlxqr\" (UID: \"d48ac9fe-7ab8-4dea-adcc-92447e575076\") " pod="openstack/dnsmasq-dns-7ff5475cc9-rlxqr" Feb 16 23:03:59 crc kubenswrapper[4865]: I0216 23:03:59.988856 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d48ac9fe-7ab8-4dea-adcc-92447e575076-ovsdbserver-sb\") pod \"dnsmasq-dns-7ff5475cc9-rlxqr\" (UID: \"d48ac9fe-7ab8-4dea-adcc-92447e575076\") " pod="openstack/dnsmasq-dns-7ff5475cc9-rlxqr" Feb 16 23:03:59 crc kubenswrapper[4865]: 
I0216 23:03:59.989083 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d48ac9fe-7ab8-4dea-adcc-92447e575076-dns-svc\") pod \"dnsmasq-dns-7ff5475cc9-rlxqr\" (UID: \"d48ac9fe-7ab8-4dea-adcc-92447e575076\") " pod="openstack/dnsmasq-dns-7ff5475cc9-rlxqr" Feb 16 23:03:59 crc kubenswrapper[4865]: I0216 23:03:59.989475 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d48ac9fe-7ab8-4dea-adcc-92447e575076-config\") pod \"dnsmasq-dns-7ff5475cc9-rlxqr\" (UID: \"d48ac9fe-7ab8-4dea-adcc-92447e575076\") " pod="openstack/dnsmasq-dns-7ff5475cc9-rlxqr" Feb 16 23:03:59 crc kubenswrapper[4865]: I0216 23:03:59.989972 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d48ac9fe-7ab8-4dea-adcc-92447e575076-ovsdbserver-nb\") pod \"dnsmasq-dns-7ff5475cc9-rlxqr\" (UID: \"d48ac9fe-7ab8-4dea-adcc-92447e575076\") " pod="openstack/dnsmasq-dns-7ff5475cc9-rlxqr" Feb 16 23:04:00 crc kubenswrapper[4865]: I0216 23:04:00.012043 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvp22\" (UniqueName: \"kubernetes.io/projected/d48ac9fe-7ab8-4dea-adcc-92447e575076-kube-api-access-mvp22\") pod \"dnsmasq-dns-7ff5475cc9-rlxqr\" (UID: \"d48ac9fe-7ab8-4dea-adcc-92447e575076\") " pod="openstack/dnsmasq-dns-7ff5475cc9-rlxqr" Feb 16 23:04:00 crc kubenswrapper[4865]: I0216 23:04:00.196981 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7ff5475cc9-rlxqr" Feb 16 23:04:00 crc kubenswrapper[4865]: I0216 23:04:00.748915 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-rlxqr"] Feb 16 23:04:00 crc kubenswrapper[4865]: I0216 23:04:00.821649 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-4mdbk" Feb 16 23:04:00 crc kubenswrapper[4865]: I0216 23:04:00.906301 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9477b3a0-b4e2-4315-ba8a-37d389880da9-combined-ca-bundle\") pod \"9477b3a0-b4e2-4315-ba8a-37d389880da9\" (UID: \"9477b3a0-b4e2-4315-ba8a-37d389880da9\") " Feb 16 23:04:00 crc kubenswrapper[4865]: I0216 23:04:00.906640 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkm7j\" (UniqueName: \"kubernetes.io/projected/9477b3a0-b4e2-4315-ba8a-37d389880da9-kube-api-access-mkm7j\") pod \"9477b3a0-b4e2-4315-ba8a-37d389880da9\" (UID: \"9477b3a0-b4e2-4315-ba8a-37d389880da9\") " Feb 16 23:04:00 crc kubenswrapper[4865]: I0216 23:04:00.906670 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9477b3a0-b4e2-4315-ba8a-37d389880da9-config-data\") pod \"9477b3a0-b4e2-4315-ba8a-37d389880da9\" (UID: \"9477b3a0-b4e2-4315-ba8a-37d389880da9\") " Feb 16 23:04:00 crc kubenswrapper[4865]: I0216 23:04:00.910557 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9477b3a0-b4e2-4315-ba8a-37d389880da9-kube-api-access-mkm7j" (OuterVolumeSpecName: "kube-api-access-mkm7j") pod "9477b3a0-b4e2-4315-ba8a-37d389880da9" (UID: "9477b3a0-b4e2-4315-ba8a-37d389880da9"). InnerVolumeSpecName "kube-api-access-mkm7j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:04:00 crc kubenswrapper[4865]: I0216 23:04:00.949425 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9477b3a0-b4e2-4315-ba8a-37d389880da9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9477b3a0-b4e2-4315-ba8a-37d389880da9" (UID: "9477b3a0-b4e2-4315-ba8a-37d389880da9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:04:00 crc kubenswrapper[4865]: I0216 23:04:00.974504 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9477b3a0-b4e2-4315-ba8a-37d389880da9-config-data" (OuterVolumeSpecName: "config-data") pod "9477b3a0-b4e2-4315-ba8a-37d389880da9" (UID: "9477b3a0-b4e2-4315-ba8a-37d389880da9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:04:01 crc kubenswrapper[4865]: I0216 23:04:01.008987 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mkm7j\" (UniqueName: \"kubernetes.io/projected/9477b3a0-b4e2-4315-ba8a-37d389880da9-kube-api-access-mkm7j\") on node \"crc\" DevicePath \"\"" Feb 16 23:04:01 crc kubenswrapper[4865]: I0216 23:04:01.009302 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9477b3a0-b4e2-4315-ba8a-37d389880da9-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 23:04:01 crc kubenswrapper[4865]: I0216 23:04:01.009435 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9477b3a0-b4e2-4315-ba8a-37d389880da9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 23:04:01 crc kubenswrapper[4865]: I0216 23:04:01.472667 4865 generic.go:334] "Generic (PLEG): container finished" podID="d48ac9fe-7ab8-4dea-adcc-92447e575076" containerID="a592aa0cf0c71c42c2b4d3333825dd8933b2029c24fad790c4368d99f18627b7" exitCode=0 Feb 16 23:04:01 crc kubenswrapper[4865]: I0216 23:04:01.472730 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff5475cc9-rlxqr" event={"ID":"d48ac9fe-7ab8-4dea-adcc-92447e575076","Type":"ContainerDied","Data":"a592aa0cf0c71c42c2b4d3333825dd8933b2029c24fad790c4368d99f18627b7"} Feb 16 23:04:01 crc kubenswrapper[4865]: I0216 23:04:01.473116 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-7ff5475cc9-rlxqr" event={"ID":"d48ac9fe-7ab8-4dea-adcc-92447e575076","Type":"ContainerStarted","Data":"fca5048b4261963aa30d8dbe48dca3e69d76ed36227fb1a28db28c39f90879cd"} Feb 16 23:04:01 crc kubenswrapper[4865]: I0216 23:04:01.480421 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-4mdbk" event={"ID":"9477b3a0-b4e2-4315-ba8a-37d389880da9","Type":"ContainerDied","Data":"200463f36e585529a48b325f3843b1e1b6de1e7053ca79096190060b6c894eac"} Feb 16 23:04:01 crc kubenswrapper[4865]: I0216 23:04:01.480460 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="200463f36e585529a48b325f3843b1e1b6de1e7053ca79096190060b6c894eac" Feb 16 23:04:01 crc kubenswrapper[4865]: I0216 23:04:01.480515 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-4mdbk" Feb 16 23:04:01 crc kubenswrapper[4865]: I0216 23:04:01.702881 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-rlxqr"] Feb 16 23:04:01 crc kubenswrapper[4865]: I0216 23:04:01.718727 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-p8nr2"] Feb 16 23:04:01 crc kubenswrapper[4865]: E0216 23:04:01.719164 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9477b3a0-b4e2-4315-ba8a-37d389880da9" containerName="keystone-db-sync" Feb 16 23:04:01 crc kubenswrapper[4865]: I0216 23:04:01.719189 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="9477b3a0-b4e2-4315-ba8a-37d389880da9" containerName="keystone-db-sync" Feb 16 23:04:01 crc kubenswrapper[4865]: I0216 23:04:01.719430 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="9477b3a0-b4e2-4315-ba8a-37d389880da9" containerName="keystone-db-sync" Feb 16 23:04:01 crc kubenswrapper[4865]: I0216 23:04:01.725354 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-p8nr2" Feb 16 23:04:01 crc kubenswrapper[4865]: I0216 23:04:01.733688 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 16 23:04:01 crc kubenswrapper[4865]: I0216 23:04:01.733935 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 16 23:04:01 crc kubenswrapper[4865]: I0216 23:04:01.734091 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 16 23:04:01 crc kubenswrapper[4865]: I0216 23:04:01.734247 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-sj5dj" Feb 16 23:04:01 crc kubenswrapper[4865]: I0216 23:04:01.734960 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 16 23:04:01 crc kubenswrapper[4865]: I0216 23:04:01.759492 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-p8nr2"] Feb 16 23:04:01 crc kubenswrapper[4865]: I0216 23:04:01.813200 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-bnhc2"] Feb 16 23:04:01 crc kubenswrapper[4865]: I0216 23:04:01.815038 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c5cc7c5ff-bnhc2" Feb 16 23:04:01 crc kubenswrapper[4865]: I0216 23:04:01.830240 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/009885c7-6b9e-4c12-8cb2-bf2e660fa552-credential-keys\") pod \"keystone-bootstrap-p8nr2\" (UID: \"009885c7-6b9e-4c12-8cb2-bf2e660fa552\") " pod="openstack/keystone-bootstrap-p8nr2" Feb 16 23:04:01 crc kubenswrapper[4865]: I0216 23:04:01.830302 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/009885c7-6b9e-4c12-8cb2-bf2e660fa552-fernet-keys\") pod \"keystone-bootstrap-p8nr2\" (UID: \"009885c7-6b9e-4c12-8cb2-bf2e660fa552\") " pod="openstack/keystone-bootstrap-p8nr2" Feb 16 23:04:01 crc kubenswrapper[4865]: I0216 23:04:01.830330 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwmdr\" (UniqueName: \"kubernetes.io/projected/009885c7-6b9e-4c12-8cb2-bf2e660fa552-kube-api-access-dwmdr\") pod \"keystone-bootstrap-p8nr2\" (UID: \"009885c7-6b9e-4c12-8cb2-bf2e660fa552\") " pod="openstack/keystone-bootstrap-p8nr2" Feb 16 23:04:01 crc kubenswrapper[4865]: I0216 23:04:01.830371 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/009885c7-6b9e-4c12-8cb2-bf2e660fa552-config-data\") pod \"keystone-bootstrap-p8nr2\" (UID: \"009885c7-6b9e-4c12-8cb2-bf2e660fa552\") " pod="openstack/keystone-bootstrap-p8nr2" Feb 16 23:04:01 crc kubenswrapper[4865]: I0216 23:04:01.830400 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/009885c7-6b9e-4c12-8cb2-bf2e660fa552-combined-ca-bundle\") pod \"keystone-bootstrap-p8nr2\" (UID: 
\"009885c7-6b9e-4c12-8cb2-bf2e660fa552\") " pod="openstack/keystone-bootstrap-p8nr2" Feb 16 23:04:01 crc kubenswrapper[4865]: I0216 23:04:01.830471 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/009885c7-6b9e-4c12-8cb2-bf2e660fa552-scripts\") pod \"keystone-bootstrap-p8nr2\" (UID: \"009885c7-6b9e-4c12-8cb2-bf2e660fa552\") " pod="openstack/keystone-bootstrap-p8nr2" Feb 16 23:04:01 crc kubenswrapper[4865]: I0216 23:04:01.840025 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-bnhc2"] Feb 16 23:04:01 crc kubenswrapper[4865]: I0216 23:04:01.910950 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-64454bf745-w42tv"] Feb 16 23:04:01 crc kubenswrapper[4865]: I0216 23:04:01.922741 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-64454bf745-w42tv" Feb 16 23:04:01 crc kubenswrapper[4865]: I0216 23:04:01.926012 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-xq8mw" Feb 16 23:04:01 crc kubenswrapper[4865]: I0216 23:04:01.929104 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Feb 16 23:04:01 crc kubenswrapper[4865]: I0216 23:04:01.929331 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Feb 16 23:04:01 crc kubenswrapper[4865]: I0216 23:04:01.929509 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Feb 16 23:04:01 crc kubenswrapper[4865]: I0216 23:04:01.931902 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/52aa8cd0-148e-4a49-8741-4c1b1f13b940-dns-swift-storage-0\") pod \"dnsmasq-dns-5c5cc7c5ff-bnhc2\" (UID: \"52aa8cd0-148e-4a49-8741-4c1b1f13b940\") " 
pod="openstack/dnsmasq-dns-5c5cc7c5ff-bnhc2" Feb 16 23:04:01 crc kubenswrapper[4865]: I0216 23:04:01.931972 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/009885c7-6b9e-4c12-8cb2-bf2e660fa552-scripts\") pod \"keystone-bootstrap-p8nr2\" (UID: \"009885c7-6b9e-4c12-8cb2-bf2e660fa552\") " pod="openstack/keystone-bootstrap-p8nr2" Feb 16 23:04:01 crc kubenswrapper[4865]: I0216 23:04:01.932012 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/52aa8cd0-148e-4a49-8741-4c1b1f13b940-ovsdbserver-sb\") pod \"dnsmasq-dns-5c5cc7c5ff-bnhc2\" (UID: \"52aa8cd0-148e-4a49-8741-4c1b1f13b940\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-bnhc2" Feb 16 23:04:01 crc kubenswrapper[4865]: I0216 23:04:01.932049 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/009885c7-6b9e-4c12-8cb2-bf2e660fa552-credential-keys\") pod \"keystone-bootstrap-p8nr2\" (UID: \"009885c7-6b9e-4c12-8cb2-bf2e660fa552\") " pod="openstack/keystone-bootstrap-p8nr2" Feb 16 23:04:01 crc kubenswrapper[4865]: I0216 23:04:01.932082 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/009885c7-6b9e-4c12-8cb2-bf2e660fa552-fernet-keys\") pod \"keystone-bootstrap-p8nr2\" (UID: \"009885c7-6b9e-4c12-8cb2-bf2e660fa552\") " pod="openstack/keystone-bootstrap-p8nr2" Feb 16 23:04:01 crc kubenswrapper[4865]: I0216 23:04:01.932106 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwmdr\" (UniqueName: \"kubernetes.io/projected/009885c7-6b9e-4c12-8cb2-bf2e660fa552-kube-api-access-dwmdr\") pod \"keystone-bootstrap-p8nr2\" (UID: \"009885c7-6b9e-4c12-8cb2-bf2e660fa552\") " pod="openstack/keystone-bootstrap-p8nr2" Feb 16 23:04:01 crc 
kubenswrapper[4865]: I0216 23:04:01.932140 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/52aa8cd0-148e-4a49-8741-4c1b1f13b940-ovsdbserver-nb\") pod \"dnsmasq-dns-5c5cc7c5ff-bnhc2\" (UID: \"52aa8cd0-148e-4a49-8741-4c1b1f13b940\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-bnhc2" Feb 16 23:04:01 crc kubenswrapper[4865]: I0216 23:04:01.932178 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/009885c7-6b9e-4c12-8cb2-bf2e660fa552-config-data\") pod \"keystone-bootstrap-p8nr2\" (UID: \"009885c7-6b9e-4c12-8cb2-bf2e660fa552\") " pod="openstack/keystone-bootstrap-p8nr2" Feb 16 23:04:01 crc kubenswrapper[4865]: I0216 23:04:01.932220 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/009885c7-6b9e-4c12-8cb2-bf2e660fa552-combined-ca-bundle\") pod \"keystone-bootstrap-p8nr2\" (UID: \"009885c7-6b9e-4c12-8cb2-bf2e660fa552\") " pod="openstack/keystone-bootstrap-p8nr2" Feb 16 23:04:01 crc kubenswrapper[4865]: I0216 23:04:01.932256 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52aa8cd0-148e-4a49-8741-4c1b1f13b940-dns-svc\") pod \"dnsmasq-dns-5c5cc7c5ff-bnhc2\" (UID: \"52aa8cd0-148e-4a49-8741-4c1b1f13b940\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-bnhc2" Feb 16 23:04:01 crc kubenswrapper[4865]: I0216 23:04:01.932316 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52aa8cd0-148e-4a49-8741-4c1b1f13b940-config\") pod \"dnsmasq-dns-5c5cc7c5ff-bnhc2\" (UID: \"52aa8cd0-148e-4a49-8741-4c1b1f13b940\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-bnhc2" Feb 16 23:04:01 crc kubenswrapper[4865]: I0216 23:04:01.932344 4865 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgng9\" (UniqueName: \"kubernetes.io/projected/52aa8cd0-148e-4a49-8741-4c1b1f13b940-kube-api-access-tgng9\") pod \"dnsmasq-dns-5c5cc7c5ff-bnhc2\" (UID: \"52aa8cd0-148e-4a49-8741-4c1b1f13b940\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-bnhc2" Feb 16 23:04:01 crc kubenswrapper[4865]: I0216 23:04:01.938642 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/009885c7-6b9e-4c12-8cb2-bf2e660fa552-credential-keys\") pod \"keystone-bootstrap-p8nr2\" (UID: \"009885c7-6b9e-4c12-8cb2-bf2e660fa552\") " pod="openstack/keystone-bootstrap-p8nr2" Feb 16 23:04:01 crc kubenswrapper[4865]: I0216 23:04:01.939920 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/009885c7-6b9e-4c12-8cb2-bf2e660fa552-scripts\") pod \"keystone-bootstrap-p8nr2\" (UID: \"009885c7-6b9e-4c12-8cb2-bf2e660fa552\") " pod="openstack/keystone-bootstrap-p8nr2" Feb 16 23:04:01 crc kubenswrapper[4865]: I0216 23:04:01.941353 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/009885c7-6b9e-4c12-8cb2-bf2e660fa552-fernet-keys\") pod \"keystone-bootstrap-p8nr2\" (UID: \"009885c7-6b9e-4c12-8cb2-bf2e660fa552\") " pod="openstack/keystone-bootstrap-p8nr2" Feb 16 23:04:01 crc kubenswrapper[4865]: I0216 23:04:01.947523 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/009885c7-6b9e-4c12-8cb2-bf2e660fa552-combined-ca-bundle\") pod \"keystone-bootstrap-p8nr2\" (UID: \"009885c7-6b9e-4c12-8cb2-bf2e660fa552\") " pod="openstack/keystone-bootstrap-p8nr2" Feb 16 23:04:01 crc kubenswrapper[4865]: I0216 23:04:01.962211 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/009885c7-6b9e-4c12-8cb2-bf2e660fa552-config-data\") pod \"keystone-bootstrap-p8nr2\" (UID: \"009885c7-6b9e-4c12-8cb2-bf2e660fa552\") " pod="openstack/keystone-bootstrap-p8nr2" Feb 16 23:04:01 crc kubenswrapper[4865]: I0216 23:04:01.969933 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-64454bf745-w42tv"] Feb 16 23:04:01 crc kubenswrapper[4865]: I0216 23:04:01.998262 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwmdr\" (UniqueName: \"kubernetes.io/projected/009885c7-6b9e-4c12-8cb2-bf2e660fa552-kube-api-access-dwmdr\") pod \"keystone-bootstrap-p8nr2\" (UID: \"009885c7-6b9e-4c12-8cb2-bf2e660fa552\") " pod="openstack/keystone-bootstrap-p8nr2" Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.034919 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/058418b5-a677-4e8f-a37f-6dff4198f824-horizon-secret-key\") pod \"horizon-64454bf745-w42tv\" (UID: \"058418b5-a677-4e8f-a37f-6dff4198f824\") " pod="openstack/horizon-64454bf745-w42tv" Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.034995 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/52aa8cd0-148e-4a49-8741-4c1b1f13b940-dns-swift-storage-0\") pod \"dnsmasq-dns-5c5cc7c5ff-bnhc2\" (UID: \"52aa8cd0-148e-4a49-8741-4c1b1f13b940\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-bnhc2" Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.035029 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/058418b5-a677-4e8f-a37f-6dff4198f824-scripts\") pod \"horizon-64454bf745-w42tv\" (UID: \"058418b5-a677-4e8f-a37f-6dff4198f824\") " pod="openstack/horizon-64454bf745-w42tv" Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.035066 
4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/52aa8cd0-148e-4a49-8741-4c1b1f13b940-ovsdbserver-sb\") pod \"dnsmasq-dns-5c5cc7c5ff-bnhc2\" (UID: \"52aa8cd0-148e-4a49-8741-4c1b1f13b940\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-bnhc2" Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.035104 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9zp6\" (UniqueName: \"kubernetes.io/projected/058418b5-a677-4e8f-a37f-6dff4198f824-kube-api-access-l9zp6\") pod \"horizon-64454bf745-w42tv\" (UID: \"058418b5-a677-4e8f-a37f-6dff4198f824\") " pod="openstack/horizon-64454bf745-w42tv" Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.035125 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/52aa8cd0-148e-4a49-8741-4c1b1f13b940-ovsdbserver-nb\") pod \"dnsmasq-dns-5c5cc7c5ff-bnhc2\" (UID: \"52aa8cd0-148e-4a49-8741-4c1b1f13b940\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-bnhc2" Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.035150 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/058418b5-a677-4e8f-a37f-6dff4198f824-config-data\") pod \"horizon-64454bf745-w42tv\" (UID: \"058418b5-a677-4e8f-a37f-6dff4198f824\") " pod="openstack/horizon-64454bf745-w42tv" Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.035175 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/058418b5-a677-4e8f-a37f-6dff4198f824-logs\") pod \"horizon-64454bf745-w42tv\" (UID: \"058418b5-a677-4e8f-a37f-6dff4198f824\") " pod="openstack/horizon-64454bf745-w42tv" Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.035219 4865 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52aa8cd0-148e-4a49-8741-4c1b1f13b940-dns-svc\") pod \"dnsmasq-dns-5c5cc7c5ff-bnhc2\" (UID: \"52aa8cd0-148e-4a49-8741-4c1b1f13b940\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-bnhc2" Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.035239 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52aa8cd0-148e-4a49-8741-4c1b1f13b940-config\") pod \"dnsmasq-dns-5c5cc7c5ff-bnhc2\" (UID: \"52aa8cd0-148e-4a49-8741-4c1b1f13b940\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-bnhc2" Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.035263 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgng9\" (UniqueName: \"kubernetes.io/projected/52aa8cd0-148e-4a49-8741-4c1b1f13b940-kube-api-access-tgng9\") pod \"dnsmasq-dns-5c5cc7c5ff-bnhc2\" (UID: \"52aa8cd0-148e-4a49-8741-4c1b1f13b940\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-bnhc2" Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.035945 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/52aa8cd0-148e-4a49-8741-4c1b1f13b940-dns-swift-storage-0\") pod \"dnsmasq-dns-5c5cc7c5ff-bnhc2\" (UID: \"52aa8cd0-148e-4a49-8741-4c1b1f13b940\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-bnhc2" Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.036322 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/52aa8cd0-148e-4a49-8741-4c1b1f13b940-ovsdbserver-sb\") pod \"dnsmasq-dns-5c5cc7c5ff-bnhc2\" (UID: \"52aa8cd0-148e-4a49-8741-4c1b1f13b940\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-bnhc2" Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.036557 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/52aa8cd0-148e-4a49-8741-4c1b1f13b940-ovsdbserver-nb\") pod \"dnsmasq-dns-5c5cc7c5ff-bnhc2\" (UID: \"52aa8cd0-148e-4a49-8741-4c1b1f13b940\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-bnhc2" Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.036948 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52aa8cd0-148e-4a49-8741-4c1b1f13b940-dns-svc\") pod \"dnsmasq-dns-5c5cc7c5ff-bnhc2\" (UID: \"52aa8cd0-148e-4a49-8741-4c1b1f13b940\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-bnhc2" Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.037388 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52aa8cd0-148e-4a49-8741-4c1b1f13b940-config\") pod \"dnsmasq-dns-5c5cc7c5ff-bnhc2\" (UID: \"52aa8cd0-148e-4a49-8741-4c1b1f13b940\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-bnhc2" Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.041469 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-g9mg8"] Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.045441 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-g9mg8" Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.049724 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-sct9w" Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.049872 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.063028 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-g9mg8"] Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.070016 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.072595 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-p8nr2" Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.078497 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.086735 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgng9\" (UniqueName: \"kubernetes.io/projected/52aa8cd0-148e-4a49-8741-4c1b1f13b940-kube-api-access-tgng9\") pod \"dnsmasq-dns-5c5cc7c5ff-bnhc2\" (UID: \"52aa8cd0-148e-4a49-8741-4c1b1f13b940\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-bnhc2" Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.090336 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.103756 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.103944 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.139232 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c43aca4e-9612-43a8-8af2-5f32e4378af7-etc-machine-id\") pod \"cinder-db-sync-g9mg8\" (UID: \"c43aca4e-9612-43a8-8af2-5f32e4378af7\") " pod="openstack/cinder-db-sync-g9mg8" Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.139337 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9zp6\" (UniqueName: \"kubernetes.io/projected/058418b5-a677-4e8f-a37f-6dff4198f824-kube-api-access-l9zp6\") pod \"horizon-64454bf745-w42tv\" (UID: \"058418b5-a677-4e8f-a37f-6dff4198f824\") " pod="openstack/horizon-64454bf745-w42tv" Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.139364 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bbb6cce-272f-421d-a4f2-af006f112e21-config-data\") pod \"ceilometer-0\" (UID: \"1bbb6cce-272f-421d-a4f2-af006f112e21\") " pod="openstack/ceilometer-0" Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.139396 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/058418b5-a677-4e8f-a37f-6dff4198f824-config-data\") pod \"horizon-64454bf745-w42tv\" (UID: \"058418b5-a677-4e8f-a37f-6dff4198f824\") " pod="openstack/horizon-64454bf745-w42tv" Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.139421 4865 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xtks\" (UniqueName: \"kubernetes.io/projected/c43aca4e-9612-43a8-8af2-5f32e4378af7-kube-api-access-9xtks\") pod \"cinder-db-sync-g9mg8\" (UID: \"c43aca4e-9612-43a8-8af2-5f32e4378af7\") " pod="openstack/cinder-db-sync-g9mg8" Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.139441 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/058418b5-a677-4e8f-a37f-6dff4198f824-logs\") pod \"horizon-64454bf745-w42tv\" (UID: \"058418b5-a677-4e8f-a37f-6dff4198f824\") " pod="openstack/horizon-64454bf745-w42tv" Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.139478 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bbb6cce-272f-421d-a4f2-af006f112e21-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1bbb6cce-272f-421d-a4f2-af006f112e21\") " pod="openstack/ceilometer-0" Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.139501 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c43aca4e-9612-43a8-8af2-5f32e4378af7-db-sync-config-data\") pod \"cinder-db-sync-g9mg8\" (UID: \"c43aca4e-9612-43a8-8af2-5f32e4378af7\") " pod="openstack/cinder-db-sync-g9mg8" Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.139521 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c43aca4e-9612-43a8-8af2-5f32e4378af7-scripts\") pod \"cinder-db-sync-g9mg8\" (UID: \"c43aca4e-9612-43a8-8af2-5f32e4378af7\") " pod="openstack/cinder-db-sync-g9mg8" Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.139536 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1bbb6cce-272f-421d-a4f2-af006f112e21-run-httpd\") pod \"ceilometer-0\" (UID: \"1bbb6cce-272f-421d-a4f2-af006f112e21\") " pod="openstack/ceilometer-0" Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.139549 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6mt7\" (UniqueName: \"kubernetes.io/projected/1bbb6cce-272f-421d-a4f2-af006f112e21-kube-api-access-c6mt7\") pod \"ceilometer-0\" (UID: \"1bbb6cce-272f-421d-a4f2-af006f112e21\") " pod="openstack/ceilometer-0" Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.139567 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c43aca4e-9612-43a8-8af2-5f32e4378af7-combined-ca-bundle\") pod \"cinder-db-sync-g9mg8\" (UID: \"c43aca4e-9612-43a8-8af2-5f32e4378af7\") " pod="openstack/cinder-db-sync-g9mg8" Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.139582 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1bbb6cce-272f-421d-a4f2-af006f112e21-scripts\") pod \"ceilometer-0\" (UID: \"1bbb6cce-272f-421d-a4f2-af006f112e21\") " pod="openstack/ceilometer-0" Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.139598 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1bbb6cce-272f-421d-a4f2-af006f112e21-log-httpd\") pod \"ceilometer-0\" (UID: \"1bbb6cce-272f-421d-a4f2-af006f112e21\") " pod="openstack/ceilometer-0" Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.139624 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/058418b5-a677-4e8f-a37f-6dff4198f824-horizon-secret-key\") pod 
\"horizon-64454bf745-w42tv\" (UID: \"058418b5-a677-4e8f-a37f-6dff4198f824\") " pod="openstack/horizon-64454bf745-w42tv" Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.139657 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/058418b5-a677-4e8f-a37f-6dff4198f824-scripts\") pod \"horizon-64454bf745-w42tv\" (UID: \"058418b5-a677-4e8f-a37f-6dff4198f824\") " pod="openstack/horizon-64454bf745-w42tv" Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.139685 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c43aca4e-9612-43a8-8af2-5f32e4378af7-config-data\") pod \"cinder-db-sync-g9mg8\" (UID: \"c43aca4e-9612-43a8-8af2-5f32e4378af7\") " pod="openstack/cinder-db-sync-g9mg8" Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.139701 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1bbb6cce-272f-421d-a4f2-af006f112e21-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1bbb6cce-272f-421d-a4f2-af006f112e21\") " pod="openstack/ceilometer-0" Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.140452 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/058418b5-a677-4e8f-a37f-6dff4198f824-logs\") pod \"horizon-64454bf745-w42tv\" (UID: \"058418b5-a677-4e8f-a37f-6dff4198f824\") " pod="openstack/horizon-64454bf745-w42tv" Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.140705 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/058418b5-a677-4e8f-a37f-6dff4198f824-scripts\") pod \"horizon-64454bf745-w42tv\" (UID: \"058418b5-a677-4e8f-a37f-6dff4198f824\") " pod="openstack/horizon-64454bf745-w42tv" Feb 16 23:04:02 crc 
kubenswrapper[4865]: I0216 23:04:02.141787 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/058418b5-a677-4e8f-a37f-6dff4198f824-config-data\") pod \"horizon-64454bf745-w42tv\" (UID: \"058418b5-a677-4e8f-a37f-6dff4198f824\") " pod="openstack/horizon-64454bf745-w42tv" Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.152755 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c5cc7c5ff-bnhc2" Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.168175 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/058418b5-a677-4e8f-a37f-6dff4198f824-horizon-secret-key\") pod \"horizon-64454bf745-w42tv\" (UID: \"058418b5-a677-4e8f-a37f-6dff4198f824\") " pod="openstack/horizon-64454bf745-w42tv" Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.171848 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.236314 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9zp6\" (UniqueName: \"kubernetes.io/projected/058418b5-a677-4e8f-a37f-6dff4198f824-kube-api-access-l9zp6\") pod \"horizon-64454bf745-w42tv\" (UID: \"058418b5-a677-4e8f-a37f-6dff4198f824\") " pod="openstack/horizon-64454bf745-w42tv" Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.242531 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1bbb6cce-272f-421d-a4f2-af006f112e21-log-httpd\") pod \"ceilometer-0\" (UID: \"1bbb6cce-272f-421d-a4f2-af006f112e21\") " pod="openstack/ceilometer-0" Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.242633 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c43aca4e-9612-43a8-8af2-5f32e4378af7-config-data\") pod \"cinder-db-sync-g9mg8\" (UID: \"c43aca4e-9612-43a8-8af2-5f32e4378af7\") " pod="openstack/cinder-db-sync-g9mg8" Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.242659 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1bbb6cce-272f-421d-a4f2-af006f112e21-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1bbb6cce-272f-421d-a4f2-af006f112e21\") " pod="openstack/ceilometer-0" Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.242874 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c43aca4e-9612-43a8-8af2-5f32e4378af7-etc-machine-id\") pod \"cinder-db-sync-g9mg8\" (UID: \"c43aca4e-9612-43a8-8af2-5f32e4378af7\") " pod="openstack/cinder-db-sync-g9mg8" Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.242906 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bbb6cce-272f-421d-a4f2-af006f112e21-config-data\") pod \"ceilometer-0\" (UID: \"1bbb6cce-272f-421d-a4f2-af006f112e21\") " pod="openstack/ceilometer-0" Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.242946 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xtks\" (UniqueName: \"kubernetes.io/projected/c43aca4e-9612-43a8-8af2-5f32e4378af7-kube-api-access-9xtks\") pod \"cinder-db-sync-g9mg8\" (UID: \"c43aca4e-9612-43a8-8af2-5f32e4378af7\") " pod="openstack/cinder-db-sync-g9mg8" Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.242996 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bbb6cce-272f-421d-a4f2-af006f112e21-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1bbb6cce-272f-421d-a4f2-af006f112e21\") " 
pod="openstack/ceilometer-0" Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.243023 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c43aca4e-9612-43a8-8af2-5f32e4378af7-db-sync-config-data\") pod \"cinder-db-sync-g9mg8\" (UID: \"c43aca4e-9612-43a8-8af2-5f32e4378af7\") " pod="openstack/cinder-db-sync-g9mg8" Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.243048 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c43aca4e-9612-43a8-8af2-5f32e4378af7-scripts\") pod \"cinder-db-sync-g9mg8\" (UID: \"c43aca4e-9612-43a8-8af2-5f32e4378af7\") " pod="openstack/cinder-db-sync-g9mg8" Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.243067 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1bbb6cce-272f-421d-a4f2-af006f112e21-run-httpd\") pod \"ceilometer-0\" (UID: \"1bbb6cce-272f-421d-a4f2-af006f112e21\") " pod="openstack/ceilometer-0" Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.243088 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6mt7\" (UniqueName: \"kubernetes.io/projected/1bbb6cce-272f-421d-a4f2-af006f112e21-kube-api-access-c6mt7\") pod \"ceilometer-0\" (UID: \"1bbb6cce-272f-421d-a4f2-af006f112e21\") " pod="openstack/ceilometer-0" Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.243108 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c43aca4e-9612-43a8-8af2-5f32e4378af7-combined-ca-bundle\") pod \"cinder-db-sync-g9mg8\" (UID: \"c43aca4e-9612-43a8-8af2-5f32e4378af7\") " pod="openstack/cinder-db-sync-g9mg8" Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.244410 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1bbb6cce-272f-421d-a4f2-af006f112e21-scripts\") pod \"ceilometer-0\" (UID: \"1bbb6cce-272f-421d-a4f2-af006f112e21\") " pod="openstack/ceilometer-0" Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.245333 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c43aca4e-9612-43a8-8af2-5f32e4378af7-etc-machine-id\") pod \"cinder-db-sync-g9mg8\" (UID: \"c43aca4e-9612-43a8-8af2-5f32e4378af7\") " pod="openstack/cinder-db-sync-g9mg8" Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.250493 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1bbb6cce-272f-421d-a4f2-af006f112e21-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1bbb6cce-272f-421d-a4f2-af006f112e21\") " pod="openstack/ceilometer-0" Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.250803 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1bbb6cce-272f-421d-a4f2-af006f112e21-log-httpd\") pod \"ceilometer-0\" (UID: \"1bbb6cce-272f-421d-a4f2-af006f112e21\") " pod="openstack/ceilometer-0" Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.256680 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c43aca4e-9612-43a8-8af2-5f32e4378af7-config-data\") pod \"cinder-db-sync-g9mg8\" (UID: \"c43aca4e-9612-43a8-8af2-5f32e4378af7\") " pod="openstack/cinder-db-sync-g9mg8" Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.258932 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-bnhc2"] Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.263488 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/1bbb6cce-272f-421d-a4f2-af006f112e21-scripts\") pod \"ceilometer-0\" (UID: \"1bbb6cce-272f-421d-a4f2-af006f112e21\") " pod="openstack/ceilometer-0" Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.263854 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1bbb6cce-272f-421d-a4f2-af006f112e21-run-httpd\") pod \"ceilometer-0\" (UID: \"1bbb6cce-272f-421d-a4f2-af006f112e21\") " pod="openstack/ceilometer-0" Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.264786 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bbb6cce-272f-421d-a4f2-af006f112e21-config-data\") pod \"ceilometer-0\" (UID: \"1bbb6cce-272f-421d-a4f2-af006f112e21\") " pod="openstack/ceilometer-0" Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.269927 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c43aca4e-9612-43a8-8af2-5f32e4378af7-combined-ca-bundle\") pod \"cinder-db-sync-g9mg8\" (UID: \"c43aca4e-9612-43a8-8af2-5f32e4378af7\") " pod="openstack/cinder-db-sync-g9mg8" Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.276133 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c43aca4e-9612-43a8-8af2-5f32e4378af7-db-sync-config-data\") pod \"cinder-db-sync-g9mg8\" (UID: \"c43aca4e-9612-43a8-8af2-5f32e4378af7\") " pod="openstack/cinder-db-sync-g9mg8" Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.278694 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bbb6cce-272f-421d-a4f2-af006f112e21-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1bbb6cce-272f-421d-a4f2-af006f112e21\") " pod="openstack/ceilometer-0" Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 
23:04:02.283102 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-64454bf745-w42tv" Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.291749 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c43aca4e-9612-43a8-8af2-5f32e4378af7-scripts\") pod \"cinder-db-sync-g9mg8\" (UID: \"c43aca4e-9612-43a8-8af2-5f32e4378af7\") " pod="openstack/cinder-db-sync-g9mg8" Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.347159 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xtks\" (UniqueName: \"kubernetes.io/projected/c43aca4e-9612-43a8-8af2-5f32e4378af7-kube-api-access-9xtks\") pod \"cinder-db-sync-g9mg8\" (UID: \"c43aca4e-9612-43a8-8af2-5f32e4378af7\") " pod="openstack/cinder-db-sync-g9mg8" Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.347252 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-t75c2"] Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.349361 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-t75c2" Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.354219 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c180ab1a-1202-492c-ab2e-57c2232d8b64-scripts\") pod \"placement-db-sync-t75c2\" (UID: \"c180ab1a-1202-492c-ab2e-57c2232d8b64\") " pod="openstack/placement-db-sync-t75c2" Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.354267 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sjrh\" (UniqueName: \"kubernetes.io/projected/c180ab1a-1202-492c-ab2e-57c2232d8b64-kube-api-access-2sjrh\") pod \"placement-db-sync-t75c2\" (UID: \"c180ab1a-1202-492c-ab2e-57c2232d8b64\") " pod="openstack/placement-db-sync-t75c2" Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.354417 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c180ab1a-1202-492c-ab2e-57c2232d8b64-logs\") pod \"placement-db-sync-t75c2\" (UID: \"c180ab1a-1202-492c-ab2e-57c2232d8b64\") " pod="openstack/placement-db-sync-t75c2" Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.354450 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c180ab1a-1202-492c-ab2e-57c2232d8b64-config-data\") pod \"placement-db-sync-t75c2\" (UID: \"c180ab1a-1202-492c-ab2e-57c2232d8b64\") " pod="openstack/placement-db-sync-t75c2" Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.354486 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c180ab1a-1202-492c-ab2e-57c2232d8b64-combined-ca-bundle\") pod \"placement-db-sync-t75c2\" (UID: \"c180ab1a-1202-492c-ab2e-57c2232d8b64\") " 
pod="openstack/placement-db-sync-t75c2" Feb 16 23:04:02 crc kubenswrapper[4865]: I0216 23:04:02.359426 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-t75c2"] Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.365483 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-7qdf4" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.365969 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.366085 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.366913 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6mt7\" (UniqueName: \"kubernetes.io/projected/1bbb6cce-272f-421d-a4f2-af006f112e21-kube-api-access-c6mt7\") pod \"ceilometer-0\" (UID: \"1bbb6cce-272f-421d-a4f2-af006f112e21\") " pod="openstack/ceilometer-0" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.379718 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-94wth"] Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.452465 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-94wth" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.461071 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68d6bce0-a0b1-485b-b3fc-6c47cd966129-combined-ca-bundle\") pod \"barbican-db-sync-94wth\" (UID: \"68d6bce0-a0b1-485b-b3fc-6c47cd966129\") " pod="openstack/barbican-db-sync-94wth" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.461118 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2sjrh\" (UniqueName: \"kubernetes.io/projected/c180ab1a-1202-492c-ab2e-57c2232d8b64-kube-api-access-2sjrh\") pod \"placement-db-sync-t75c2\" (UID: \"c180ab1a-1202-492c-ab2e-57c2232d8b64\") " pod="openstack/placement-db-sync-t75c2" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.461162 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/68d6bce0-a0b1-485b-b3fc-6c47cd966129-db-sync-config-data\") pod \"barbican-db-sync-94wth\" (UID: \"68d6bce0-a0b1-485b-b3fc-6c47cd966129\") " pod="openstack/barbican-db-sync-94wth" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.461255 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c180ab1a-1202-492c-ab2e-57c2232d8b64-logs\") pod \"placement-db-sync-t75c2\" (UID: \"c180ab1a-1202-492c-ab2e-57c2232d8b64\") " pod="openstack/placement-db-sync-t75c2" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.461295 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c180ab1a-1202-492c-ab2e-57c2232d8b64-config-data\") pod \"placement-db-sync-t75c2\" (UID: \"c180ab1a-1202-492c-ab2e-57c2232d8b64\") " pod="openstack/placement-db-sync-t75c2" Feb 16 
23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.461319 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c180ab1a-1202-492c-ab2e-57c2232d8b64-combined-ca-bundle\") pod \"placement-db-sync-t75c2\" (UID: \"c180ab1a-1202-492c-ab2e-57c2232d8b64\") " pod="openstack/placement-db-sync-t75c2" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.461336 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v965s\" (UniqueName: \"kubernetes.io/projected/68d6bce0-a0b1-485b-b3fc-6c47cd966129-kube-api-access-v965s\") pod \"barbican-db-sync-94wth\" (UID: \"68d6bce0-a0b1-485b-b3fc-6c47cd966129\") " pod="openstack/barbican-db-sync-94wth" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.461367 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c180ab1a-1202-492c-ab2e-57c2232d8b64-scripts\") pod \"placement-db-sync-t75c2\" (UID: \"c180ab1a-1202-492c-ab2e-57c2232d8b64\") " pod="openstack/placement-db-sync-t75c2" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.462178 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c180ab1a-1202-492c-ab2e-57c2232d8b64-logs\") pod \"placement-db-sync-t75c2\" (UID: \"c180ab1a-1202-492c-ab2e-57c2232d8b64\") " pod="openstack/placement-db-sync-t75c2" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.465951 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.466232 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-9nfmk" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.483732 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/c180ab1a-1202-492c-ab2e-57c2232d8b64-scripts\") pod \"placement-db-sync-t75c2\" (UID: \"c180ab1a-1202-492c-ab2e-57c2232d8b64\") " pod="openstack/placement-db-sync-t75c2" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.516611 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c180ab1a-1202-492c-ab2e-57c2232d8b64-config-data\") pod \"placement-db-sync-t75c2\" (UID: \"c180ab1a-1202-492c-ab2e-57c2232d8b64\") " pod="openstack/placement-db-sync-t75c2" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.518330 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7df6bbb45-chfb9"] Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.534002 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-x9s4f"] Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.538546 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-94wth"] Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.538645 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-x9s4f" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.539124 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7df6bbb45-chfb9" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.541598 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sjrh\" (UniqueName: \"kubernetes.io/projected/c180ab1a-1202-492c-ab2e-57c2232d8b64-kube-api-access-2sjrh\") pod \"placement-db-sync-t75c2\" (UID: \"c180ab1a-1202-492c-ab2e-57c2232d8b64\") " pod="openstack/placement-db-sync-t75c2" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.558578 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c180ab1a-1202-492c-ab2e-57c2232d8b64-combined-ca-bundle\") pod \"placement-db-sync-t75c2\" (UID: \"c180ab1a-1202-492c-ab2e-57c2232d8b64\") " pod="openstack/placement-db-sync-t75c2" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.563862 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v965s\" (UniqueName: \"kubernetes.io/projected/68d6bce0-a0b1-485b-b3fc-6c47cd966129-kube-api-access-v965s\") pod \"barbican-db-sync-94wth\" (UID: \"68d6bce0-a0b1-485b-b3fc-6c47cd966129\") " pod="openstack/barbican-db-sync-94wth" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.563921 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68d6bce0-a0b1-485b-b3fc-6c47cd966129-combined-ca-bundle\") pod \"barbican-db-sync-94wth\" (UID: \"68d6bce0-a0b1-485b-b3fc-6c47cd966129\") " pod="openstack/barbican-db-sync-94wth" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.563961 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/68d6bce0-a0b1-485b-b3fc-6c47cd966129-db-sync-config-data\") pod \"barbican-db-sync-94wth\" (UID: \"68d6bce0-a0b1-485b-b3fc-6c47cd966129\") " pod="openstack/barbican-db-sync-94wth" Feb 16 23:04:03 
crc kubenswrapper[4865]: I0216 23:04:02.576140 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/68d6bce0-a0b1-485b-b3fc-6c47cd966129-db-sync-config-data\") pod \"barbican-db-sync-94wth\" (UID: \"68d6bce0-a0b1-485b-b3fc-6c47cd966129\") " pod="openstack/barbican-db-sync-94wth" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.587164 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68d6bce0-a0b1-485b-b3fc-6c47cd966129-combined-ca-bundle\") pod \"barbican-db-sync-94wth\" (UID: \"68d6bce0-a0b1-485b-b3fc-6c47cd966129\") " pod="openstack/barbican-db-sync-94wth" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.606083 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-g9mg8" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.610038 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.622319 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v965s\" (UniqueName: \"kubernetes.io/projected/68d6bce0-a0b1-485b-b3fc-6c47cd966129-kube-api-access-v965s\") pod \"barbican-db-sync-94wth\" (UID: \"68d6bce0-a0b1-485b-b3fc-6c47cd966129\") " pod="openstack/barbican-db-sync-94wth" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.630789 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff5475cc9-rlxqr" event={"ID":"d48ac9fe-7ab8-4dea-adcc-92447e575076","Type":"ContainerStarted","Data":"e9f09b0b3de8fdb0ab95a86c9f603cfea5beaa2631946168091dc94774f6c73b"} Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.630978 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7ff5475cc9-rlxqr" podUID="d48ac9fe-7ab8-4dea-adcc-92447e575076" containerName="dnsmasq-dns" containerID="cri-o://e9f09b0b3de8fdb0ab95a86c9f603cfea5beaa2631946168091dc94774f6c73b" gracePeriod=10 Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.631347 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7ff5475cc9-rlxqr" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.642672 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7df6bbb45-chfb9"] Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.667258 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztnlf\" (UniqueName: \"kubernetes.io/projected/45e13675-3a58-42c2-9236-eab676096763-kube-api-access-ztnlf\") pod \"horizon-7df6bbb45-chfb9\" (UID: \"45e13675-3a58-42c2-9236-eab676096763\") " pod="openstack/horizon-7df6bbb45-chfb9" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.667625 4865 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/45e13675-3a58-42c2-9236-eab676096763-config-data\") pod \"horizon-7df6bbb45-chfb9\" (UID: \"45e13675-3a58-42c2-9236-eab676096763\") " pod="openstack/horizon-7df6bbb45-chfb9" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.667680 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45e13675-3a58-42c2-9236-eab676096763-logs\") pod \"horizon-7df6bbb45-chfb9\" (UID: \"45e13675-3a58-42c2-9236-eab676096763\") " pod="openstack/horizon-7df6bbb45-chfb9" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.667725 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec86efb1-7717-4690-8027-22c32dbf537d-config\") pod \"dnsmasq-dns-8b5c85b87-x9s4f\" (UID: \"ec86efb1-7717-4690-8027-22c32dbf537d\") " pod="openstack/dnsmasq-dns-8b5c85b87-x9s4f" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.667748 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6l29\" (UniqueName: \"kubernetes.io/projected/ec86efb1-7717-4690-8027-22c32dbf537d-kube-api-access-k6l29\") pod \"dnsmasq-dns-8b5c85b87-x9s4f\" (UID: \"ec86efb1-7717-4690-8027-22c32dbf537d\") " pod="openstack/dnsmasq-dns-8b5c85b87-x9s4f" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.667766 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec86efb1-7717-4690-8027-22c32dbf537d-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-x9s4f\" (UID: \"ec86efb1-7717-4690-8027-22c32dbf537d\") " pod="openstack/dnsmasq-dns-8b5c85b87-x9s4f" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.667997 4865 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec86efb1-7717-4690-8027-22c32dbf537d-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-x9s4f\" (UID: \"ec86efb1-7717-4690-8027-22c32dbf537d\") " pod="openstack/dnsmasq-dns-8b5c85b87-x9s4f" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.668050 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/45e13675-3a58-42c2-9236-eab676096763-horizon-secret-key\") pod \"horizon-7df6bbb45-chfb9\" (UID: \"45e13675-3a58-42c2-9236-eab676096763\") " pod="openstack/horizon-7df6bbb45-chfb9" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.668165 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ec86efb1-7717-4690-8027-22c32dbf537d-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-x9s4f\" (UID: \"ec86efb1-7717-4690-8027-22c32dbf537d\") " pod="openstack/dnsmasq-dns-8b5c85b87-x9s4f" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.668191 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec86efb1-7717-4690-8027-22c32dbf537d-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-x9s4f\" (UID: \"ec86efb1-7717-4690-8027-22c32dbf537d\") " pod="openstack/dnsmasq-dns-8b5c85b87-x9s4f" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.669124 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/45e13675-3a58-42c2-9236-eab676096763-scripts\") pod \"horizon-7df6bbb45-chfb9\" (UID: \"45e13675-3a58-42c2-9236-eab676096763\") " pod="openstack/horizon-7df6bbb45-chfb9" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.672679 4865 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-zqq74"] Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.674807 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-zqq74" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.675351 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.676824 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.696983 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.697189 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-2nb8s" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.697631 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.697807 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-x9s4f"] Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.697892 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.697933 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-wch75" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.698037 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.698098 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 
23:04:02.711437 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-zqq74"] Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.744689 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.766424 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.767942 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.771077 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec86efb1-7717-4690-8027-22c32dbf537d-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-x9s4f\" (UID: \"ec86efb1-7717-4690-8027-22c32dbf537d\") " pod="openstack/dnsmasq-dns-8b5c85b87-x9s4f" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.771112 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/45e13675-3a58-42c2-9236-eab676096763-horizon-secret-key\") pod \"horizon-7df6bbb45-chfb9\" (UID: \"45e13675-3a58-42c2-9236-eab676096763\") " pod="openstack/horizon-7df6bbb45-chfb9" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.771133 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b9e05ca-18a3-432e-858d-bf8e31853609-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6b9e05ca-18a3-432e-858d-bf8e31853609\") " pod="openstack/glance-default-external-api-0" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.771158 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/ec86efb1-7717-4690-8027-22c32dbf537d-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-x9s4f\" (UID: \"ec86efb1-7717-4690-8027-22c32dbf537d\") " pod="openstack/dnsmasq-dns-8b5c85b87-x9s4f" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.771182 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec86efb1-7717-4690-8027-22c32dbf537d-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-x9s4f\" (UID: \"ec86efb1-7717-4690-8027-22c32dbf537d\") " pod="openstack/dnsmasq-dns-8b5c85b87-x9s4f" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.771214 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6e46137-244f-44c7-ac8d-450c4e8e2fff-combined-ca-bundle\") pod \"neutron-db-sync-zqq74\" (UID: \"a6e46137-244f-44c7-ac8d-450c4e8e2fff\") " pod="openstack/neutron-db-sync-zqq74" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.771239 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fh8p6\" (UniqueName: \"kubernetes.io/projected/6b9e05ca-18a3-432e-858d-bf8e31853609-kube-api-access-fh8p6\") pod \"glance-default-external-api-0\" (UID: \"6b9e05ca-18a3-432e-858d-bf8e31853609\") " pod="openstack/glance-default-external-api-0" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.771333 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/45e13675-3a58-42c2-9236-eab676096763-scripts\") pod \"horizon-7df6bbb45-chfb9\" (UID: \"45e13675-3a58-42c2-9236-eab676096763\") " pod="openstack/horizon-7df6bbb45-chfb9" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.771368 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/6b9e05ca-18a3-432e-858d-bf8e31853609-logs\") pod \"glance-default-external-api-0\" (UID: \"6b9e05ca-18a3-432e-858d-bf8e31853609\") " pod="openstack/glance-default-external-api-0" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.771401 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b9e05ca-18a3-432e-858d-bf8e31853609-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"6b9e05ca-18a3-432e-858d-bf8e31853609\") " pod="openstack/glance-default-external-api-0" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.771422 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztnlf\" (UniqueName: \"kubernetes.io/projected/45e13675-3a58-42c2-9236-eab676096763-kube-api-access-ztnlf\") pod \"horizon-7df6bbb45-chfb9\" (UID: \"45e13675-3a58-42c2-9236-eab676096763\") " pod="openstack/horizon-7df6bbb45-chfb9" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.771438 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4t7b\" (UniqueName: \"kubernetes.io/projected/a6e46137-244f-44c7-ac8d-450c4e8e2fff-kube-api-access-x4t7b\") pod \"neutron-db-sync-zqq74\" (UID: \"a6e46137-244f-44c7-ac8d-450c4e8e2fff\") " pod="openstack/neutron-db-sync-zqq74" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.771457 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b9e05ca-18a3-432e-858d-bf8e31853609-scripts\") pod \"glance-default-external-api-0\" (UID: \"6b9e05ca-18a3-432e-858d-bf8e31853609\") " pod="openstack/glance-default-external-api-0" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.771474 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/6b9e05ca-18a3-432e-858d-bf8e31853609-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6b9e05ca-18a3-432e-858d-bf8e31853609\") " pod="openstack/glance-default-external-api-0" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.771494 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a6e46137-244f-44c7-ac8d-450c4e8e2fff-config\") pod \"neutron-db-sync-zqq74\" (UID: \"a6e46137-244f-44c7-ac8d-450c4e8e2fff\") " pod="openstack/neutron-db-sync-zqq74" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.771523 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/45e13675-3a58-42c2-9236-eab676096763-config-data\") pod \"horizon-7df6bbb45-chfb9\" (UID: \"45e13675-3a58-42c2-9236-eab676096763\") " pod="openstack/horizon-7df6bbb45-chfb9" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.771548 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"6b9e05ca-18a3-432e-858d-bf8e31853609\") " pod="openstack/glance-default-external-api-0" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.771569 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45e13675-3a58-42c2-9236-eab676096763-logs\") pod \"horizon-7df6bbb45-chfb9\" (UID: \"45e13675-3a58-42c2-9236-eab676096763\") " pod="openstack/horizon-7df6bbb45-chfb9" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.771586 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec86efb1-7717-4690-8027-22c32dbf537d-config\") pod \"dnsmasq-dns-8b5c85b87-x9s4f\" (UID: 
\"ec86efb1-7717-4690-8027-22c32dbf537d\") " pod="openstack/dnsmasq-dns-8b5c85b87-x9s4f" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.771604 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b9e05ca-18a3-432e-858d-bf8e31853609-config-data\") pod \"glance-default-external-api-0\" (UID: \"6b9e05ca-18a3-432e-858d-bf8e31853609\") " pod="openstack/glance-default-external-api-0" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.771625 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6l29\" (UniqueName: \"kubernetes.io/projected/ec86efb1-7717-4690-8027-22c32dbf537d-kube-api-access-k6l29\") pod \"dnsmasq-dns-8b5c85b87-x9s4f\" (UID: \"ec86efb1-7717-4690-8027-22c32dbf537d\") " pod="openstack/dnsmasq-dns-8b5c85b87-x9s4f" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.771640 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec86efb1-7717-4690-8027-22c32dbf537d-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-x9s4f\" (UID: \"ec86efb1-7717-4690-8027-22c32dbf537d\") " pod="openstack/dnsmasq-dns-8b5c85b87-x9s4f" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.772557 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec86efb1-7717-4690-8027-22c32dbf537d-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-x9s4f\" (UID: \"ec86efb1-7717-4690-8027-22c32dbf537d\") " pod="openstack/dnsmasq-dns-8b5c85b87-x9s4f" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.772675 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.773014 4865 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"glance-default-internal-config-data" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.773092 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec86efb1-7717-4690-8027-22c32dbf537d-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-x9s4f\" (UID: \"ec86efb1-7717-4690-8027-22c32dbf537d\") " pod="openstack/dnsmasq-dns-8b5c85b87-x9s4f" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.774431 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/45e13675-3a58-42c2-9236-eab676096763-config-data\") pod \"horizon-7df6bbb45-chfb9\" (UID: \"45e13675-3a58-42c2-9236-eab676096763\") " pod="openstack/horizon-7df6bbb45-chfb9" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.775228 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/45e13675-3a58-42c2-9236-eab676096763-scripts\") pod \"horizon-7df6bbb45-chfb9\" (UID: \"45e13675-3a58-42c2-9236-eab676096763\") " pod="openstack/horizon-7df6bbb45-chfb9" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.775375 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec86efb1-7717-4690-8027-22c32dbf537d-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-x9s4f\" (UID: \"ec86efb1-7717-4690-8027-22c32dbf537d\") " pod="openstack/dnsmasq-dns-8b5c85b87-x9s4f" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.775442 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ec86efb1-7717-4690-8027-22c32dbf537d-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-x9s4f\" (UID: \"ec86efb1-7717-4690-8027-22c32dbf537d\") " pod="openstack/dnsmasq-dns-8b5c85b87-x9s4f" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.776136 4865 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45e13675-3a58-42c2-9236-eab676096763-logs\") pod \"horizon-7df6bbb45-chfb9\" (UID: \"45e13675-3a58-42c2-9236-eab676096763\") " pod="openstack/horizon-7df6bbb45-chfb9" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.780652 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/45e13675-3a58-42c2-9236-eab676096763-horizon-secret-key\") pod \"horizon-7df6bbb45-chfb9\" (UID: \"45e13675-3a58-42c2-9236-eab676096763\") " pod="openstack/horizon-7df6bbb45-chfb9" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.780952 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec86efb1-7717-4690-8027-22c32dbf537d-config\") pod \"dnsmasq-dns-8b5c85b87-x9s4f\" (UID: \"ec86efb1-7717-4690-8027-22c32dbf537d\") " pod="openstack/dnsmasq-dns-8b5c85b87-x9s4f" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.800519 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.805983 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6l29\" (UniqueName: \"kubernetes.io/projected/ec86efb1-7717-4690-8027-22c32dbf537d-kube-api-access-k6l29\") pod \"dnsmasq-dns-8b5c85b87-x9s4f\" (UID: \"ec86efb1-7717-4690-8027-22c32dbf537d\") " pod="openstack/dnsmasq-dns-8b5c85b87-x9s4f" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.809499 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-t75c2" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.813757 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztnlf\" (UniqueName: \"kubernetes.io/projected/45e13675-3a58-42c2-9236-eab676096763-kube-api-access-ztnlf\") pod \"horizon-7df6bbb45-chfb9\" (UID: \"45e13675-3a58-42c2-9236-eab676096763\") " pod="openstack/horizon-7df6bbb45-chfb9" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.817424 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7ff5475cc9-rlxqr" podStartSLOduration=3.8174071400000003 podStartE2EDuration="3.81740714s" podCreationTimestamp="2026-02-16 23:03:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 23:04:02.696539062 +0000 UTC m=+1083.020246023" watchObservedRunningTime="2026-02-16 23:04:02.81740714 +0000 UTC m=+1083.141114101" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.874088 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fh8p6\" (UniqueName: \"kubernetes.io/projected/6b9e05ca-18a3-432e-858d-bf8e31853609-kube-api-access-fh8p6\") pod \"glance-default-external-api-0\" (UID: \"6b9e05ca-18a3-432e-858d-bf8e31853609\") " pod="openstack/glance-default-external-api-0" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.874142 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85cc071c-6522-4436-8ca0-45282b3438bb-scripts\") pod \"glance-default-internal-api-0\" (UID: \"85cc071c-6522-4436-8ca0-45282b3438bb\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.874174 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" 
(UniqueName: \"kubernetes.io/empty-dir/85cc071c-6522-4436-8ca0-45282b3438bb-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"85cc071c-6522-4436-8ca0-45282b3438bb\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.874232 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/85cc071c-6522-4436-8ca0-45282b3438bb-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"85cc071c-6522-4436-8ca0-45282b3438bb\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.874265 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b9e05ca-18a3-432e-858d-bf8e31853609-logs\") pod \"glance-default-external-api-0\" (UID: \"6b9e05ca-18a3-432e-858d-bf8e31853609\") " pod="openstack/glance-default-external-api-0" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.874315 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b9e05ca-18a3-432e-858d-bf8e31853609-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"6b9e05ca-18a3-432e-858d-bf8e31853609\") " pod="openstack/glance-default-external-api-0" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.874342 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4t7b\" (UniqueName: \"kubernetes.io/projected/a6e46137-244f-44c7-ac8d-450c4e8e2fff-kube-api-access-x4t7b\") pod \"neutron-db-sync-zqq74\" (UID: \"a6e46137-244f-44c7-ac8d-450c4e8e2fff\") " pod="openstack/neutron-db-sync-zqq74" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.874368 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/6b9e05ca-18a3-432e-858d-bf8e31853609-scripts\") pod \"glance-default-external-api-0\" (UID: \"6b9e05ca-18a3-432e-858d-bf8e31853609\") " pod="openstack/glance-default-external-api-0" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.874389 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6b9e05ca-18a3-432e-858d-bf8e31853609-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6b9e05ca-18a3-432e-858d-bf8e31853609\") " pod="openstack/glance-default-external-api-0" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.874409 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85cc071c-6522-4436-8ca0-45282b3438bb-config-data\") pod \"glance-default-internal-api-0\" (UID: \"85cc071c-6522-4436-8ca0-45282b3438bb\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.874441 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85cc071c-6522-4436-8ca0-45282b3438bb-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"85cc071c-6522-4436-8ca0-45282b3438bb\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.874466 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a6e46137-244f-44c7-ac8d-450c4e8e2fff-config\") pod \"neutron-db-sync-zqq74\" (UID: \"a6e46137-244f-44c7-ac8d-450c4e8e2fff\") " pod="openstack/neutron-db-sync-zqq74" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.874503 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"85cc071c-6522-4436-8ca0-45282b3438bb\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.874563 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"6b9e05ca-18a3-432e-858d-bf8e31853609\") " pod="openstack/glance-default-external-api-0" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.874592 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b9e05ca-18a3-432e-858d-bf8e31853609-config-data\") pod \"glance-default-external-api-0\" (UID: \"6b9e05ca-18a3-432e-858d-bf8e31853609\") " pod="openstack/glance-default-external-api-0" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.874636 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5dsm\" (UniqueName: \"kubernetes.io/projected/85cc071c-6522-4436-8ca0-45282b3438bb-kube-api-access-b5dsm\") pod \"glance-default-internal-api-0\" (UID: \"85cc071c-6522-4436-8ca0-45282b3438bb\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.874686 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85cc071c-6522-4436-8ca0-45282b3438bb-logs\") pod \"glance-default-internal-api-0\" (UID: \"85cc071c-6522-4436-8ca0-45282b3438bb\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.874736 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6b9e05ca-18a3-432e-858d-bf8e31853609-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6b9e05ca-18a3-432e-858d-bf8e31853609\") " pod="openstack/glance-default-external-api-0" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.874774 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6e46137-244f-44c7-ac8d-450c4e8e2fff-combined-ca-bundle\") pod \"neutron-db-sync-zqq74\" (UID: \"a6e46137-244f-44c7-ac8d-450c4e8e2fff\") " pod="openstack/neutron-db-sync-zqq74" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.875006 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b9e05ca-18a3-432e-858d-bf8e31853609-logs\") pod \"glance-default-external-api-0\" (UID: \"6b9e05ca-18a3-432e-858d-bf8e31853609\") " pod="openstack/glance-default-external-api-0" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.875786 4865 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"6b9e05ca-18a3-432e-858d-bf8e31853609\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-external-api-0" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.876875 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6b9e05ca-18a3-432e-858d-bf8e31853609-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6b9e05ca-18a3-432e-858d-bf8e31853609\") " pod="openstack/glance-default-external-api-0" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.880220 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6e46137-244f-44c7-ac8d-450c4e8e2fff-combined-ca-bundle\") pod 
\"neutron-db-sync-zqq74\" (UID: \"a6e46137-244f-44c7-ac8d-450c4e8e2fff\") " pod="openstack/neutron-db-sync-zqq74" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.888397 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b9e05ca-18a3-432e-858d-bf8e31853609-config-data\") pod \"glance-default-external-api-0\" (UID: \"6b9e05ca-18a3-432e-858d-bf8e31853609\") " pod="openstack/glance-default-external-api-0" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.888786 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/a6e46137-244f-44c7-ac8d-450c4e8e2fff-config\") pod \"neutron-db-sync-zqq74\" (UID: \"a6e46137-244f-44c7-ac8d-450c4e8e2fff\") " pod="openstack/neutron-db-sync-zqq74" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.888875 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b9e05ca-18a3-432e-858d-bf8e31853609-scripts\") pod \"glance-default-external-api-0\" (UID: \"6b9e05ca-18a3-432e-858d-bf8e31853609\") " pod="openstack/glance-default-external-api-0" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.894490 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b9e05ca-18a3-432e-858d-bf8e31853609-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6b9e05ca-18a3-432e-858d-bf8e31853609\") " pod="openstack/glance-default-external-api-0" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.898099 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fh8p6\" (UniqueName: \"kubernetes.io/projected/6b9e05ca-18a3-432e-858d-bf8e31853609-kube-api-access-fh8p6\") pod \"glance-default-external-api-0\" (UID: \"6b9e05ca-18a3-432e-858d-bf8e31853609\") " pod="openstack/glance-default-external-api-0" Feb 16 
23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.898337 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b9e05ca-18a3-432e-858d-bf8e31853609-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"6b9e05ca-18a3-432e-858d-bf8e31853609\") " pod="openstack/glance-default-external-api-0" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.899551 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-94wth" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.902969 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4t7b\" (UniqueName: \"kubernetes.io/projected/a6e46137-244f-44c7-ac8d-450c4e8e2fff-kube-api-access-x4t7b\") pod \"neutron-db-sync-zqq74\" (UID: \"a6e46137-244f-44c7-ac8d-450c4e8e2fff\") " pod="openstack/neutron-db-sync-zqq74" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.911602 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"6b9e05ca-18a3-432e-858d-bf8e31853609\") " pod="openstack/glance-default-external-api-0" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.977664 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85cc071c-6522-4436-8ca0-45282b3438bb-logs\") pod \"glance-default-internal-api-0\" (UID: \"85cc071c-6522-4436-8ca0-45282b3438bb\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.977740 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85cc071c-6522-4436-8ca0-45282b3438bb-scripts\") pod \"glance-default-internal-api-0\" (UID: \"85cc071c-6522-4436-8ca0-45282b3438bb\") " 
pod="openstack/glance-default-internal-api-0" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.977767 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/85cc071c-6522-4436-8ca0-45282b3438bb-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"85cc071c-6522-4436-8ca0-45282b3438bb\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.977801 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/85cc071c-6522-4436-8ca0-45282b3438bb-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"85cc071c-6522-4436-8ca0-45282b3438bb\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.977840 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85cc071c-6522-4436-8ca0-45282b3438bb-config-data\") pod \"glance-default-internal-api-0\" (UID: \"85cc071c-6522-4436-8ca0-45282b3438bb\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.977862 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85cc071c-6522-4436-8ca0-45282b3438bb-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"85cc071c-6522-4436-8ca0-45282b3438bb\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.977885 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"85cc071c-6522-4436-8ca0-45282b3438bb\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:04:03 crc 
kubenswrapper[4865]: I0216 23:04:02.977930 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5dsm\" (UniqueName: \"kubernetes.io/projected/85cc071c-6522-4436-8ca0-45282b3438bb-kube-api-access-b5dsm\") pod \"glance-default-internal-api-0\" (UID: \"85cc071c-6522-4436-8ca0-45282b3438bb\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.978683 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85cc071c-6522-4436-8ca0-45282b3438bb-logs\") pod \"glance-default-internal-api-0\" (UID: \"85cc071c-6522-4436-8ca0-45282b3438bb\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.979449 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/85cc071c-6522-4436-8ca0-45282b3438bb-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"85cc071c-6522-4436-8ca0-45282b3438bb\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.983559 4865 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"85cc071c-6522-4436-8ca0-45282b3438bb\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.983636 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85cc071c-6522-4436-8ca0-45282b3438bb-config-data\") pod \"glance-default-internal-api-0\" (UID: \"85cc071c-6522-4436-8ca0-45282b3438bb\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.986958 4865 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85cc071c-6522-4436-8ca0-45282b3438bb-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"85cc071c-6522-4436-8ca0-45282b3438bb\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.989831 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85cc071c-6522-4436-8ca0-45282b3438bb-scripts\") pod \"glance-default-internal-api-0\" (UID: \"85cc071c-6522-4436-8ca0-45282b3438bb\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.989990 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/85cc071c-6522-4436-8ca0-45282b3438bb-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"85cc071c-6522-4436-8ca0-45282b3438bb\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:02.997456 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-x9s4f" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:03.007779 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5dsm\" (UniqueName: \"kubernetes.io/projected/85cc071c-6522-4436-8ca0-45282b3438bb-kube-api-access-b5dsm\") pod \"glance-default-internal-api-0\" (UID: \"85cc071c-6522-4436-8ca0-45282b3438bb\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:03.032364 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"85cc071c-6522-4436-8ca0-45282b3438bb\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:03.044471 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7df6bbb45-chfb9" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:03.053092 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-zqq74" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:03.065295 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:03.114198 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:03.610413 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-p8nr2"] Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:03.652597 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff5475cc9-rlxqr" event={"ID":"d48ac9fe-7ab8-4dea-adcc-92447e575076","Type":"ContainerDied","Data":"e9f09b0b3de8fdb0ab95a86c9f603cfea5beaa2631946168091dc94774f6c73b"} Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:03.652314 4865 generic.go:334] "Generic (PLEG): container finished" podID="d48ac9fe-7ab8-4dea-adcc-92447e575076" containerID="e9f09b0b3de8fdb0ab95a86c9f603cfea5beaa2631946168091dc94774f6c73b" exitCode=0 Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:03.653394 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff5475cc9-rlxqr" event={"ID":"d48ac9fe-7ab8-4dea-adcc-92447e575076","Type":"ContainerDied","Data":"fca5048b4261963aa30d8dbe48dca3e69d76ed36227fb1a28db28c39f90879cd"} Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:03.653425 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fca5048b4261963aa30d8dbe48dca3e69d76ed36227fb1a28db28c39f90879cd" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:03.657124 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-p8nr2" event={"ID":"009885c7-6b9e-4c12-8cb2-bf2e660fa552","Type":"ContainerStarted","Data":"5905ad4870b0a0b9a03660ef449f51dae93c664b9bd4a09200fb38b9f591fab5"} Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:03.667213 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7ff5475cc9-rlxqr" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:03.799883 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d48ac9fe-7ab8-4dea-adcc-92447e575076-ovsdbserver-sb\") pod \"d48ac9fe-7ab8-4dea-adcc-92447e575076\" (UID: \"d48ac9fe-7ab8-4dea-adcc-92447e575076\") " Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:03.800230 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d48ac9fe-7ab8-4dea-adcc-92447e575076-dns-swift-storage-0\") pod \"d48ac9fe-7ab8-4dea-adcc-92447e575076\" (UID: \"d48ac9fe-7ab8-4dea-adcc-92447e575076\") " Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:03.800269 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvp22\" (UniqueName: \"kubernetes.io/projected/d48ac9fe-7ab8-4dea-adcc-92447e575076-kube-api-access-mvp22\") pod \"d48ac9fe-7ab8-4dea-adcc-92447e575076\" (UID: \"d48ac9fe-7ab8-4dea-adcc-92447e575076\") " Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:03.800341 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d48ac9fe-7ab8-4dea-adcc-92447e575076-config\") pod \"d48ac9fe-7ab8-4dea-adcc-92447e575076\" (UID: \"d48ac9fe-7ab8-4dea-adcc-92447e575076\") " Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:03.800445 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d48ac9fe-7ab8-4dea-adcc-92447e575076-dns-svc\") pod \"d48ac9fe-7ab8-4dea-adcc-92447e575076\" (UID: \"d48ac9fe-7ab8-4dea-adcc-92447e575076\") " Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:03.800471 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/d48ac9fe-7ab8-4dea-adcc-92447e575076-ovsdbserver-nb\") pod \"d48ac9fe-7ab8-4dea-adcc-92447e575076\" (UID: \"d48ac9fe-7ab8-4dea-adcc-92447e575076\") " Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:03.805176 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d48ac9fe-7ab8-4dea-adcc-92447e575076-kube-api-access-mvp22" (OuterVolumeSpecName: "kube-api-access-mvp22") pod "d48ac9fe-7ab8-4dea-adcc-92447e575076" (UID: "d48ac9fe-7ab8-4dea-adcc-92447e575076"). InnerVolumeSpecName "kube-api-access-mvp22". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:03.856562 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d48ac9fe-7ab8-4dea-adcc-92447e575076-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d48ac9fe-7ab8-4dea-adcc-92447e575076" (UID: "d48ac9fe-7ab8-4dea-adcc-92447e575076"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:03.857223 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d48ac9fe-7ab8-4dea-adcc-92447e575076-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d48ac9fe-7ab8-4dea-adcc-92447e575076" (UID: "d48ac9fe-7ab8-4dea-adcc-92447e575076"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:03.860546 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d48ac9fe-7ab8-4dea-adcc-92447e575076-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d48ac9fe-7ab8-4dea-adcc-92447e575076" (UID: "d48ac9fe-7ab8-4dea-adcc-92447e575076"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:03.867444 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d48ac9fe-7ab8-4dea-adcc-92447e575076-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d48ac9fe-7ab8-4dea-adcc-92447e575076" (UID: "d48ac9fe-7ab8-4dea-adcc-92447e575076"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:03.868177 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d48ac9fe-7ab8-4dea-adcc-92447e575076-config" (OuterVolumeSpecName: "config") pod "d48ac9fe-7ab8-4dea-adcc-92447e575076" (UID: "d48ac9fe-7ab8-4dea-adcc-92447e575076"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:03.906616 4865 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d48ac9fe-7ab8-4dea-adcc-92447e575076-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:03.906653 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mvp22\" (UniqueName: \"kubernetes.io/projected/d48ac9fe-7ab8-4dea-adcc-92447e575076-kube-api-access-mvp22\") on node \"crc\" DevicePath \"\"" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:03.906667 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d48ac9fe-7ab8-4dea-adcc-92447e575076-config\") on node \"crc\" DevicePath \"\"" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:03.906678 4865 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d48ac9fe-7ab8-4dea-adcc-92447e575076-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 23:04:03 
crc kubenswrapper[4865]: I0216 23:04:03.906686 4865 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d48ac9fe-7ab8-4dea-adcc-92447e575076-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 23:04:03 crc kubenswrapper[4865]: I0216 23:04:03.906697 4865 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d48ac9fe-7ab8-4dea-adcc-92447e575076-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 23:04:04 crc kubenswrapper[4865]: I0216 23:04:04.000871 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-x9s4f"] Feb 16 23:04:04 crc kubenswrapper[4865]: I0216 23:04:04.026165 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-zqq74"] Feb 16 23:04:04 crc kubenswrapper[4865]: I0216 23:04:04.040608 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-t75c2"] Feb 16 23:04:04 crc kubenswrapper[4865]: I0216 23:04:04.052778 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-bnhc2"] Feb 16 23:04:04 crc kubenswrapper[4865]: W0216 23:04:04.071208 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod058418b5_a677_4e8f_a37f_6dff4198f824.slice/crio-36544b7d0c34cb97b08416d2279ef853ef02a2519e1c3828b72deb816559f198 WatchSource:0}: Error finding container 36544b7d0c34cb97b08416d2279ef853ef02a2519e1c3828b72deb816559f198: Status 404 returned error can't find the container with id 36544b7d0c34cb97b08416d2279ef853ef02a2519e1c3828b72deb816559f198 Feb 16 23:04:04 crc kubenswrapper[4865]: I0216 23:04:04.087708 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 23:04:04 crc kubenswrapper[4865]: I0216 23:04:04.101007 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/barbican-db-sync-94wth"] Feb 16 23:04:04 crc kubenswrapper[4865]: I0216 23:04:04.121341 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-64454bf745-w42tv"] Feb 16 23:04:04 crc kubenswrapper[4865]: I0216 23:04:04.129388 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-g9mg8"] Feb 16 23:04:04 crc kubenswrapper[4865]: I0216 23:04:04.138766 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 23:04:04 crc kubenswrapper[4865]: W0216 23:04:04.160544 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod85cc071c_6522_4436_8ca0_45282b3438bb.slice/crio-e8c8cf307df194ea2436f1b9c66e4454d1a301a55d5fc9024e4b35b90a4b1a05 WatchSource:0}: Error finding container e8c8cf307df194ea2436f1b9c66e4454d1a301a55d5fc9024e4b35b90a4b1a05: Status 404 returned error can't find the container with id e8c8cf307df194ea2436f1b9c66e4454d1a301a55d5fc9024e4b35b90a4b1a05 Feb 16 23:04:04 crc kubenswrapper[4865]: I0216 23:04:04.163142 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7df6bbb45-chfb9"] Feb 16 23:04:04 crc kubenswrapper[4865]: W0216 23:04:04.195431 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod45e13675_3a58_42c2_9236_eab676096763.slice/crio-12a6e0b7f8b4b7e080ab29deb77180837ee2f2d809531019a6498ceab4c333c3 WatchSource:0}: Error finding container 12a6e0b7f8b4b7e080ab29deb77180837ee2f2d809531019a6498ceab4c333c3: Status 404 returned error can't find the container with id 12a6e0b7f8b4b7e080ab29deb77180837ee2f2d809531019a6498ceab4c333c3 Feb 16 23:04:04 crc kubenswrapper[4865]: I0216 23:04:04.675022 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7df6bbb45-chfb9" 
event={"ID":"45e13675-3a58-42c2-9236-eab676096763","Type":"ContainerStarted","Data":"12a6e0b7f8b4b7e080ab29deb77180837ee2f2d809531019a6498ceab4c333c3"} Feb 16 23:04:04 crc kubenswrapper[4865]: I0216 23:04:04.682089 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-64454bf745-w42tv" event={"ID":"058418b5-a677-4e8f-a37f-6dff4198f824","Type":"ContainerStarted","Data":"36544b7d0c34cb97b08416d2279ef853ef02a2519e1c3828b72deb816559f198"} Feb 16 23:04:04 crc kubenswrapper[4865]: I0216 23:04:04.687439 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-t75c2" event={"ID":"c180ab1a-1202-492c-ab2e-57c2232d8b64","Type":"ContainerStarted","Data":"536d639ba8777a6601b02934fc6f3471729975456267932158c16dd1fe2e1b5e"} Feb 16 23:04:04 crc kubenswrapper[4865]: I0216 23:04:04.691436 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-p8nr2" event={"ID":"009885c7-6b9e-4c12-8cb2-bf2e660fa552","Type":"ContainerStarted","Data":"5d9059253f4544b49c6fb0f695b380e1cce4e73273b4bf3d2eb40e7d7c17dbce"} Feb 16 23:04:04 crc kubenswrapper[4865]: I0216 23:04:04.696737 4865 generic.go:334] "Generic (PLEG): container finished" podID="52aa8cd0-148e-4a49-8741-4c1b1f13b940" containerID="0c17423711438834da2b26192b195ac138d4d10d233e343e6662a3d6a1d30e8e" exitCode=0 Feb 16 23:04:04 crc kubenswrapper[4865]: I0216 23:04:04.696806 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c5cc7c5ff-bnhc2" event={"ID":"52aa8cd0-148e-4a49-8741-4c1b1f13b940","Type":"ContainerDied","Data":"0c17423711438834da2b26192b195ac138d4d10d233e343e6662a3d6a1d30e8e"} Feb 16 23:04:04 crc kubenswrapper[4865]: I0216 23:04:04.696834 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c5cc7c5ff-bnhc2" event={"ID":"52aa8cd0-148e-4a49-8741-4c1b1f13b940","Type":"ContainerStarted","Data":"d1bc692a551163187dd9495889ac9324663467a621b5c11551cf6c5656e9092e"} Feb 16 23:04:04 crc 
kubenswrapper[4865]: I0216 23:04:04.707122 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"85cc071c-6522-4436-8ca0-45282b3438bb","Type":"ContainerStarted","Data":"e8c8cf307df194ea2436f1b9c66e4454d1a301a55d5fc9024e4b35b90a4b1a05"} Feb 16 23:04:04 crc kubenswrapper[4865]: I0216 23:04:04.726368 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-p8nr2" podStartSLOduration=3.72634717 podStartE2EDuration="3.72634717s" podCreationTimestamp="2026-02-16 23:04:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 23:04:04.71571628 +0000 UTC m=+1085.039423241" watchObservedRunningTime="2026-02-16 23:04:04.72634717 +0000 UTC m=+1085.050054141" Feb 16 23:04:04 crc kubenswrapper[4865]: I0216 23:04:04.749785 4865 generic.go:334] "Generic (PLEG): container finished" podID="ec86efb1-7717-4690-8027-22c32dbf537d" containerID="c0195ae75841604dab0267726a00fe1630550e1f2a8c533a223ce753953311bc" exitCode=0 Feb 16 23:04:04 crc kubenswrapper[4865]: I0216 23:04:04.751331 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-x9s4f" event={"ID":"ec86efb1-7717-4690-8027-22c32dbf537d","Type":"ContainerDied","Data":"c0195ae75841604dab0267726a00fe1630550e1f2a8c533a223ce753953311bc"} Feb 16 23:04:04 crc kubenswrapper[4865]: I0216 23:04:04.751500 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-x9s4f" event={"ID":"ec86efb1-7717-4690-8027-22c32dbf537d","Type":"ContainerStarted","Data":"d9288d446020957b32d06bf41ddc8da0f651457ae11307aac56822a3c99c291f"} Feb 16 23:04:04 crc kubenswrapper[4865]: I0216 23:04:04.789941 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-g9mg8" 
event={"ID":"c43aca4e-9612-43a8-8af2-5f32e4378af7","Type":"ContainerStarted","Data":"b54137dd2f0dbd134f97455d4059b9181371c6beffdd10a40ae4a00613c970c8"} Feb 16 23:04:04 crc kubenswrapper[4865]: I0216 23:04:04.796082 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1bbb6cce-272f-421d-a4f2-af006f112e21","Type":"ContainerStarted","Data":"0116c4f146be1dc6f311b6b2877c68df33b33b6134b395049455d8efbb00e4fa"} Feb 16 23:04:04 crc kubenswrapper[4865]: I0216 23:04:04.839339 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 23:04:04 crc kubenswrapper[4865]: I0216 23:04:04.841752 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-zqq74" event={"ID":"a6e46137-244f-44c7-ac8d-450c4e8e2fff","Type":"ContainerStarted","Data":"32a78a1a3f9db9c4a052781a3e0e4b04aa7ed029f695352d2b528b1e59891fa5"} Feb 16 23:04:04 crc kubenswrapper[4865]: I0216 23:04:04.841795 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-zqq74" event={"ID":"a6e46137-244f-44c7-ac8d-450c4e8e2fff","Type":"ContainerStarted","Data":"015a24ca021dd2c4d52fe27d377d2dcd707aec9497a3990946864b019de485d6"} Feb 16 23:04:04 crc kubenswrapper[4865]: I0216 23:04:04.859462 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-94wth" event={"ID":"68d6bce0-a0b1-485b-b3fc-6c47cd966129","Type":"ContainerStarted","Data":"0ac924b55bded256308bdb1353b4ef1c487446512ba224d9c1f0c5d5431d968e"} Feb 16 23:04:04 crc kubenswrapper[4865]: I0216 23:04:04.859429 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7ff5475cc9-rlxqr" Feb 16 23:04:04 crc kubenswrapper[4865]: I0216 23:04:04.941495 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-64454bf745-w42tv"] Feb 16 23:04:04 crc kubenswrapper[4865]: I0216 23:04:04.952205 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-zqq74" podStartSLOduration=2.9521847770000003 podStartE2EDuration="2.952184777s" podCreationTimestamp="2026-02-16 23:04:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 23:04:04.868802066 +0000 UTC m=+1085.192509027" watchObservedRunningTime="2026-02-16 23:04:04.952184777 +0000 UTC m=+1085.275891738" Feb 16 23:04:04 crc kubenswrapper[4865]: I0216 23:04:04.997792 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5f446879fc-6xxxh"] Feb 16 23:04:05 crc kubenswrapper[4865]: E0216 23:04:04.998175 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d48ac9fe-7ab8-4dea-adcc-92447e575076" containerName="init" Feb 16 23:04:05 crc kubenswrapper[4865]: I0216 23:04:04.998191 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="d48ac9fe-7ab8-4dea-adcc-92447e575076" containerName="init" Feb 16 23:04:05 crc kubenswrapper[4865]: E0216 23:04:04.998203 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d48ac9fe-7ab8-4dea-adcc-92447e575076" containerName="dnsmasq-dns" Feb 16 23:04:05 crc kubenswrapper[4865]: I0216 23:04:04.998209 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="d48ac9fe-7ab8-4dea-adcc-92447e575076" containerName="dnsmasq-dns" Feb 16 23:04:05 crc kubenswrapper[4865]: I0216 23:04:04.998402 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="d48ac9fe-7ab8-4dea-adcc-92447e575076" containerName="dnsmasq-dns" Feb 16 23:04:05 crc kubenswrapper[4865]: I0216 23:04:04.999557 4865 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5f446879fc-6xxxh" Feb 16 23:04:05 crc kubenswrapper[4865]: I0216 23:04:05.018322 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5f446879fc-6xxxh"] Feb 16 23:04:05 crc kubenswrapper[4865]: I0216 23:04:05.056359 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-rlxqr"] Feb 16 23:04:05 crc kubenswrapper[4865]: I0216 23:04:05.067556 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxb5k\" (UniqueName: \"kubernetes.io/projected/70db55c6-255b-4aab-8d14-2675be446bfb-kube-api-access-jxb5k\") pod \"horizon-5f446879fc-6xxxh\" (UID: \"70db55c6-255b-4aab-8d14-2675be446bfb\") " pod="openstack/horizon-5f446879fc-6xxxh" Feb 16 23:04:05 crc kubenswrapper[4865]: I0216 23:04:05.067627 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/70db55c6-255b-4aab-8d14-2675be446bfb-scripts\") pod \"horizon-5f446879fc-6xxxh\" (UID: \"70db55c6-255b-4aab-8d14-2675be446bfb\") " pod="openstack/horizon-5f446879fc-6xxxh" Feb 16 23:04:05 crc kubenswrapper[4865]: I0216 23:04:05.067651 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70db55c6-255b-4aab-8d14-2675be446bfb-logs\") pod \"horizon-5f446879fc-6xxxh\" (UID: \"70db55c6-255b-4aab-8d14-2675be446bfb\") " pod="openstack/horizon-5f446879fc-6xxxh" Feb 16 23:04:05 crc kubenswrapper[4865]: I0216 23:04:05.067681 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/70db55c6-255b-4aab-8d14-2675be446bfb-horizon-secret-key\") pod \"horizon-5f446879fc-6xxxh\" (UID: \"70db55c6-255b-4aab-8d14-2675be446bfb\") " 
pod="openstack/horizon-5f446879fc-6xxxh" Feb 16 23:04:05 crc kubenswrapper[4865]: I0216 23:04:05.067710 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/70db55c6-255b-4aab-8d14-2675be446bfb-config-data\") pod \"horizon-5f446879fc-6xxxh\" (UID: \"70db55c6-255b-4aab-8d14-2675be446bfb\") " pod="openstack/horizon-5f446879fc-6xxxh" Feb 16 23:04:05 crc kubenswrapper[4865]: I0216 23:04:05.094114 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-rlxqr"] Feb 16 23:04:05 crc kubenswrapper[4865]: I0216 23:04:05.113734 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 23:04:05 crc kubenswrapper[4865]: I0216 23:04:05.168994 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxb5k\" (UniqueName: \"kubernetes.io/projected/70db55c6-255b-4aab-8d14-2675be446bfb-kube-api-access-jxb5k\") pod \"horizon-5f446879fc-6xxxh\" (UID: \"70db55c6-255b-4aab-8d14-2675be446bfb\") " pod="openstack/horizon-5f446879fc-6xxxh" Feb 16 23:04:05 crc kubenswrapper[4865]: I0216 23:04:05.169060 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/70db55c6-255b-4aab-8d14-2675be446bfb-scripts\") pod \"horizon-5f446879fc-6xxxh\" (UID: \"70db55c6-255b-4aab-8d14-2675be446bfb\") " pod="openstack/horizon-5f446879fc-6xxxh" Feb 16 23:04:05 crc kubenswrapper[4865]: I0216 23:04:05.169085 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70db55c6-255b-4aab-8d14-2675be446bfb-logs\") pod \"horizon-5f446879fc-6xxxh\" (UID: \"70db55c6-255b-4aab-8d14-2675be446bfb\") " pod="openstack/horizon-5f446879fc-6xxxh" Feb 16 23:04:05 crc kubenswrapper[4865]: I0216 23:04:05.169104 4865 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/70db55c6-255b-4aab-8d14-2675be446bfb-horizon-secret-key\") pod \"horizon-5f446879fc-6xxxh\" (UID: \"70db55c6-255b-4aab-8d14-2675be446bfb\") " pod="openstack/horizon-5f446879fc-6xxxh" Feb 16 23:04:05 crc kubenswrapper[4865]: I0216 23:04:05.169132 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/70db55c6-255b-4aab-8d14-2675be446bfb-config-data\") pod \"horizon-5f446879fc-6xxxh\" (UID: \"70db55c6-255b-4aab-8d14-2675be446bfb\") " pod="openstack/horizon-5f446879fc-6xxxh" Feb 16 23:04:05 crc kubenswrapper[4865]: I0216 23:04:05.172034 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70db55c6-255b-4aab-8d14-2675be446bfb-logs\") pod \"horizon-5f446879fc-6xxxh\" (UID: \"70db55c6-255b-4aab-8d14-2675be446bfb\") " pod="openstack/horizon-5f446879fc-6xxxh" Feb 16 23:04:05 crc kubenswrapper[4865]: I0216 23:04:05.172374 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/70db55c6-255b-4aab-8d14-2675be446bfb-scripts\") pod \"horizon-5f446879fc-6xxxh\" (UID: \"70db55c6-255b-4aab-8d14-2675be446bfb\") " pod="openstack/horizon-5f446879fc-6xxxh" Feb 16 23:04:05 crc kubenswrapper[4865]: I0216 23:04:05.172750 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/70db55c6-255b-4aab-8d14-2675be446bfb-config-data\") pod \"horizon-5f446879fc-6xxxh\" (UID: \"70db55c6-255b-4aab-8d14-2675be446bfb\") " pod="openstack/horizon-5f446879fc-6xxxh" Feb 16 23:04:05 crc kubenswrapper[4865]: I0216 23:04:05.178710 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/70db55c6-255b-4aab-8d14-2675be446bfb-horizon-secret-key\") pod 
\"horizon-5f446879fc-6xxxh\" (UID: \"70db55c6-255b-4aab-8d14-2675be446bfb\") " pod="openstack/horizon-5f446879fc-6xxxh" Feb 16 23:04:05 crc kubenswrapper[4865]: I0216 23:04:05.181377 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 23:04:05 crc kubenswrapper[4865]: I0216 23:04:05.190729 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxb5k\" (UniqueName: \"kubernetes.io/projected/70db55c6-255b-4aab-8d14-2675be446bfb-kube-api-access-jxb5k\") pod \"horizon-5f446879fc-6xxxh\" (UID: \"70db55c6-255b-4aab-8d14-2675be446bfb\") " pod="openstack/horizon-5f446879fc-6xxxh" Feb 16 23:04:05 crc kubenswrapper[4865]: I0216 23:04:05.199508 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 23:04:05 crc kubenswrapper[4865]: I0216 23:04:05.319026 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c5cc7c5ff-bnhc2" Feb 16 23:04:05 crc kubenswrapper[4865]: I0216 23:04:05.326684 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5f446879fc-6xxxh" Feb 16 23:04:05 crc kubenswrapper[4865]: I0216 23:04:05.372004 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/52aa8cd0-148e-4a49-8741-4c1b1f13b940-ovsdbserver-sb\") pod \"52aa8cd0-148e-4a49-8741-4c1b1f13b940\" (UID: \"52aa8cd0-148e-4a49-8741-4c1b1f13b940\") " Feb 16 23:04:05 crc kubenswrapper[4865]: I0216 23:04:05.372073 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/52aa8cd0-148e-4a49-8741-4c1b1f13b940-dns-swift-storage-0\") pod \"52aa8cd0-148e-4a49-8741-4c1b1f13b940\" (UID: \"52aa8cd0-148e-4a49-8741-4c1b1f13b940\") " Feb 16 23:04:05 crc kubenswrapper[4865]: I0216 23:04:05.372422 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52aa8cd0-148e-4a49-8741-4c1b1f13b940-dns-svc\") pod \"52aa8cd0-148e-4a49-8741-4c1b1f13b940\" (UID: \"52aa8cd0-148e-4a49-8741-4c1b1f13b940\") " Feb 16 23:04:05 crc kubenswrapper[4865]: I0216 23:04:05.372467 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/52aa8cd0-148e-4a49-8741-4c1b1f13b940-ovsdbserver-nb\") pod \"52aa8cd0-148e-4a49-8741-4c1b1f13b940\" (UID: \"52aa8cd0-148e-4a49-8741-4c1b1f13b940\") " Feb 16 23:04:05 crc kubenswrapper[4865]: I0216 23:04:05.372553 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tgng9\" (UniqueName: \"kubernetes.io/projected/52aa8cd0-148e-4a49-8741-4c1b1f13b940-kube-api-access-tgng9\") pod \"52aa8cd0-148e-4a49-8741-4c1b1f13b940\" (UID: \"52aa8cd0-148e-4a49-8741-4c1b1f13b940\") " Feb 16 23:04:05 crc kubenswrapper[4865]: I0216 23:04:05.372609 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/52aa8cd0-148e-4a49-8741-4c1b1f13b940-config\") pod \"52aa8cd0-148e-4a49-8741-4c1b1f13b940\" (UID: \"52aa8cd0-148e-4a49-8741-4c1b1f13b940\") " Feb 16 23:04:05 crc kubenswrapper[4865]: I0216 23:04:05.379631 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52aa8cd0-148e-4a49-8741-4c1b1f13b940-kube-api-access-tgng9" (OuterVolumeSpecName: "kube-api-access-tgng9") pod "52aa8cd0-148e-4a49-8741-4c1b1f13b940" (UID: "52aa8cd0-148e-4a49-8741-4c1b1f13b940"). InnerVolumeSpecName "kube-api-access-tgng9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:04:05 crc kubenswrapper[4865]: I0216 23:04:05.400693 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52aa8cd0-148e-4a49-8741-4c1b1f13b940-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "52aa8cd0-148e-4a49-8741-4c1b1f13b940" (UID: "52aa8cd0-148e-4a49-8741-4c1b1f13b940"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:04:05 crc kubenswrapper[4865]: I0216 23:04:05.423633 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52aa8cd0-148e-4a49-8741-4c1b1f13b940-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "52aa8cd0-148e-4a49-8741-4c1b1f13b940" (UID: "52aa8cd0-148e-4a49-8741-4c1b1f13b940"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:04:05 crc kubenswrapper[4865]: I0216 23:04:05.467259 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52aa8cd0-148e-4a49-8741-4c1b1f13b940-config" (OuterVolumeSpecName: "config") pod "52aa8cd0-148e-4a49-8741-4c1b1f13b940" (UID: "52aa8cd0-148e-4a49-8741-4c1b1f13b940"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:04:05 crc kubenswrapper[4865]: I0216 23:04:05.470621 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52aa8cd0-148e-4a49-8741-4c1b1f13b940-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "52aa8cd0-148e-4a49-8741-4c1b1f13b940" (UID: "52aa8cd0-148e-4a49-8741-4c1b1f13b940"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:04:05 crc kubenswrapper[4865]: I0216 23:04:05.474801 4865 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/52aa8cd0-148e-4a49-8741-4c1b1f13b940-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 23:04:05 crc kubenswrapper[4865]: I0216 23:04:05.474826 4865 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/52aa8cd0-148e-4a49-8741-4c1b1f13b940-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 23:04:05 crc kubenswrapper[4865]: I0216 23:04:05.474838 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tgng9\" (UniqueName: \"kubernetes.io/projected/52aa8cd0-148e-4a49-8741-4c1b1f13b940-kube-api-access-tgng9\") on node \"crc\" DevicePath \"\"" Feb 16 23:04:05 crc kubenswrapper[4865]: I0216 23:04:05.474848 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52aa8cd0-148e-4a49-8741-4c1b1f13b940-config\") on node \"crc\" DevicePath \"\"" Feb 16 23:04:05 crc kubenswrapper[4865]: I0216 23:04:05.474858 4865 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/52aa8cd0-148e-4a49-8741-4c1b1f13b940-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 23:04:05 crc kubenswrapper[4865]: I0216 23:04:05.483494 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/52aa8cd0-148e-4a49-8741-4c1b1f13b940-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "52aa8cd0-148e-4a49-8741-4c1b1f13b940" (UID: "52aa8cd0-148e-4a49-8741-4c1b1f13b940"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:04:05 crc kubenswrapper[4865]: I0216 23:04:05.581415 4865 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52aa8cd0-148e-4a49-8741-4c1b1f13b940-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 23:04:05 crc kubenswrapper[4865]: I0216 23:04:05.873007 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"85cc071c-6522-4436-8ca0-45282b3438bb","Type":"ContainerStarted","Data":"6a76488056e06b9585db1b4ac09f7763179363799267413a639287e62b3ea98a"} Feb 16 23:04:05 crc kubenswrapper[4865]: I0216 23:04:05.891150 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-x9s4f" event={"ID":"ec86efb1-7717-4690-8027-22c32dbf537d","Type":"ContainerStarted","Data":"83995ed9fe223b2ca1973aa15dfdff36fbfaf13dc1e94d5b313edaa76afacd5d"} Feb 16 23:04:05 crc kubenswrapper[4865]: I0216 23:04:05.891312 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8b5c85b87-x9s4f" Feb 16 23:04:05 crc kubenswrapper[4865]: I0216 23:04:05.898691 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6b9e05ca-18a3-432e-858d-bf8e31853609","Type":"ContainerStarted","Data":"1a719480495ecad07605de0cd5e26727e0fd310087d55984e9ccfca548bad4f6"} Feb 16 23:04:05 crc kubenswrapper[4865]: I0216 23:04:05.909244 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c5cc7c5ff-bnhc2" Feb 16 23:04:05 crc kubenswrapper[4865]: I0216 23:04:05.910580 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c5cc7c5ff-bnhc2" event={"ID":"52aa8cd0-148e-4a49-8741-4c1b1f13b940","Type":"ContainerDied","Data":"d1bc692a551163187dd9495889ac9324663467a621b5c11551cf6c5656e9092e"} Feb 16 23:04:05 crc kubenswrapper[4865]: I0216 23:04:05.910628 4865 scope.go:117] "RemoveContainer" containerID="0c17423711438834da2b26192b195ac138d4d10d233e343e6662a3d6a1d30e8e" Feb 16 23:04:05 crc kubenswrapper[4865]: I0216 23:04:05.922896 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5f446879fc-6xxxh"] Feb 16 23:04:05 crc kubenswrapper[4865]: I0216 23:04:05.930106 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8b5c85b87-x9s4f" podStartSLOduration=3.930088778 podStartE2EDuration="3.930088778s" podCreationTimestamp="2026-02-16 23:04:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 23:04:05.923005429 +0000 UTC m=+1086.246712400" watchObservedRunningTime="2026-02-16 23:04:05.930088778 +0000 UTC m=+1086.253795729" Feb 16 23:04:05 crc kubenswrapper[4865]: I0216 23:04:05.981593 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-bnhc2"] Feb 16 23:04:05 crc kubenswrapper[4865]: I0216 23:04:05.991268 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-bnhc2"] Feb 16 23:04:06 crc kubenswrapper[4865]: I0216 23:04:06.430097 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52aa8cd0-148e-4a49-8741-4c1b1f13b940" path="/var/lib/kubelet/pods/52aa8cd0-148e-4a49-8741-4c1b1f13b940/volumes" Feb 16 23:04:06 crc kubenswrapper[4865]: I0216 23:04:06.430916 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="d48ac9fe-7ab8-4dea-adcc-92447e575076" path="/var/lib/kubelet/pods/d48ac9fe-7ab8-4dea-adcc-92447e575076/volumes" Feb 16 23:04:06 crc kubenswrapper[4865]: I0216 23:04:06.918679 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"85cc071c-6522-4436-8ca0-45282b3438bb","Type":"ContainerStarted","Data":"bb0e7b74cb33f07fe3ebc3e3afef3508d2d43c3f5bc9e9bf86744bbe239b1ec8"} Feb 16 23:04:06 crc kubenswrapper[4865]: I0216 23:04:06.918790 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="85cc071c-6522-4436-8ca0-45282b3438bb" containerName="glance-log" containerID="cri-o://6a76488056e06b9585db1b4ac09f7763179363799267413a639287e62b3ea98a" gracePeriod=30 Feb 16 23:04:06 crc kubenswrapper[4865]: I0216 23:04:06.918858 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="85cc071c-6522-4436-8ca0-45282b3438bb" containerName="glance-httpd" containerID="cri-o://bb0e7b74cb33f07fe3ebc3e3afef3508d2d43c3f5bc9e9bf86744bbe239b1ec8" gracePeriod=30 Feb 16 23:04:06 crc kubenswrapper[4865]: I0216 23:04:06.920381 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5f446879fc-6xxxh" event={"ID":"70db55c6-255b-4aab-8d14-2675be446bfb","Type":"ContainerStarted","Data":"7fec2ae4ece50499758b15aa28d390aa7f97ebfaeaf98120d3872d2c60771be0"} Feb 16 23:04:06 crc kubenswrapper[4865]: I0216 23:04:06.923487 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6b9e05ca-18a3-432e-858d-bf8e31853609","Type":"ContainerStarted","Data":"7144389fd30208655c18a14f80449c10bc0a17723877ead9264f1674d6b56851"} Feb 16 23:04:06 crc kubenswrapper[4865]: I0216 23:04:06.964443 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.964420119 
podStartE2EDuration="4.964420119s" podCreationTimestamp="2026-02-16 23:04:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 23:04:06.947821571 +0000 UTC m=+1087.271528532" watchObservedRunningTime="2026-02-16 23:04:06.964420119 +0000 UTC m=+1087.288127080" Feb 16 23:04:07 crc kubenswrapper[4865]: I0216 23:04:07.743106 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 23:04:07 crc kubenswrapper[4865]: I0216 23:04:07.852974 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85cc071c-6522-4436-8ca0-45282b3438bb-combined-ca-bundle\") pod \"85cc071c-6522-4436-8ca0-45282b3438bb\" (UID: \"85cc071c-6522-4436-8ca0-45282b3438bb\") " Feb 16 23:04:07 crc kubenswrapper[4865]: I0216 23:04:07.853270 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/85cc071c-6522-4436-8ca0-45282b3438bb-httpd-run\") pod \"85cc071c-6522-4436-8ca0-45282b3438bb\" (UID: \"85cc071c-6522-4436-8ca0-45282b3438bb\") " Feb 16 23:04:07 crc kubenswrapper[4865]: I0216 23:04:07.853342 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85cc071c-6522-4436-8ca0-45282b3438bb-config-data\") pod \"85cc071c-6522-4436-8ca0-45282b3438bb\" (UID: \"85cc071c-6522-4436-8ca0-45282b3438bb\") " Feb 16 23:04:07 crc kubenswrapper[4865]: I0216 23:04:07.853426 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"85cc071c-6522-4436-8ca0-45282b3438bb\" (UID: \"85cc071c-6522-4436-8ca0-45282b3438bb\") " Feb 16 23:04:07 crc kubenswrapper[4865]: I0216 23:04:07.853447 4865 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/85cc071c-6522-4436-8ca0-45282b3438bb-internal-tls-certs\") pod \"85cc071c-6522-4436-8ca0-45282b3438bb\" (UID: \"85cc071c-6522-4436-8ca0-45282b3438bb\") " Feb 16 23:04:07 crc kubenswrapper[4865]: I0216 23:04:07.853607 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5dsm\" (UniqueName: \"kubernetes.io/projected/85cc071c-6522-4436-8ca0-45282b3438bb-kube-api-access-b5dsm\") pod \"85cc071c-6522-4436-8ca0-45282b3438bb\" (UID: \"85cc071c-6522-4436-8ca0-45282b3438bb\") " Feb 16 23:04:07 crc kubenswrapper[4865]: I0216 23:04:07.853630 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85cc071c-6522-4436-8ca0-45282b3438bb-logs\") pod \"85cc071c-6522-4436-8ca0-45282b3438bb\" (UID: \"85cc071c-6522-4436-8ca0-45282b3438bb\") " Feb 16 23:04:07 crc kubenswrapper[4865]: I0216 23:04:07.853650 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85cc071c-6522-4436-8ca0-45282b3438bb-scripts\") pod \"85cc071c-6522-4436-8ca0-45282b3438bb\" (UID: \"85cc071c-6522-4436-8ca0-45282b3438bb\") " Feb 16 23:04:07 crc kubenswrapper[4865]: I0216 23:04:07.854729 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85cc071c-6522-4436-8ca0-45282b3438bb-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "85cc071c-6522-4436-8ca0-45282b3438bb" (UID: "85cc071c-6522-4436-8ca0-45282b3438bb"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:04:07 crc kubenswrapper[4865]: I0216 23:04:07.855194 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85cc071c-6522-4436-8ca0-45282b3438bb-logs" (OuterVolumeSpecName: "logs") pod "85cc071c-6522-4436-8ca0-45282b3438bb" (UID: "85cc071c-6522-4436-8ca0-45282b3438bb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:04:07 crc kubenswrapper[4865]: I0216 23:04:07.860028 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "85cc071c-6522-4436-8ca0-45282b3438bb" (UID: "85cc071c-6522-4436-8ca0-45282b3438bb"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 16 23:04:07 crc kubenswrapper[4865]: I0216 23:04:07.860042 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85cc071c-6522-4436-8ca0-45282b3438bb-scripts" (OuterVolumeSpecName: "scripts") pod "85cc071c-6522-4436-8ca0-45282b3438bb" (UID: "85cc071c-6522-4436-8ca0-45282b3438bb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:04:07 crc kubenswrapper[4865]: I0216 23:04:07.878579 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85cc071c-6522-4436-8ca0-45282b3438bb-kube-api-access-b5dsm" (OuterVolumeSpecName: "kube-api-access-b5dsm") pod "85cc071c-6522-4436-8ca0-45282b3438bb" (UID: "85cc071c-6522-4436-8ca0-45282b3438bb"). InnerVolumeSpecName "kube-api-access-b5dsm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:04:07 crc kubenswrapper[4865]: I0216 23:04:07.885544 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85cc071c-6522-4436-8ca0-45282b3438bb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "85cc071c-6522-4436-8ca0-45282b3438bb" (UID: "85cc071c-6522-4436-8ca0-45282b3438bb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:04:07 crc kubenswrapper[4865]: I0216 23:04:07.915258 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85cc071c-6522-4436-8ca0-45282b3438bb-config-data" (OuterVolumeSpecName: "config-data") pod "85cc071c-6522-4436-8ca0-45282b3438bb" (UID: "85cc071c-6522-4436-8ca0-45282b3438bb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:04:07 crc kubenswrapper[4865]: I0216 23:04:07.941110 4865 generic.go:334] "Generic (PLEG): container finished" podID="85cc071c-6522-4436-8ca0-45282b3438bb" containerID="bb0e7b74cb33f07fe3ebc3e3afef3508d2d43c3f5bc9e9bf86744bbe239b1ec8" exitCode=0 Feb 16 23:04:07 crc kubenswrapper[4865]: I0216 23:04:07.941153 4865 generic.go:334] "Generic (PLEG): container finished" podID="85cc071c-6522-4436-8ca0-45282b3438bb" containerID="6a76488056e06b9585db1b4ac09f7763179363799267413a639287e62b3ea98a" exitCode=143 Feb 16 23:04:07 crc kubenswrapper[4865]: I0216 23:04:07.941217 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"85cc071c-6522-4436-8ca0-45282b3438bb","Type":"ContainerDied","Data":"bb0e7b74cb33f07fe3ebc3e3afef3508d2d43c3f5bc9e9bf86744bbe239b1ec8"} Feb 16 23:04:07 crc kubenswrapper[4865]: I0216 23:04:07.941252 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"85cc071c-6522-4436-8ca0-45282b3438bb","Type":"ContainerDied","Data":"6a76488056e06b9585db1b4ac09f7763179363799267413a639287e62b3ea98a"} Feb 16 23:04:07 crc kubenswrapper[4865]: I0216 23:04:07.941267 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"85cc071c-6522-4436-8ca0-45282b3438bb","Type":"ContainerDied","Data":"e8c8cf307df194ea2436f1b9c66e4454d1a301a55d5fc9024e4b35b90a4b1a05"} Feb 16 23:04:07 crc kubenswrapper[4865]: I0216 23:04:07.941291 4865 scope.go:117] "RemoveContainer" containerID="bb0e7b74cb33f07fe3ebc3e3afef3508d2d43c3f5bc9e9bf86744bbe239b1ec8" Feb 16 23:04:07 crc kubenswrapper[4865]: I0216 23:04:07.941446 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 23:04:07 crc kubenswrapper[4865]: I0216 23:04:07.944226 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85cc071c-6522-4436-8ca0-45282b3438bb-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "85cc071c-6522-4436-8ca0-45282b3438bb" (UID: "85cc071c-6522-4436-8ca0-45282b3438bb"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:04:07 crc kubenswrapper[4865]: I0216 23:04:07.945245 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6b9e05ca-18a3-432e-858d-bf8e31853609","Type":"ContainerStarted","Data":"2e6e93dc6167c7be44fdb8310b68b3451d0044e07b5d4fc87aa189a5bf306c48"} Feb 16 23:04:07 crc kubenswrapper[4865]: I0216 23:04:07.945484 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="6b9e05ca-18a3-432e-858d-bf8e31853609" containerName="glance-log" containerID="cri-o://7144389fd30208655c18a14f80449c10bc0a17723877ead9264f1674d6b56851" gracePeriod=30 Feb 16 23:04:07 crc kubenswrapper[4865]: I0216 23:04:07.945722 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="6b9e05ca-18a3-432e-858d-bf8e31853609" containerName="glance-httpd" containerID="cri-o://2e6e93dc6167c7be44fdb8310b68b3451d0044e07b5d4fc87aa189a5bf306c48" gracePeriod=30 Feb 16 23:04:07 crc kubenswrapper[4865]: I0216 23:04:07.956495 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85cc071c-6522-4436-8ca0-45282b3438bb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 23:04:07 crc kubenswrapper[4865]: I0216 23:04:07.956543 4865 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/85cc071c-6522-4436-8ca0-45282b3438bb-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 16 23:04:07 crc kubenswrapper[4865]: I0216 23:04:07.956556 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85cc071c-6522-4436-8ca0-45282b3438bb-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 23:04:07 crc kubenswrapper[4865]: I0216 23:04:07.956598 4865 reconciler_common.go:286] 
"operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Feb 16 23:04:07 crc kubenswrapper[4865]: I0216 23:04:07.956613 4865 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/85cc071c-6522-4436-8ca0-45282b3438bb-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 23:04:07 crc kubenswrapper[4865]: I0216 23:04:07.956630 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5dsm\" (UniqueName: \"kubernetes.io/projected/85cc071c-6522-4436-8ca0-45282b3438bb-kube-api-access-b5dsm\") on node \"crc\" DevicePath \"\"" Feb 16 23:04:07 crc kubenswrapper[4865]: I0216 23:04:07.956644 4865 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85cc071c-6522-4436-8ca0-45282b3438bb-logs\") on node \"crc\" DevicePath \"\"" Feb 16 23:04:07 crc kubenswrapper[4865]: I0216 23:04:07.956655 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85cc071c-6522-4436-8ca0-45282b3438bb-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 23:04:07 crc kubenswrapper[4865]: I0216 23:04:07.978679 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.978660845 podStartE2EDuration="5.978660845s" podCreationTimestamp="2026-02-16 23:04:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 23:04:07.967733037 +0000 UTC m=+1088.291440008" watchObservedRunningTime="2026-02-16 23:04:07.978660845 +0000 UTC m=+1088.302367806" Feb 16 23:04:07 crc kubenswrapper[4865]: I0216 23:04:07.985606 4865 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") 
on node "crc" Feb 16 23:04:08 crc kubenswrapper[4865]: I0216 23:04:08.058994 4865 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Feb 16 23:04:08 crc kubenswrapper[4865]: E0216 23:04:08.264940 4865 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6b9e05ca_18a3_432e_858d_bf8e31853609.slice/crio-conmon-2e6e93dc6167c7be44fdb8310b68b3451d0044e07b5d4fc87aa189a5bf306c48.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6b9e05ca_18a3_432e_858d_bf8e31853609.slice/crio-2e6e93dc6167c7be44fdb8310b68b3451d0044e07b5d4fc87aa189a5bf306c48.scope\": RecentStats: unable to find data in memory cache]" Feb 16 23:04:08 crc kubenswrapper[4865]: I0216 23:04:08.313345 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 23:04:08 crc kubenswrapper[4865]: I0216 23:04:08.343933 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 23:04:08 crc kubenswrapper[4865]: I0216 23:04:08.355987 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 23:04:08 crc kubenswrapper[4865]: E0216 23:04:08.356736 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85cc071c-6522-4436-8ca0-45282b3438bb" containerName="glance-httpd" Feb 16 23:04:08 crc kubenswrapper[4865]: I0216 23:04:08.356759 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="85cc071c-6522-4436-8ca0-45282b3438bb" containerName="glance-httpd" Feb 16 23:04:08 crc kubenswrapper[4865]: E0216 23:04:08.356781 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85cc071c-6522-4436-8ca0-45282b3438bb" 
containerName="glance-log" Feb 16 23:04:08 crc kubenswrapper[4865]: I0216 23:04:08.356792 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="85cc071c-6522-4436-8ca0-45282b3438bb" containerName="glance-log" Feb 16 23:04:08 crc kubenswrapper[4865]: E0216 23:04:08.356817 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52aa8cd0-148e-4a49-8741-4c1b1f13b940" containerName="init" Feb 16 23:04:08 crc kubenswrapper[4865]: I0216 23:04:08.356825 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="52aa8cd0-148e-4a49-8741-4c1b1f13b940" containerName="init" Feb 16 23:04:08 crc kubenswrapper[4865]: I0216 23:04:08.357018 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="52aa8cd0-148e-4a49-8741-4c1b1f13b940" containerName="init" Feb 16 23:04:08 crc kubenswrapper[4865]: I0216 23:04:08.357037 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="85cc071c-6522-4436-8ca0-45282b3438bb" containerName="glance-httpd" Feb 16 23:04:08 crc kubenswrapper[4865]: I0216 23:04:08.357051 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="85cc071c-6522-4436-8ca0-45282b3438bb" containerName="glance-log" Feb 16 23:04:08 crc kubenswrapper[4865]: I0216 23:04:08.365330 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 23:04:08 crc kubenswrapper[4865]: I0216 23:04:08.366143 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 23:04:08 crc kubenswrapper[4865]: I0216 23:04:08.367706 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 16 23:04:08 crc kubenswrapper[4865]: I0216 23:04:08.370437 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 16 23:04:08 crc kubenswrapper[4865]: I0216 23:04:08.447713 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85cc071c-6522-4436-8ca0-45282b3438bb" path="/var/lib/kubelet/pods/85cc071c-6522-4436-8ca0-45282b3438bb/volumes" Feb 16 23:04:08 crc kubenswrapper[4865]: I0216 23:04:08.475922 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"b3d4660e-fca1-47c9-bb64-7ac074d97085\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:04:08 crc kubenswrapper[4865]: I0216 23:04:08.475985 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3d4660e-fca1-47c9-bb64-7ac074d97085-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b3d4660e-fca1-47c9-bb64-7ac074d97085\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:04:08 crc kubenswrapper[4865]: I0216 23:04:08.476081 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3d4660e-fca1-47c9-bb64-7ac074d97085-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b3d4660e-fca1-47c9-bb64-7ac074d97085\") " 
pod="openstack/glance-default-internal-api-0" Feb 16 23:04:08 crc kubenswrapper[4865]: I0216 23:04:08.476109 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3d4660e-fca1-47c9-bb64-7ac074d97085-logs\") pod \"glance-default-internal-api-0\" (UID: \"b3d4660e-fca1-47c9-bb64-7ac074d97085\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:04:08 crc kubenswrapper[4865]: I0216 23:04:08.476248 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b3d4660e-fca1-47c9-bb64-7ac074d97085-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b3d4660e-fca1-47c9-bb64-7ac074d97085\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:04:08 crc kubenswrapper[4865]: I0216 23:04:08.483094 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3d4660e-fca1-47c9-bb64-7ac074d97085-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b3d4660e-fca1-47c9-bb64-7ac074d97085\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:04:08 crc kubenswrapper[4865]: I0216 23:04:08.483187 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69dc7\" (UniqueName: \"kubernetes.io/projected/b3d4660e-fca1-47c9-bb64-7ac074d97085-kube-api-access-69dc7\") pod \"glance-default-internal-api-0\" (UID: \"b3d4660e-fca1-47c9-bb64-7ac074d97085\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:04:08 crc kubenswrapper[4865]: I0216 23:04:08.483354 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3d4660e-fca1-47c9-bb64-7ac074d97085-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: 
\"b3d4660e-fca1-47c9-bb64-7ac074d97085\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:04:08 crc kubenswrapper[4865]: I0216 23:04:08.584875 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3d4660e-fca1-47c9-bb64-7ac074d97085-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b3d4660e-fca1-47c9-bb64-7ac074d97085\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:04:08 crc kubenswrapper[4865]: I0216 23:04:08.584921 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3d4660e-fca1-47c9-bb64-7ac074d97085-logs\") pod \"glance-default-internal-api-0\" (UID: \"b3d4660e-fca1-47c9-bb64-7ac074d97085\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:04:08 crc kubenswrapper[4865]: I0216 23:04:08.584966 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b3d4660e-fca1-47c9-bb64-7ac074d97085-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b3d4660e-fca1-47c9-bb64-7ac074d97085\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:04:08 crc kubenswrapper[4865]: I0216 23:04:08.585527 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3d4660e-fca1-47c9-bb64-7ac074d97085-logs\") pod \"glance-default-internal-api-0\" (UID: \"b3d4660e-fca1-47c9-bb64-7ac074d97085\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:04:08 crc kubenswrapper[4865]: I0216 23:04:08.585586 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3d4660e-fca1-47c9-bb64-7ac074d97085-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b3d4660e-fca1-47c9-bb64-7ac074d97085\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:04:08 
crc kubenswrapper[4865]: I0216 23:04:08.585626 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69dc7\" (UniqueName: \"kubernetes.io/projected/b3d4660e-fca1-47c9-bb64-7ac074d97085-kube-api-access-69dc7\") pod \"glance-default-internal-api-0\" (UID: \"b3d4660e-fca1-47c9-bb64-7ac074d97085\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:04:08 crc kubenswrapper[4865]: I0216 23:04:08.585674 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3d4660e-fca1-47c9-bb64-7ac074d97085-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b3d4660e-fca1-47c9-bb64-7ac074d97085\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:04:08 crc kubenswrapper[4865]: I0216 23:04:08.585735 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"b3d4660e-fca1-47c9-bb64-7ac074d97085\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:04:08 crc kubenswrapper[4865]: I0216 23:04:08.585768 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3d4660e-fca1-47c9-bb64-7ac074d97085-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b3d4660e-fca1-47c9-bb64-7ac074d97085\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:04:08 crc kubenswrapper[4865]: I0216 23:04:08.585982 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b3d4660e-fca1-47c9-bb64-7ac074d97085-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b3d4660e-fca1-47c9-bb64-7ac074d97085\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:04:08 crc kubenswrapper[4865]: I0216 23:04:08.587463 4865 operation_generator.go:580] 
"MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"b3d4660e-fca1-47c9-bb64-7ac074d97085\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0" Feb 16 23:04:08 crc kubenswrapper[4865]: I0216 23:04:08.590892 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3d4660e-fca1-47c9-bb64-7ac074d97085-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b3d4660e-fca1-47c9-bb64-7ac074d97085\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:04:08 crc kubenswrapper[4865]: I0216 23:04:08.591795 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3d4660e-fca1-47c9-bb64-7ac074d97085-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b3d4660e-fca1-47c9-bb64-7ac074d97085\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:04:08 crc kubenswrapper[4865]: I0216 23:04:08.591943 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3d4660e-fca1-47c9-bb64-7ac074d97085-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b3d4660e-fca1-47c9-bb64-7ac074d97085\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:04:08 crc kubenswrapper[4865]: I0216 23:04:08.602091 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69dc7\" (UniqueName: \"kubernetes.io/projected/b3d4660e-fca1-47c9-bb64-7ac074d97085-kube-api-access-69dc7\") pod \"glance-default-internal-api-0\" (UID: \"b3d4660e-fca1-47c9-bb64-7ac074d97085\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:04:08 crc kubenswrapper[4865]: I0216 23:04:08.604873 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/b3d4660e-fca1-47c9-bb64-7ac074d97085-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b3d4660e-fca1-47c9-bb64-7ac074d97085\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:04:08 crc kubenswrapper[4865]: I0216 23:04:08.624654 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"b3d4660e-fca1-47c9-bb64-7ac074d97085\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:04:08 crc kubenswrapper[4865]: I0216 23:04:08.707940 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 23:04:08 crc kubenswrapper[4865]: I0216 23:04:08.956388 4865 generic.go:334] "Generic (PLEG): container finished" podID="6b9e05ca-18a3-432e-858d-bf8e31853609" containerID="2e6e93dc6167c7be44fdb8310b68b3451d0044e07b5d4fc87aa189a5bf306c48" exitCode=0 Feb 16 23:04:08 crc kubenswrapper[4865]: I0216 23:04:08.956423 4865 generic.go:334] "Generic (PLEG): container finished" podID="6b9e05ca-18a3-432e-858d-bf8e31853609" containerID="7144389fd30208655c18a14f80449c10bc0a17723877ead9264f1674d6b56851" exitCode=143 Feb 16 23:04:08 crc kubenswrapper[4865]: I0216 23:04:08.956485 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6b9e05ca-18a3-432e-858d-bf8e31853609","Type":"ContainerDied","Data":"2e6e93dc6167c7be44fdb8310b68b3451d0044e07b5d4fc87aa189a5bf306c48"} Feb 16 23:04:08 crc kubenswrapper[4865]: I0216 23:04:08.956538 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6b9e05ca-18a3-432e-858d-bf8e31853609","Type":"ContainerDied","Data":"7144389fd30208655c18a14f80449c10bc0a17723877ead9264f1674d6b56851"} Feb 16 23:04:08 crc kubenswrapper[4865]: I0216 23:04:08.960155 4865 generic.go:334] 
"Generic (PLEG): container finished" podID="009885c7-6b9e-4c12-8cb2-bf2e660fa552" containerID="5d9059253f4544b49c6fb0f695b380e1cce4e73273b4bf3d2eb40e7d7c17dbce" exitCode=0 Feb 16 23:04:08 crc kubenswrapper[4865]: I0216 23:04:08.960222 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-p8nr2" event={"ID":"009885c7-6b9e-4c12-8cb2-bf2e660fa552","Type":"ContainerDied","Data":"5d9059253f4544b49c6fb0f695b380e1cce4e73273b4bf3d2eb40e7d7c17dbce"} Feb 16 23:04:11 crc kubenswrapper[4865]: I0216 23:04:11.101237 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7df6bbb45-chfb9"] Feb 16 23:04:11 crc kubenswrapper[4865]: I0216 23:04:11.129453 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5dbb7f8956-m76fk"] Feb 16 23:04:11 crc kubenswrapper[4865]: I0216 23:04:11.131437 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5dbb7f8956-m76fk" Feb 16 23:04:11 crc kubenswrapper[4865]: I0216 23:04:11.137004 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Feb 16 23:04:11 crc kubenswrapper[4865]: I0216 23:04:11.147408 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5dbb7f8956-m76fk"] Feb 16 23:04:11 crc kubenswrapper[4865]: I0216 23:04:11.241479 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3-config-data\") pod \"horizon-5dbb7f8956-m76fk\" (UID: \"cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3\") " pod="openstack/horizon-5dbb7f8956-m76fk" Feb 16 23:04:11 crc kubenswrapper[4865]: I0216 23:04:11.241550 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3-horizon-tls-certs\") pod 
\"horizon-5dbb7f8956-m76fk\" (UID: \"cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3\") " pod="openstack/horizon-5dbb7f8956-m76fk" Feb 16 23:04:11 crc kubenswrapper[4865]: I0216 23:04:11.241573 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3-logs\") pod \"horizon-5dbb7f8956-m76fk\" (UID: \"cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3\") " pod="openstack/horizon-5dbb7f8956-m76fk" Feb 16 23:04:11 crc kubenswrapper[4865]: I0216 23:04:11.243831 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxpwf\" (UniqueName: \"kubernetes.io/projected/cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3-kube-api-access-jxpwf\") pod \"horizon-5dbb7f8956-m76fk\" (UID: \"cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3\") " pod="openstack/horizon-5dbb7f8956-m76fk" Feb 16 23:04:11 crc kubenswrapper[4865]: I0216 23:04:11.244637 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3-combined-ca-bundle\") pod \"horizon-5dbb7f8956-m76fk\" (UID: \"cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3\") " pod="openstack/horizon-5dbb7f8956-m76fk" Feb 16 23:04:11 crc kubenswrapper[4865]: I0216 23:04:11.244934 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3-scripts\") pod \"horizon-5dbb7f8956-m76fk\" (UID: \"cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3\") " pod="openstack/horizon-5dbb7f8956-m76fk" Feb 16 23:04:11 crc kubenswrapper[4865]: I0216 23:04:11.245009 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3-horizon-secret-key\") pod 
\"horizon-5dbb7f8956-m76fk\" (UID: \"cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3\") " pod="openstack/horizon-5dbb7f8956-m76fk" Feb 16 23:04:11 crc kubenswrapper[4865]: I0216 23:04:11.258198 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5f446879fc-6xxxh"] Feb 16 23:04:11 crc kubenswrapper[4865]: I0216 23:04:11.268496 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 23:04:11 crc kubenswrapper[4865]: I0216 23:04:11.307724 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7ff854866d-9gv97"] Feb 16 23:04:11 crc kubenswrapper[4865]: I0216 23:04:11.309525 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7ff854866d-9gv97" Feb 16 23:04:11 crc kubenswrapper[4865]: I0216 23:04:11.332254 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7ff854866d-9gv97"] Feb 16 23:04:11 crc kubenswrapper[4865]: I0216 23:04:11.346589 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3-combined-ca-bundle\") pod \"horizon-5dbb7f8956-m76fk\" (UID: \"cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3\") " pod="openstack/horizon-5dbb7f8956-m76fk" Feb 16 23:04:11 crc kubenswrapper[4865]: I0216 23:04:11.346678 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3-scripts\") pod \"horizon-5dbb7f8956-m76fk\" (UID: \"cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3\") " pod="openstack/horizon-5dbb7f8956-m76fk" Feb 16 23:04:11 crc kubenswrapper[4865]: I0216 23:04:11.346705 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3-horizon-secret-key\") pod \"horizon-5dbb7f8956-m76fk\" (UID: 
\"cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3\") " pod="openstack/horizon-5dbb7f8956-m76fk" Feb 16 23:04:11 crc kubenswrapper[4865]: I0216 23:04:11.346735 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3-config-data\") pod \"horizon-5dbb7f8956-m76fk\" (UID: \"cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3\") " pod="openstack/horizon-5dbb7f8956-m76fk" Feb 16 23:04:11 crc kubenswrapper[4865]: I0216 23:04:11.346758 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3-horizon-tls-certs\") pod \"horizon-5dbb7f8956-m76fk\" (UID: \"cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3\") " pod="openstack/horizon-5dbb7f8956-m76fk" Feb 16 23:04:11 crc kubenswrapper[4865]: I0216 23:04:11.346777 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3-logs\") pod \"horizon-5dbb7f8956-m76fk\" (UID: \"cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3\") " pod="openstack/horizon-5dbb7f8956-m76fk" Feb 16 23:04:11 crc kubenswrapper[4865]: I0216 23:04:11.346801 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxpwf\" (UniqueName: \"kubernetes.io/projected/cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3-kube-api-access-jxpwf\") pod \"horizon-5dbb7f8956-m76fk\" (UID: \"cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3\") " pod="openstack/horizon-5dbb7f8956-m76fk" Feb 16 23:04:11 crc kubenswrapper[4865]: I0216 23:04:11.347862 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3-logs\") pod \"horizon-5dbb7f8956-m76fk\" (UID: \"cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3\") " pod="openstack/horizon-5dbb7f8956-m76fk" Feb 16 23:04:11 crc kubenswrapper[4865]: I0216 
23:04:11.348322 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3-config-data\") pod \"horizon-5dbb7f8956-m76fk\" (UID: \"cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3\") " pod="openstack/horizon-5dbb7f8956-m76fk" Feb 16 23:04:11 crc kubenswrapper[4865]: I0216 23:04:11.348423 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3-scripts\") pod \"horizon-5dbb7f8956-m76fk\" (UID: \"cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3\") " pod="openstack/horizon-5dbb7f8956-m76fk" Feb 16 23:04:11 crc kubenswrapper[4865]: I0216 23:04:11.354090 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3-horizon-secret-key\") pod \"horizon-5dbb7f8956-m76fk\" (UID: \"cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3\") " pod="openstack/horizon-5dbb7f8956-m76fk" Feb 16 23:04:11 crc kubenswrapper[4865]: I0216 23:04:11.354687 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3-horizon-tls-certs\") pod \"horizon-5dbb7f8956-m76fk\" (UID: \"cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3\") " pod="openstack/horizon-5dbb7f8956-m76fk" Feb 16 23:04:11 crc kubenswrapper[4865]: I0216 23:04:11.365207 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxpwf\" (UniqueName: \"kubernetes.io/projected/cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3-kube-api-access-jxpwf\") pod \"horizon-5dbb7f8956-m76fk\" (UID: \"cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3\") " pod="openstack/horizon-5dbb7f8956-m76fk" Feb 16 23:04:11 crc kubenswrapper[4865]: I0216 23:04:11.372139 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3-combined-ca-bundle\") pod \"horizon-5dbb7f8956-m76fk\" (UID: \"cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3\") " pod="openstack/horizon-5dbb7f8956-m76fk" Feb 16 23:04:11 crc kubenswrapper[4865]: I0216 23:04:11.448388 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/17473ac0-4cf6-4751-ac0e-fd73a9fc2c7a-config-data\") pod \"horizon-7ff854866d-9gv97\" (UID: \"17473ac0-4cf6-4751-ac0e-fd73a9fc2c7a\") " pod="openstack/horizon-7ff854866d-9gv97" Feb 16 23:04:11 crc kubenswrapper[4865]: I0216 23:04:11.448685 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/17473ac0-4cf6-4751-ac0e-fd73a9fc2c7a-horizon-secret-key\") pod \"horizon-7ff854866d-9gv97\" (UID: \"17473ac0-4cf6-4751-ac0e-fd73a9fc2c7a\") " pod="openstack/horizon-7ff854866d-9gv97" Feb 16 23:04:11 crc kubenswrapper[4865]: I0216 23:04:11.448748 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17473ac0-4cf6-4751-ac0e-fd73a9fc2c7a-combined-ca-bundle\") pod \"horizon-7ff854866d-9gv97\" (UID: \"17473ac0-4cf6-4751-ac0e-fd73a9fc2c7a\") " pod="openstack/horizon-7ff854866d-9gv97" Feb 16 23:04:11 crc kubenswrapper[4865]: I0216 23:04:11.448768 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/17473ac0-4cf6-4751-ac0e-fd73a9fc2c7a-scripts\") pod \"horizon-7ff854866d-9gv97\" (UID: \"17473ac0-4cf6-4751-ac0e-fd73a9fc2c7a\") " pod="openstack/horizon-7ff854866d-9gv97" Feb 16 23:04:11 crc kubenswrapper[4865]: I0216 23:04:11.448881 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/17473ac0-4cf6-4751-ac0e-fd73a9fc2c7a-horizon-tls-certs\") pod \"horizon-7ff854866d-9gv97\" (UID: \"17473ac0-4cf6-4751-ac0e-fd73a9fc2c7a\") " pod="openstack/horizon-7ff854866d-9gv97" Feb 16 23:04:11 crc kubenswrapper[4865]: I0216 23:04:11.449086 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17473ac0-4cf6-4751-ac0e-fd73a9fc2c7a-logs\") pod \"horizon-7ff854866d-9gv97\" (UID: \"17473ac0-4cf6-4751-ac0e-fd73a9fc2c7a\") " pod="openstack/horizon-7ff854866d-9gv97" Feb 16 23:04:11 crc kubenswrapper[4865]: I0216 23:04:11.449146 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t74bz\" (UniqueName: \"kubernetes.io/projected/17473ac0-4cf6-4751-ac0e-fd73a9fc2c7a-kube-api-access-t74bz\") pod \"horizon-7ff854866d-9gv97\" (UID: \"17473ac0-4cf6-4751-ac0e-fd73a9fc2c7a\") " pod="openstack/horizon-7ff854866d-9gv97" Feb 16 23:04:11 crc kubenswrapper[4865]: I0216 23:04:11.481351 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5dbb7f8956-m76fk" Feb 16 23:04:11 crc kubenswrapper[4865]: I0216 23:04:11.551513 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17473ac0-4cf6-4751-ac0e-fd73a9fc2c7a-logs\") pod \"horizon-7ff854866d-9gv97\" (UID: \"17473ac0-4cf6-4751-ac0e-fd73a9fc2c7a\") " pod="openstack/horizon-7ff854866d-9gv97" Feb 16 23:04:11 crc kubenswrapper[4865]: I0216 23:04:11.551617 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t74bz\" (UniqueName: \"kubernetes.io/projected/17473ac0-4cf6-4751-ac0e-fd73a9fc2c7a-kube-api-access-t74bz\") pod \"horizon-7ff854866d-9gv97\" (UID: \"17473ac0-4cf6-4751-ac0e-fd73a9fc2c7a\") " pod="openstack/horizon-7ff854866d-9gv97" Feb 16 23:04:11 crc kubenswrapper[4865]: I0216 23:04:11.551772 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/17473ac0-4cf6-4751-ac0e-fd73a9fc2c7a-config-data\") pod \"horizon-7ff854866d-9gv97\" (UID: \"17473ac0-4cf6-4751-ac0e-fd73a9fc2c7a\") " pod="openstack/horizon-7ff854866d-9gv97" Feb 16 23:04:11 crc kubenswrapper[4865]: I0216 23:04:11.551845 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/17473ac0-4cf6-4751-ac0e-fd73a9fc2c7a-horizon-secret-key\") pod \"horizon-7ff854866d-9gv97\" (UID: \"17473ac0-4cf6-4751-ac0e-fd73a9fc2c7a\") " pod="openstack/horizon-7ff854866d-9gv97" Feb 16 23:04:11 crc kubenswrapper[4865]: I0216 23:04:11.551906 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17473ac0-4cf6-4751-ac0e-fd73a9fc2c7a-combined-ca-bundle\") pod \"horizon-7ff854866d-9gv97\" (UID: \"17473ac0-4cf6-4751-ac0e-fd73a9fc2c7a\") " pod="openstack/horizon-7ff854866d-9gv97" Feb 16 23:04:11 crc 
kubenswrapper[4865]: I0216 23:04:11.551933 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/17473ac0-4cf6-4751-ac0e-fd73a9fc2c7a-scripts\") pod \"horizon-7ff854866d-9gv97\" (UID: \"17473ac0-4cf6-4751-ac0e-fd73a9fc2c7a\") " pod="openstack/horizon-7ff854866d-9gv97" Feb 16 23:04:11 crc kubenswrapper[4865]: I0216 23:04:11.551964 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/17473ac0-4cf6-4751-ac0e-fd73a9fc2c7a-horizon-tls-certs\") pod \"horizon-7ff854866d-9gv97\" (UID: \"17473ac0-4cf6-4751-ac0e-fd73a9fc2c7a\") " pod="openstack/horizon-7ff854866d-9gv97" Feb 16 23:04:11 crc kubenswrapper[4865]: I0216 23:04:11.551983 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17473ac0-4cf6-4751-ac0e-fd73a9fc2c7a-logs\") pod \"horizon-7ff854866d-9gv97\" (UID: \"17473ac0-4cf6-4751-ac0e-fd73a9fc2c7a\") " pod="openstack/horizon-7ff854866d-9gv97" Feb 16 23:04:11 crc kubenswrapper[4865]: I0216 23:04:11.553612 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/17473ac0-4cf6-4751-ac0e-fd73a9fc2c7a-scripts\") pod \"horizon-7ff854866d-9gv97\" (UID: \"17473ac0-4cf6-4751-ac0e-fd73a9fc2c7a\") " pod="openstack/horizon-7ff854866d-9gv97" Feb 16 23:04:11 crc kubenswrapper[4865]: I0216 23:04:11.554333 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/17473ac0-4cf6-4751-ac0e-fd73a9fc2c7a-config-data\") pod \"horizon-7ff854866d-9gv97\" (UID: \"17473ac0-4cf6-4751-ac0e-fd73a9fc2c7a\") " pod="openstack/horizon-7ff854866d-9gv97" Feb 16 23:04:11 crc kubenswrapper[4865]: I0216 23:04:11.557009 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/17473ac0-4cf6-4751-ac0e-fd73a9fc2c7a-horizon-tls-certs\") pod \"horizon-7ff854866d-9gv97\" (UID: \"17473ac0-4cf6-4751-ac0e-fd73a9fc2c7a\") " pod="openstack/horizon-7ff854866d-9gv97" Feb 16 23:04:11 crc kubenswrapper[4865]: I0216 23:04:11.557279 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17473ac0-4cf6-4751-ac0e-fd73a9fc2c7a-combined-ca-bundle\") pod \"horizon-7ff854866d-9gv97\" (UID: \"17473ac0-4cf6-4751-ac0e-fd73a9fc2c7a\") " pod="openstack/horizon-7ff854866d-9gv97" Feb 16 23:04:11 crc kubenswrapper[4865]: I0216 23:04:11.559783 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/17473ac0-4cf6-4751-ac0e-fd73a9fc2c7a-horizon-secret-key\") pod \"horizon-7ff854866d-9gv97\" (UID: \"17473ac0-4cf6-4751-ac0e-fd73a9fc2c7a\") " pod="openstack/horizon-7ff854866d-9gv97" Feb 16 23:04:11 crc kubenswrapper[4865]: I0216 23:04:11.568321 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t74bz\" (UniqueName: \"kubernetes.io/projected/17473ac0-4cf6-4751-ac0e-fd73a9fc2c7a-kube-api-access-t74bz\") pod \"horizon-7ff854866d-9gv97\" (UID: \"17473ac0-4cf6-4751-ac0e-fd73a9fc2c7a\") " pod="openstack/horizon-7ff854866d-9gv97" Feb 16 23:04:11 crc kubenswrapper[4865]: I0216 23:04:11.628248 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7ff854866d-9gv97" Feb 16 23:04:13 crc kubenswrapper[4865]: I0216 23:04:13.000484 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8b5c85b87-x9s4f" Feb 16 23:04:13 crc kubenswrapper[4865]: I0216 23:04:13.066641 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-cmrhq"] Feb 16 23:04:13 crc kubenswrapper[4865]: I0216 23:04:13.066863 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b946c75cc-cmrhq" podUID="c8dcb5fa-0ce4-4337-99f4-71317a6cb50a" containerName="dnsmasq-dns" containerID="cri-o://723cf77b5d28b547b58bce73488a13ab6681ba903e51af345e10635b9efe79c0" gracePeriod=10 Feb 16 23:04:14 crc kubenswrapper[4865]: I0216 23:04:14.019203 4865 generic.go:334] "Generic (PLEG): container finished" podID="c8dcb5fa-0ce4-4337-99f4-71317a6cb50a" containerID="723cf77b5d28b547b58bce73488a13ab6681ba903e51af345e10635b9efe79c0" exitCode=0 Feb 16 23:04:14 crc kubenswrapper[4865]: I0216 23:04:14.019257 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-cmrhq" event={"ID":"c8dcb5fa-0ce4-4337-99f4-71317a6cb50a","Type":"ContainerDied","Data":"723cf77b5d28b547b58bce73488a13ab6681ba903e51af345e10635b9efe79c0"} Feb 16 23:04:15 crc kubenswrapper[4865]: I0216 23:04:15.664526 4865 patch_prober.go:28] interesting pod/machine-config-daemon-7sl6f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 23:04:15 crc kubenswrapper[4865]: I0216 23:04:15.664774 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 23:04:16 crc kubenswrapper[4865]: I0216 23:04:16.710826 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5b946c75cc-cmrhq" podUID="c8dcb5fa-0ce4-4337-99f4-71317a6cb50a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.126:5353: connect: connection refused" Feb 16 23:04:21 crc kubenswrapper[4865]: I0216 23:04:21.092022 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-p8nr2" event={"ID":"009885c7-6b9e-4c12-8cb2-bf2e660fa552","Type":"ContainerDied","Data":"5905ad4870b0a0b9a03660ef449f51dae93c664b9bd4a09200fb38b9f591fab5"} Feb 16 23:04:21 crc kubenswrapper[4865]: I0216 23:04:21.092604 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5905ad4870b0a0b9a03660ef449f51dae93c664b9bd4a09200fb38b9f591fab5" Feb 16 23:04:21 crc kubenswrapper[4865]: I0216 23:04:21.139346 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-p8nr2" Feb 16 23:04:21 crc kubenswrapper[4865]: I0216 23:04:21.250688 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/009885c7-6b9e-4c12-8cb2-bf2e660fa552-combined-ca-bundle\") pod \"009885c7-6b9e-4c12-8cb2-bf2e660fa552\" (UID: \"009885c7-6b9e-4c12-8cb2-bf2e660fa552\") " Feb 16 23:04:21 crc kubenswrapper[4865]: I0216 23:04:21.250777 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/009885c7-6b9e-4c12-8cb2-bf2e660fa552-config-data\") pod \"009885c7-6b9e-4c12-8cb2-bf2e660fa552\" (UID: \"009885c7-6b9e-4c12-8cb2-bf2e660fa552\") " Feb 16 23:04:21 crc kubenswrapper[4865]: I0216 23:04:21.251013 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/009885c7-6b9e-4c12-8cb2-bf2e660fa552-fernet-keys\") pod \"009885c7-6b9e-4c12-8cb2-bf2e660fa552\" (UID: \"009885c7-6b9e-4c12-8cb2-bf2e660fa552\") " Feb 16 23:04:21 crc kubenswrapper[4865]: I0216 23:04:21.251132 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/009885c7-6b9e-4c12-8cb2-bf2e660fa552-scripts\") pod \"009885c7-6b9e-4c12-8cb2-bf2e660fa552\" (UID: \"009885c7-6b9e-4c12-8cb2-bf2e660fa552\") " Feb 16 23:04:21 crc kubenswrapper[4865]: I0216 23:04:21.251189 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/009885c7-6b9e-4c12-8cb2-bf2e660fa552-credential-keys\") pod \"009885c7-6b9e-4c12-8cb2-bf2e660fa552\" (UID: \"009885c7-6b9e-4c12-8cb2-bf2e660fa552\") " Feb 16 23:04:21 crc kubenswrapper[4865]: I0216 23:04:21.251279 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwmdr\" (UniqueName: 
\"kubernetes.io/projected/009885c7-6b9e-4c12-8cb2-bf2e660fa552-kube-api-access-dwmdr\") pod \"009885c7-6b9e-4c12-8cb2-bf2e660fa552\" (UID: \"009885c7-6b9e-4c12-8cb2-bf2e660fa552\") " Feb 16 23:04:21 crc kubenswrapper[4865]: I0216 23:04:21.258777 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/009885c7-6b9e-4c12-8cb2-bf2e660fa552-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "009885c7-6b9e-4c12-8cb2-bf2e660fa552" (UID: "009885c7-6b9e-4c12-8cb2-bf2e660fa552"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:04:21 crc kubenswrapper[4865]: I0216 23:04:21.259048 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/009885c7-6b9e-4c12-8cb2-bf2e660fa552-kube-api-access-dwmdr" (OuterVolumeSpecName: "kube-api-access-dwmdr") pod "009885c7-6b9e-4c12-8cb2-bf2e660fa552" (UID: "009885c7-6b9e-4c12-8cb2-bf2e660fa552"). InnerVolumeSpecName "kube-api-access-dwmdr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:04:21 crc kubenswrapper[4865]: I0216 23:04:21.261927 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/009885c7-6b9e-4c12-8cb2-bf2e660fa552-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "009885c7-6b9e-4c12-8cb2-bf2e660fa552" (UID: "009885c7-6b9e-4c12-8cb2-bf2e660fa552"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:04:21 crc kubenswrapper[4865]: I0216 23:04:21.270403 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/009885c7-6b9e-4c12-8cb2-bf2e660fa552-scripts" (OuterVolumeSpecName: "scripts") pod "009885c7-6b9e-4c12-8cb2-bf2e660fa552" (UID: "009885c7-6b9e-4c12-8cb2-bf2e660fa552"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:04:21 crc kubenswrapper[4865]: I0216 23:04:21.290031 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/009885c7-6b9e-4c12-8cb2-bf2e660fa552-config-data" (OuterVolumeSpecName: "config-data") pod "009885c7-6b9e-4c12-8cb2-bf2e660fa552" (UID: "009885c7-6b9e-4c12-8cb2-bf2e660fa552"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:04:21 crc kubenswrapper[4865]: I0216 23:04:21.291487 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/009885c7-6b9e-4c12-8cb2-bf2e660fa552-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "009885c7-6b9e-4c12-8cb2-bf2e660fa552" (UID: "009885c7-6b9e-4c12-8cb2-bf2e660fa552"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:04:21 crc kubenswrapper[4865]: I0216 23:04:21.353830 4865 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/009885c7-6b9e-4c12-8cb2-bf2e660fa552-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 16 23:04:21 crc kubenswrapper[4865]: I0216 23:04:21.353877 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/009885c7-6b9e-4c12-8cb2-bf2e660fa552-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 23:04:21 crc kubenswrapper[4865]: I0216 23:04:21.353889 4865 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/009885c7-6b9e-4c12-8cb2-bf2e660fa552-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 16 23:04:21 crc kubenswrapper[4865]: I0216 23:04:21.353905 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwmdr\" (UniqueName: \"kubernetes.io/projected/009885c7-6b9e-4c12-8cb2-bf2e660fa552-kube-api-access-dwmdr\") on node \"crc\" DevicePath \"\"" Feb 16 
23:04:21 crc kubenswrapper[4865]: I0216 23:04:21.353919 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/009885c7-6b9e-4c12-8cb2-bf2e660fa552-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 23:04:21 crc kubenswrapper[4865]: I0216 23:04:21.353929 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/009885c7-6b9e-4c12-8cb2-bf2e660fa552-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 23:04:22 crc kubenswrapper[4865]: I0216 23:04:22.100787 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-p8nr2" Feb 16 23:04:22 crc kubenswrapper[4865]: I0216 23:04:22.285728 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-p8nr2"] Feb 16 23:04:22 crc kubenswrapper[4865]: I0216 23:04:22.295112 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-p8nr2"] Feb 16 23:04:22 crc kubenswrapper[4865]: I0216 23:04:22.358805 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-ww67n"] Feb 16 23:04:22 crc kubenswrapper[4865]: E0216 23:04:22.359158 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="009885c7-6b9e-4c12-8cb2-bf2e660fa552" containerName="keystone-bootstrap" Feb 16 23:04:22 crc kubenswrapper[4865]: I0216 23:04:22.359174 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="009885c7-6b9e-4c12-8cb2-bf2e660fa552" containerName="keystone-bootstrap" Feb 16 23:04:22 crc kubenswrapper[4865]: I0216 23:04:22.359361 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="009885c7-6b9e-4c12-8cb2-bf2e660fa552" containerName="keystone-bootstrap" Feb 16 23:04:22 crc kubenswrapper[4865]: I0216 23:04:22.359925 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-ww67n" Feb 16 23:04:22 crc kubenswrapper[4865]: I0216 23:04:22.362110 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 16 23:04:22 crc kubenswrapper[4865]: I0216 23:04:22.362463 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 16 23:04:22 crc kubenswrapper[4865]: I0216 23:04:22.362641 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 16 23:04:22 crc kubenswrapper[4865]: I0216 23:04:22.362947 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-sj5dj" Feb 16 23:04:22 crc kubenswrapper[4865]: I0216 23:04:22.374624 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 16 23:04:22 crc kubenswrapper[4865]: I0216 23:04:22.397196 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-ww67n"] Feb 16 23:04:22 crc kubenswrapper[4865]: I0216 23:04:22.432078 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="009885c7-6b9e-4c12-8cb2-bf2e660fa552" path="/var/lib/kubelet/pods/009885c7-6b9e-4c12-8cb2-bf2e660fa552/volumes" Feb 16 23:04:22 crc kubenswrapper[4865]: I0216 23:04:22.474697 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvg8x\" (UniqueName: \"kubernetes.io/projected/6d0a70da-4482-4ab9-8503-c324267212fa-kube-api-access-zvg8x\") pod \"keystone-bootstrap-ww67n\" (UID: \"6d0a70da-4482-4ab9-8503-c324267212fa\") " pod="openstack/keystone-bootstrap-ww67n" Feb 16 23:04:22 crc kubenswrapper[4865]: I0216 23:04:22.474781 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d0a70da-4482-4ab9-8503-c324267212fa-scripts\") pod \"keystone-bootstrap-ww67n\" (UID: 
\"6d0a70da-4482-4ab9-8503-c324267212fa\") " pod="openstack/keystone-bootstrap-ww67n" Feb 16 23:04:22 crc kubenswrapper[4865]: I0216 23:04:22.474805 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6d0a70da-4482-4ab9-8503-c324267212fa-fernet-keys\") pod \"keystone-bootstrap-ww67n\" (UID: \"6d0a70da-4482-4ab9-8503-c324267212fa\") " pod="openstack/keystone-bootstrap-ww67n" Feb 16 23:04:22 crc kubenswrapper[4865]: I0216 23:04:22.475409 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6d0a70da-4482-4ab9-8503-c324267212fa-credential-keys\") pod \"keystone-bootstrap-ww67n\" (UID: \"6d0a70da-4482-4ab9-8503-c324267212fa\") " pod="openstack/keystone-bootstrap-ww67n" Feb 16 23:04:22 crc kubenswrapper[4865]: I0216 23:04:22.475464 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d0a70da-4482-4ab9-8503-c324267212fa-config-data\") pod \"keystone-bootstrap-ww67n\" (UID: \"6d0a70da-4482-4ab9-8503-c324267212fa\") " pod="openstack/keystone-bootstrap-ww67n" Feb 16 23:04:22 crc kubenswrapper[4865]: I0216 23:04:22.475530 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d0a70da-4482-4ab9-8503-c324267212fa-combined-ca-bundle\") pod \"keystone-bootstrap-ww67n\" (UID: \"6d0a70da-4482-4ab9-8503-c324267212fa\") " pod="openstack/keystone-bootstrap-ww67n" Feb 16 23:04:22 crc kubenswrapper[4865]: I0216 23:04:22.577591 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d0a70da-4482-4ab9-8503-c324267212fa-config-data\") pod \"keystone-bootstrap-ww67n\" (UID: \"6d0a70da-4482-4ab9-8503-c324267212fa\") 
" pod="openstack/keystone-bootstrap-ww67n" Feb 16 23:04:22 crc kubenswrapper[4865]: I0216 23:04:22.577782 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d0a70da-4482-4ab9-8503-c324267212fa-combined-ca-bundle\") pod \"keystone-bootstrap-ww67n\" (UID: \"6d0a70da-4482-4ab9-8503-c324267212fa\") " pod="openstack/keystone-bootstrap-ww67n" Feb 16 23:04:22 crc kubenswrapper[4865]: I0216 23:04:22.577861 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvg8x\" (UniqueName: \"kubernetes.io/projected/6d0a70da-4482-4ab9-8503-c324267212fa-kube-api-access-zvg8x\") pod \"keystone-bootstrap-ww67n\" (UID: \"6d0a70da-4482-4ab9-8503-c324267212fa\") " pod="openstack/keystone-bootstrap-ww67n" Feb 16 23:04:22 crc kubenswrapper[4865]: I0216 23:04:22.577961 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d0a70da-4482-4ab9-8503-c324267212fa-scripts\") pod \"keystone-bootstrap-ww67n\" (UID: \"6d0a70da-4482-4ab9-8503-c324267212fa\") " pod="openstack/keystone-bootstrap-ww67n" Feb 16 23:04:22 crc kubenswrapper[4865]: I0216 23:04:22.577993 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6d0a70da-4482-4ab9-8503-c324267212fa-fernet-keys\") pod \"keystone-bootstrap-ww67n\" (UID: \"6d0a70da-4482-4ab9-8503-c324267212fa\") " pod="openstack/keystone-bootstrap-ww67n" Feb 16 23:04:22 crc kubenswrapper[4865]: I0216 23:04:22.578031 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6d0a70da-4482-4ab9-8503-c324267212fa-credential-keys\") pod \"keystone-bootstrap-ww67n\" (UID: \"6d0a70da-4482-4ab9-8503-c324267212fa\") " pod="openstack/keystone-bootstrap-ww67n" Feb 16 23:04:22 crc kubenswrapper[4865]: I0216 
23:04:22.582852 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d0a70da-4482-4ab9-8503-c324267212fa-combined-ca-bundle\") pod \"keystone-bootstrap-ww67n\" (UID: \"6d0a70da-4482-4ab9-8503-c324267212fa\") " pod="openstack/keystone-bootstrap-ww67n" Feb 16 23:04:22 crc kubenswrapper[4865]: I0216 23:04:22.583634 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6d0a70da-4482-4ab9-8503-c324267212fa-credential-keys\") pod \"keystone-bootstrap-ww67n\" (UID: \"6d0a70da-4482-4ab9-8503-c324267212fa\") " pod="openstack/keystone-bootstrap-ww67n" Feb 16 23:04:22 crc kubenswrapper[4865]: I0216 23:04:22.583993 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d0a70da-4482-4ab9-8503-c324267212fa-config-data\") pod \"keystone-bootstrap-ww67n\" (UID: \"6d0a70da-4482-4ab9-8503-c324267212fa\") " pod="openstack/keystone-bootstrap-ww67n" Feb 16 23:04:22 crc kubenswrapper[4865]: I0216 23:04:22.585121 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6d0a70da-4482-4ab9-8503-c324267212fa-fernet-keys\") pod \"keystone-bootstrap-ww67n\" (UID: \"6d0a70da-4482-4ab9-8503-c324267212fa\") " pod="openstack/keystone-bootstrap-ww67n" Feb 16 23:04:22 crc kubenswrapper[4865]: I0216 23:04:22.586149 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d0a70da-4482-4ab9-8503-c324267212fa-scripts\") pod \"keystone-bootstrap-ww67n\" (UID: \"6d0a70da-4482-4ab9-8503-c324267212fa\") " pod="openstack/keystone-bootstrap-ww67n" Feb 16 23:04:22 crc kubenswrapper[4865]: I0216 23:04:22.596027 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvg8x\" (UniqueName: 
\"kubernetes.io/projected/6d0a70da-4482-4ab9-8503-c324267212fa-kube-api-access-zvg8x\") pod \"keystone-bootstrap-ww67n\" (UID: \"6d0a70da-4482-4ab9-8503-c324267212fa\") " pod="openstack/keystone-bootstrap-ww67n" Feb 16 23:04:22 crc kubenswrapper[4865]: I0216 23:04:22.682798 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-ww67n" Feb 16 23:04:22 crc kubenswrapper[4865]: E0216 23:04:22.907384 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Feb 16 23:04:22 crc kubenswrapper[4865]: E0216 23:04:22.907798 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,S
ubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2sjrh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-t75c2_openstack(c180ab1a-1202-492c-ab2e-57c2232d8b64): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 23:04:22 crc kubenswrapper[4865]: E0216 23:04:22.909369 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-t75c2" podUID="c180ab1a-1202-492c-ab2e-57c2232d8b64" Feb 16 23:04:22 crc kubenswrapper[4865]: E0216 23:04:22.935798 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Feb 16 23:04:22 crc kubenswrapper[4865]: E0216 23:04:22.936002 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5bbh556h664h64fh674h5bfh7dh5h686h8dhdfh54h664h5c7h597h676h578h55dh64fh84h67bh56ch676h55ch76h698h7dh667h675h577hbdh5d5q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l9zp6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-64454bf745-w42tv_openstack(058418b5-a677-4e8f-a37f-6dff4198f824): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 23:04:22 crc kubenswrapper[4865]: E0216 
23:04:22.938676 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-64454bf745-w42tv" podUID="058418b5-a677-4e8f-a37f-6dff4198f824" Feb 16 23:04:23 crc kubenswrapper[4865]: E0216 23:04:23.113811 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-t75c2" podUID="c180ab1a-1202-492c-ab2e-57c2232d8b64" Feb 16 23:04:24 crc kubenswrapper[4865]: I0216 23:04:24.118494 4865 generic.go:334] "Generic (PLEG): container finished" podID="a6e46137-244f-44c7-ac8d-450c4e8e2fff" containerID="32a78a1a3f9db9c4a052781a3e0e4b04aa7ed029f695352d2b528b1e59891fa5" exitCode=0 Feb 16 23:04:24 crc kubenswrapper[4865]: I0216 23:04:24.118639 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-zqq74" event={"ID":"a6e46137-244f-44c7-ac8d-450c4e8e2fff","Type":"ContainerDied","Data":"32a78a1a3f9db9c4a052781a3e0e4b04aa7ed029f695352d2b528b1e59891fa5"} Feb 16 23:04:26 crc kubenswrapper[4865]: I0216 23:04:26.745350 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5b946c75cc-cmrhq" podUID="c8dcb5fa-0ce4-4337-99f4-71317a6cb50a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.126:5353: i/o timeout" Feb 16 23:04:31 crc kubenswrapper[4865]: I0216 23:04:31.194954 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-cmrhq" 
event={"ID":"c8dcb5fa-0ce4-4337-99f4-71317a6cb50a","Type":"ContainerDied","Data":"de1e22a2dc252adf49e4e9307030567673c3248fb9a352dc0541ac7098627010"} Feb 16 23:04:31 crc kubenswrapper[4865]: I0216 23:04:31.195411 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de1e22a2dc252adf49e4e9307030567673c3248fb9a352dc0541ac7098627010" Feb 16 23:04:31 crc kubenswrapper[4865]: I0216 23:04:31.196799 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6b9e05ca-18a3-432e-858d-bf8e31853609","Type":"ContainerDied","Data":"1a719480495ecad07605de0cd5e26727e0fd310087d55984e9ccfca548bad4f6"} Feb 16 23:04:31 crc kubenswrapper[4865]: I0216 23:04:31.196820 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a719480495ecad07605de0cd5e26727e0fd310087d55984e9ccfca548bad4f6" Feb 16 23:04:31 crc kubenswrapper[4865]: I0216 23:04:31.263568 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-cmrhq" Feb 16 23:04:31 crc kubenswrapper[4865]: I0216 23:04:31.268968 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 23:04:31 crc kubenswrapper[4865]: I0216 23:04:31.356110 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b9e05ca-18a3-432e-858d-bf8e31853609-scripts\") pod \"6b9e05ca-18a3-432e-858d-bf8e31853609\" (UID: \"6b9e05ca-18a3-432e-858d-bf8e31853609\") " Feb 16 23:04:31 crc kubenswrapper[4865]: I0216 23:04:31.356204 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8dcb5fa-0ce4-4337-99f4-71317a6cb50a-dns-svc\") pod \"c8dcb5fa-0ce4-4337-99f4-71317a6cb50a\" (UID: \"c8dcb5fa-0ce4-4337-99f4-71317a6cb50a\") " Feb 16 23:04:31 crc kubenswrapper[4865]: I0216 23:04:31.356240 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c8dcb5fa-0ce4-4337-99f4-71317a6cb50a-ovsdbserver-nb\") pod \"c8dcb5fa-0ce4-4337-99f4-71317a6cb50a\" (UID: \"c8dcb5fa-0ce4-4337-99f4-71317a6cb50a\") " Feb 16 23:04:31 crc kubenswrapper[4865]: I0216 23:04:31.356273 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rb47\" (UniqueName: \"kubernetes.io/projected/c8dcb5fa-0ce4-4337-99f4-71317a6cb50a-kube-api-access-8rb47\") pod \"c8dcb5fa-0ce4-4337-99f4-71317a6cb50a\" (UID: \"c8dcb5fa-0ce4-4337-99f4-71317a6cb50a\") " Feb 16 23:04:31 crc kubenswrapper[4865]: I0216 23:04:31.356408 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fh8p6\" (UniqueName: \"kubernetes.io/projected/6b9e05ca-18a3-432e-858d-bf8e31853609-kube-api-access-fh8p6\") pod \"6b9e05ca-18a3-432e-858d-bf8e31853609\" (UID: \"6b9e05ca-18a3-432e-858d-bf8e31853609\") " Feb 16 23:04:31 crc kubenswrapper[4865]: I0216 23:04:31.356447 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/6b9e05ca-18a3-432e-858d-bf8e31853609-logs\") pod \"6b9e05ca-18a3-432e-858d-bf8e31853609\" (UID: \"6b9e05ca-18a3-432e-858d-bf8e31853609\") " Feb 16 23:04:31 crc kubenswrapper[4865]: I0216 23:04:31.356504 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6b9e05ca-18a3-432e-858d-bf8e31853609-httpd-run\") pod \"6b9e05ca-18a3-432e-858d-bf8e31853609\" (UID: \"6b9e05ca-18a3-432e-858d-bf8e31853609\") " Feb 16 23:04:31 crc kubenswrapper[4865]: I0216 23:04:31.356551 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c8dcb5fa-0ce4-4337-99f4-71317a6cb50a-ovsdbserver-sb\") pod \"c8dcb5fa-0ce4-4337-99f4-71317a6cb50a\" (UID: \"c8dcb5fa-0ce4-4337-99f4-71317a6cb50a\") " Feb 16 23:04:31 crc kubenswrapper[4865]: I0216 23:04:31.356585 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"6b9e05ca-18a3-432e-858d-bf8e31853609\" (UID: \"6b9e05ca-18a3-432e-858d-bf8e31853609\") " Feb 16 23:04:31 crc kubenswrapper[4865]: I0216 23:04:31.356640 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b9e05ca-18a3-432e-858d-bf8e31853609-public-tls-certs\") pod \"6b9e05ca-18a3-432e-858d-bf8e31853609\" (UID: \"6b9e05ca-18a3-432e-858d-bf8e31853609\") " Feb 16 23:04:31 crc kubenswrapper[4865]: I0216 23:04:31.356710 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b9e05ca-18a3-432e-858d-bf8e31853609-combined-ca-bundle\") pod \"6b9e05ca-18a3-432e-858d-bf8e31853609\" (UID: \"6b9e05ca-18a3-432e-858d-bf8e31853609\") " Feb 16 23:04:31 crc kubenswrapper[4865]: I0216 23:04:31.356740 4865 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b9e05ca-18a3-432e-858d-bf8e31853609-config-data\") pod \"6b9e05ca-18a3-432e-858d-bf8e31853609\" (UID: \"6b9e05ca-18a3-432e-858d-bf8e31853609\") " Feb 16 23:04:31 crc kubenswrapper[4865]: I0216 23:04:31.356807 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8dcb5fa-0ce4-4337-99f4-71317a6cb50a-config\") pod \"c8dcb5fa-0ce4-4337-99f4-71317a6cb50a\" (UID: \"c8dcb5fa-0ce4-4337-99f4-71317a6cb50a\") " Feb 16 23:04:31 crc kubenswrapper[4865]: I0216 23:04:31.358525 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b9e05ca-18a3-432e-858d-bf8e31853609-logs" (OuterVolumeSpecName: "logs") pod "6b9e05ca-18a3-432e-858d-bf8e31853609" (UID: "6b9e05ca-18a3-432e-858d-bf8e31853609"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:04:31 crc kubenswrapper[4865]: I0216 23:04:31.358731 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b9e05ca-18a3-432e-858d-bf8e31853609-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "6b9e05ca-18a3-432e-858d-bf8e31853609" (UID: "6b9e05ca-18a3-432e-858d-bf8e31853609"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:04:31 crc kubenswrapper[4865]: I0216 23:04:31.363380 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "6b9e05ca-18a3-432e-858d-bf8e31853609" (UID: "6b9e05ca-18a3-432e-858d-bf8e31853609"). InnerVolumeSpecName "local-storage03-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 16 23:04:31 crc kubenswrapper[4865]: I0216 23:04:31.363844 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8dcb5fa-0ce4-4337-99f4-71317a6cb50a-kube-api-access-8rb47" (OuterVolumeSpecName: "kube-api-access-8rb47") pod "c8dcb5fa-0ce4-4337-99f4-71317a6cb50a" (UID: "c8dcb5fa-0ce4-4337-99f4-71317a6cb50a"). InnerVolumeSpecName "kube-api-access-8rb47". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:04:31 crc kubenswrapper[4865]: I0216 23:04:31.365655 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b9e05ca-18a3-432e-858d-bf8e31853609-scripts" (OuterVolumeSpecName: "scripts") pod "6b9e05ca-18a3-432e-858d-bf8e31853609" (UID: "6b9e05ca-18a3-432e-858d-bf8e31853609"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:04:31 crc kubenswrapper[4865]: I0216 23:04:31.367727 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b9e05ca-18a3-432e-858d-bf8e31853609-kube-api-access-fh8p6" (OuterVolumeSpecName: "kube-api-access-fh8p6") pod "6b9e05ca-18a3-432e-858d-bf8e31853609" (UID: "6b9e05ca-18a3-432e-858d-bf8e31853609"). InnerVolumeSpecName "kube-api-access-fh8p6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:04:31 crc kubenswrapper[4865]: I0216 23:04:31.405155 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b9e05ca-18a3-432e-858d-bf8e31853609-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6b9e05ca-18a3-432e-858d-bf8e31853609" (UID: "6b9e05ca-18a3-432e-858d-bf8e31853609"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:04:31 crc kubenswrapper[4865]: I0216 23:04:31.420588 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8dcb5fa-0ce4-4337-99f4-71317a6cb50a-config" (OuterVolumeSpecName: "config") pod "c8dcb5fa-0ce4-4337-99f4-71317a6cb50a" (UID: "c8dcb5fa-0ce4-4337-99f4-71317a6cb50a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:04:31 crc kubenswrapper[4865]: I0216 23:04:31.424528 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8dcb5fa-0ce4-4337-99f4-71317a6cb50a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c8dcb5fa-0ce4-4337-99f4-71317a6cb50a" (UID: "c8dcb5fa-0ce4-4337-99f4-71317a6cb50a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:04:31 crc kubenswrapper[4865]: I0216 23:04:31.428566 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8dcb5fa-0ce4-4337-99f4-71317a6cb50a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c8dcb5fa-0ce4-4337-99f4-71317a6cb50a" (UID: "c8dcb5fa-0ce4-4337-99f4-71317a6cb50a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:04:31 crc kubenswrapper[4865]: I0216 23:04:31.433429 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b9e05ca-18a3-432e-858d-bf8e31853609-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "6b9e05ca-18a3-432e-858d-bf8e31853609" (UID: "6b9e05ca-18a3-432e-858d-bf8e31853609"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:04:31 crc kubenswrapper[4865]: I0216 23:04:31.446097 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b9e05ca-18a3-432e-858d-bf8e31853609-config-data" (OuterVolumeSpecName: "config-data") pod "6b9e05ca-18a3-432e-858d-bf8e31853609" (UID: "6b9e05ca-18a3-432e-858d-bf8e31853609"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:04:31 crc kubenswrapper[4865]: I0216 23:04:31.454337 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8dcb5fa-0ce4-4337-99f4-71317a6cb50a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c8dcb5fa-0ce4-4337-99f4-71317a6cb50a" (UID: "c8dcb5fa-0ce4-4337-99f4-71317a6cb50a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:04:31 crc kubenswrapper[4865]: I0216 23:04:31.459643 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fh8p6\" (UniqueName: \"kubernetes.io/projected/6b9e05ca-18a3-432e-858d-bf8e31853609-kube-api-access-fh8p6\") on node \"crc\" DevicePath \"\"" Feb 16 23:04:31 crc kubenswrapper[4865]: I0216 23:04:31.459679 4865 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b9e05ca-18a3-432e-858d-bf8e31853609-logs\") on node \"crc\" DevicePath \"\"" Feb 16 23:04:31 crc kubenswrapper[4865]: I0216 23:04:31.459692 4865 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6b9e05ca-18a3-432e-858d-bf8e31853609-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 16 23:04:31 crc kubenswrapper[4865]: I0216 23:04:31.459707 4865 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c8dcb5fa-0ce4-4337-99f4-71317a6cb50a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 23:04:31 crc 
kubenswrapper[4865]: I0216 23:04:31.459739 4865 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Feb 16 23:04:31 crc kubenswrapper[4865]: I0216 23:04:31.459752 4865 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b9e05ca-18a3-432e-858d-bf8e31853609-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 23:04:31 crc kubenswrapper[4865]: I0216 23:04:31.459765 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b9e05ca-18a3-432e-858d-bf8e31853609-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 23:04:31 crc kubenswrapper[4865]: I0216 23:04:31.459777 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b9e05ca-18a3-432e-858d-bf8e31853609-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 23:04:31 crc kubenswrapper[4865]: I0216 23:04:31.459790 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8dcb5fa-0ce4-4337-99f4-71317a6cb50a-config\") on node \"crc\" DevicePath \"\"" Feb 16 23:04:31 crc kubenswrapper[4865]: I0216 23:04:31.459800 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b9e05ca-18a3-432e-858d-bf8e31853609-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 23:04:31 crc kubenswrapper[4865]: I0216 23:04:31.459810 4865 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8dcb5fa-0ce4-4337-99f4-71317a6cb50a-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 23:04:31 crc kubenswrapper[4865]: I0216 23:04:31.459821 4865 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/c8dcb5fa-0ce4-4337-99f4-71317a6cb50a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 23:04:31 crc kubenswrapper[4865]: I0216 23:04:31.459832 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rb47\" (UniqueName: \"kubernetes.io/projected/c8dcb5fa-0ce4-4337-99f4-71317a6cb50a-kube-api-access-8rb47\") on node \"crc\" DevicePath \"\"" Feb 16 23:04:31 crc kubenswrapper[4865]: I0216 23:04:31.484398 4865 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Feb 16 23:04:31 crc kubenswrapper[4865]: I0216 23:04:31.561318 4865 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Feb 16 23:04:31 crc kubenswrapper[4865]: I0216 23:04:31.747740 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5b946c75cc-cmrhq" podUID="c8dcb5fa-0ce4-4337-99f4-71317a6cb50a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.126:5353: i/o timeout" Feb 16 23:04:31 crc kubenswrapper[4865]: I0216 23:04:31.747853 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b946c75cc-cmrhq" Feb 16 23:04:31 crc kubenswrapper[4865]: E0216 23:04:31.776965 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Feb 16 23:04:31 crc kubenswrapper[4865]: E0216 23:04:31.777150 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v965s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-94wth_openstack(68d6bce0-a0b1-485b-b3fc-6c47cd966129): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 23:04:31 crc kubenswrapper[4865]: E0216 23:04:31.778356 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-94wth" 
podUID="68d6bce0-a0b1-485b-b3fc-6c47cd966129" Feb 16 23:04:31 crc kubenswrapper[4865]: I0216 23:04:31.782426 4865 scope.go:117] "RemoveContainer" containerID="6a76488056e06b9585db1b4ac09f7763179363799267413a639287e62b3ea98a" Feb 16 23:04:31 crc kubenswrapper[4865]: I0216 23:04:31.805952 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-64454bf745-w42tv" Feb 16 23:04:31 crc kubenswrapper[4865]: I0216 23:04:31.818824 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-zqq74" Feb 16 23:04:31 crc kubenswrapper[4865]: I0216 23:04:31.967539 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a6e46137-244f-44c7-ac8d-450c4e8e2fff-config\") pod \"a6e46137-244f-44c7-ac8d-450c4e8e2fff\" (UID: \"a6e46137-244f-44c7-ac8d-450c4e8e2fff\") " Feb 16 23:04:31 crc kubenswrapper[4865]: I0216 23:04:31.967701 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4t7b\" (UniqueName: \"kubernetes.io/projected/a6e46137-244f-44c7-ac8d-450c4e8e2fff-kube-api-access-x4t7b\") pod \"a6e46137-244f-44c7-ac8d-450c4e8e2fff\" (UID: \"a6e46137-244f-44c7-ac8d-450c4e8e2fff\") " Feb 16 23:04:31 crc kubenswrapper[4865]: I0216 23:04:31.967919 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/058418b5-a677-4e8f-a37f-6dff4198f824-horizon-secret-key\") pod \"058418b5-a677-4e8f-a37f-6dff4198f824\" (UID: \"058418b5-a677-4e8f-a37f-6dff4198f824\") " Feb 16 23:04:31 crc kubenswrapper[4865]: I0216 23:04:31.967942 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9zp6\" (UniqueName: \"kubernetes.io/projected/058418b5-a677-4e8f-a37f-6dff4198f824-kube-api-access-l9zp6\") pod \"058418b5-a677-4e8f-a37f-6dff4198f824\" (UID: 
\"058418b5-a677-4e8f-a37f-6dff4198f824\") " Feb 16 23:04:31 crc kubenswrapper[4865]: I0216 23:04:31.968059 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6e46137-244f-44c7-ac8d-450c4e8e2fff-combined-ca-bundle\") pod \"a6e46137-244f-44c7-ac8d-450c4e8e2fff\" (UID: \"a6e46137-244f-44c7-ac8d-450c4e8e2fff\") " Feb 16 23:04:31 crc kubenswrapper[4865]: I0216 23:04:31.968080 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/058418b5-a677-4e8f-a37f-6dff4198f824-logs\") pod \"058418b5-a677-4e8f-a37f-6dff4198f824\" (UID: \"058418b5-a677-4e8f-a37f-6dff4198f824\") " Feb 16 23:04:31 crc kubenswrapper[4865]: I0216 23:04:31.968104 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/058418b5-a677-4e8f-a37f-6dff4198f824-config-data\") pod \"058418b5-a677-4e8f-a37f-6dff4198f824\" (UID: \"058418b5-a677-4e8f-a37f-6dff4198f824\") " Feb 16 23:04:31 crc kubenswrapper[4865]: I0216 23:04:31.968123 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/058418b5-a677-4e8f-a37f-6dff4198f824-scripts\") pod \"058418b5-a677-4e8f-a37f-6dff4198f824\" (UID: \"058418b5-a677-4e8f-a37f-6dff4198f824\") " Feb 16 23:04:31 crc kubenswrapper[4865]: I0216 23:04:31.968883 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/058418b5-a677-4e8f-a37f-6dff4198f824-scripts" (OuterVolumeSpecName: "scripts") pod "058418b5-a677-4e8f-a37f-6dff4198f824" (UID: "058418b5-a677-4e8f-a37f-6dff4198f824"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:04:31 crc kubenswrapper[4865]: I0216 23:04:31.969643 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/058418b5-a677-4e8f-a37f-6dff4198f824-logs" (OuterVolumeSpecName: "logs") pod "058418b5-a677-4e8f-a37f-6dff4198f824" (UID: "058418b5-a677-4e8f-a37f-6dff4198f824"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:04:31 crc kubenswrapper[4865]: I0216 23:04:31.969967 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/058418b5-a677-4e8f-a37f-6dff4198f824-config-data" (OuterVolumeSpecName: "config-data") pod "058418b5-a677-4e8f-a37f-6dff4198f824" (UID: "058418b5-a677-4e8f-a37f-6dff4198f824"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:04:31 crc kubenswrapper[4865]: I0216 23:04:31.972597 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6e46137-244f-44c7-ac8d-450c4e8e2fff-kube-api-access-x4t7b" (OuterVolumeSpecName: "kube-api-access-x4t7b") pod "a6e46137-244f-44c7-ac8d-450c4e8e2fff" (UID: "a6e46137-244f-44c7-ac8d-450c4e8e2fff"). InnerVolumeSpecName "kube-api-access-x4t7b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:04:31 crc kubenswrapper[4865]: I0216 23:04:31.974837 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/058418b5-a677-4e8f-a37f-6dff4198f824-kube-api-access-l9zp6" (OuterVolumeSpecName: "kube-api-access-l9zp6") pod "058418b5-a677-4e8f-a37f-6dff4198f824" (UID: "058418b5-a677-4e8f-a37f-6dff4198f824"). InnerVolumeSpecName "kube-api-access-l9zp6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:04:31 crc kubenswrapper[4865]: I0216 23:04:31.978562 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/058418b5-a677-4e8f-a37f-6dff4198f824-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "058418b5-a677-4e8f-a37f-6dff4198f824" (UID: "058418b5-a677-4e8f-a37f-6dff4198f824"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:04:31 crc kubenswrapper[4865]: I0216 23:04:31.991511 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6e46137-244f-44c7-ac8d-450c4e8e2fff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a6e46137-244f-44c7-ac8d-450c4e8e2fff" (UID: "a6e46137-244f-44c7-ac8d-450c4e8e2fff"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:04:32 crc kubenswrapper[4865]: I0216 23:04:32.004472 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6e46137-244f-44c7-ac8d-450c4e8e2fff-config" (OuterVolumeSpecName: "config") pod "a6e46137-244f-44c7-ac8d-450c4e8e2fff" (UID: "a6e46137-244f-44c7-ac8d-450c4e8e2fff"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:04:32 crc kubenswrapper[4865]: I0216 23:04:32.070459 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/058418b5-a677-4e8f-a37f-6dff4198f824-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 23:04:32 crc kubenswrapper[4865]: I0216 23:04:32.070527 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/058418b5-a677-4e8f-a37f-6dff4198f824-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 23:04:32 crc kubenswrapper[4865]: I0216 23:04:32.070544 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/a6e46137-244f-44c7-ac8d-450c4e8e2fff-config\") on node \"crc\" DevicePath \"\"" Feb 16 23:04:32 crc kubenswrapper[4865]: I0216 23:04:32.070555 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4t7b\" (UniqueName: \"kubernetes.io/projected/a6e46137-244f-44c7-ac8d-450c4e8e2fff-kube-api-access-x4t7b\") on node \"crc\" DevicePath \"\"" Feb 16 23:04:32 crc kubenswrapper[4865]: I0216 23:04:32.070570 4865 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/058418b5-a677-4e8f-a37f-6dff4198f824-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 16 23:04:32 crc kubenswrapper[4865]: I0216 23:04:32.070607 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9zp6\" (UniqueName: \"kubernetes.io/projected/058418b5-a677-4e8f-a37f-6dff4198f824-kube-api-access-l9zp6\") on node \"crc\" DevicePath \"\"" Feb 16 23:04:32 crc kubenswrapper[4865]: I0216 23:04:32.070615 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6e46137-244f-44c7-ac8d-450c4e8e2fff-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 23:04:32 crc kubenswrapper[4865]: I0216 23:04:32.070624 4865 
reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/058418b5-a677-4e8f-a37f-6dff4198f824-logs\") on node \"crc\" DevicePath \"\"" Feb 16 23:04:32 crc kubenswrapper[4865]: I0216 23:04:32.217731 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-64454bf745-w42tv" event={"ID":"058418b5-a677-4e8f-a37f-6dff4198f824","Type":"ContainerDied","Data":"36544b7d0c34cb97b08416d2279ef853ef02a2519e1c3828b72deb816559f198"} Feb 16 23:04:32 crc kubenswrapper[4865]: I0216 23:04:32.217740 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-64454bf745-w42tv" Feb 16 23:04:32 crc kubenswrapper[4865]: I0216 23:04:32.231271 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-cmrhq" Feb 16 23:04:32 crc kubenswrapper[4865]: I0216 23:04:32.231363 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-zqq74" event={"ID":"a6e46137-244f-44c7-ac8d-450c4e8e2fff","Type":"ContainerDied","Data":"015a24ca021dd2c4d52fe27d377d2dcd707aec9497a3990946864b019de485d6"} Feb 16 23:04:32 crc kubenswrapper[4865]: I0216 23:04:32.231415 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="015a24ca021dd2c4d52fe27d377d2dcd707aec9497a3990946864b019de485d6" Feb 16 23:04:32 crc kubenswrapper[4865]: I0216 23:04:32.231497 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 23:04:32 crc kubenswrapper[4865]: I0216 23:04:32.231496 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-zqq74" Feb 16 23:04:32 crc kubenswrapper[4865]: E0216 23:04:32.234156 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-94wth" podUID="68d6bce0-a0b1-485b-b3fc-6c47cd966129" Feb 16 23:04:32 crc kubenswrapper[4865]: I0216 23:04:32.291467 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-cmrhq"] Feb 16 23:04:32 crc kubenswrapper[4865]: I0216 23:04:32.311975 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-cmrhq"] Feb 16 23:04:32 crc kubenswrapper[4865]: I0216 23:04:32.337361 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-64454bf745-w42tv"] Feb 16 23:04:32 crc kubenswrapper[4865]: I0216 23:04:32.347720 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-64454bf745-w42tv"] Feb 16 23:04:32 crc kubenswrapper[4865]: I0216 23:04:32.358966 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 23:04:32 crc kubenswrapper[4865]: I0216 23:04:32.369329 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 23:04:32 crc kubenswrapper[4865]: I0216 23:04:32.375708 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 23:04:32 crc kubenswrapper[4865]: E0216 23:04:32.376323 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8dcb5fa-0ce4-4337-99f4-71317a6cb50a" containerName="init" Feb 16 23:04:32 crc kubenswrapper[4865]: I0216 23:04:32.376348 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8dcb5fa-0ce4-4337-99f4-71317a6cb50a" containerName="init" Feb 16 23:04:32 crc 
kubenswrapper[4865]: E0216 23:04:32.376375 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b9e05ca-18a3-432e-858d-bf8e31853609" containerName="glance-log" Feb 16 23:04:32 crc kubenswrapper[4865]: I0216 23:04:32.376387 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b9e05ca-18a3-432e-858d-bf8e31853609" containerName="glance-log" Feb 16 23:04:32 crc kubenswrapper[4865]: E0216 23:04:32.376416 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6e46137-244f-44c7-ac8d-450c4e8e2fff" containerName="neutron-db-sync" Feb 16 23:04:32 crc kubenswrapper[4865]: I0216 23:04:32.376428 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6e46137-244f-44c7-ac8d-450c4e8e2fff" containerName="neutron-db-sync" Feb 16 23:04:32 crc kubenswrapper[4865]: E0216 23:04:32.376451 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b9e05ca-18a3-432e-858d-bf8e31853609" containerName="glance-httpd" Feb 16 23:04:32 crc kubenswrapper[4865]: I0216 23:04:32.376462 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b9e05ca-18a3-432e-858d-bf8e31853609" containerName="glance-httpd" Feb 16 23:04:32 crc kubenswrapper[4865]: E0216 23:04:32.376499 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8dcb5fa-0ce4-4337-99f4-71317a6cb50a" containerName="dnsmasq-dns" Feb 16 23:04:32 crc kubenswrapper[4865]: I0216 23:04:32.376510 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8dcb5fa-0ce4-4337-99f4-71317a6cb50a" containerName="dnsmasq-dns" Feb 16 23:04:32 crc kubenswrapper[4865]: I0216 23:04:32.376807 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b9e05ca-18a3-432e-858d-bf8e31853609" containerName="glance-httpd" Feb 16 23:04:32 crc kubenswrapper[4865]: I0216 23:04:32.376842 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6e46137-244f-44c7-ac8d-450c4e8e2fff" containerName="neutron-db-sync" Feb 16 23:04:32 crc kubenswrapper[4865]: I0216 
23:04:32.376859 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8dcb5fa-0ce4-4337-99f4-71317a6cb50a" containerName="dnsmasq-dns" Feb 16 23:04:32 crc kubenswrapper[4865]: I0216 23:04:32.376874 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b9e05ca-18a3-432e-858d-bf8e31853609" containerName="glance-log" Feb 16 23:04:32 crc kubenswrapper[4865]: I0216 23:04:32.378467 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 23:04:32 crc kubenswrapper[4865]: I0216 23:04:32.382884 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 23:04:32 crc kubenswrapper[4865]: I0216 23:04:32.388803 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 16 23:04:32 crc kubenswrapper[4865]: I0216 23:04:32.389037 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 16 23:04:32 crc kubenswrapper[4865]: I0216 23:04:32.436745 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="058418b5-a677-4e8f-a37f-6dff4198f824" path="/var/lib/kubelet/pods/058418b5-a677-4e8f-a37f-6dff4198f824/volumes" Feb 16 23:04:32 crc kubenswrapper[4865]: I0216 23:04:32.437383 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b9e05ca-18a3-432e-858d-bf8e31853609" path="/var/lib/kubelet/pods/6b9e05ca-18a3-432e-858d-bf8e31853609/volumes" Feb 16 23:04:32 crc kubenswrapper[4865]: I0216 23:04:32.438271 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8dcb5fa-0ce4-4337-99f4-71317a6cb50a" path="/var/lib/kubelet/pods/c8dcb5fa-0ce4-4337-99f4-71317a6cb50a/volumes" Feb 16 23:04:32 crc kubenswrapper[4865]: I0216 23:04:32.478479 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/b47a4cd1-17e5-45eb-b964-9503d50f7089-logs\") pod \"glance-default-external-api-0\" (UID: \"b47a4cd1-17e5-45eb-b964-9503d50f7089\") " pod="openstack/glance-default-external-api-0" Feb 16 23:04:32 crc kubenswrapper[4865]: I0216 23:04:32.478529 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b47a4cd1-17e5-45eb-b964-9503d50f7089-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b47a4cd1-17e5-45eb-b964-9503d50f7089\") " pod="openstack/glance-default-external-api-0" Feb 16 23:04:32 crc kubenswrapper[4865]: I0216 23:04:32.478560 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbs5z\" (UniqueName: \"kubernetes.io/projected/b47a4cd1-17e5-45eb-b964-9503d50f7089-kube-api-access-pbs5z\") pod \"glance-default-external-api-0\" (UID: \"b47a4cd1-17e5-45eb-b964-9503d50f7089\") " pod="openstack/glance-default-external-api-0" Feb 16 23:04:32 crc kubenswrapper[4865]: I0216 23:04:32.478586 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b47a4cd1-17e5-45eb-b964-9503d50f7089-config-data\") pod \"glance-default-external-api-0\" (UID: \"b47a4cd1-17e5-45eb-b964-9503d50f7089\") " pod="openstack/glance-default-external-api-0" Feb 16 23:04:32 crc kubenswrapper[4865]: I0216 23:04:32.478610 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"b47a4cd1-17e5-45eb-b964-9503d50f7089\") " pod="openstack/glance-default-external-api-0" Feb 16 23:04:32 crc kubenswrapper[4865]: I0216 23:04:32.478821 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b47a4cd1-17e5-45eb-b964-9503d50f7089-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b47a4cd1-17e5-45eb-b964-9503d50f7089\") " pod="openstack/glance-default-external-api-0" Feb 16 23:04:32 crc kubenswrapper[4865]: I0216 23:04:32.478959 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b47a4cd1-17e5-45eb-b964-9503d50f7089-scripts\") pod \"glance-default-external-api-0\" (UID: \"b47a4cd1-17e5-45eb-b964-9503d50f7089\") " pod="openstack/glance-default-external-api-0" Feb 16 23:04:32 crc kubenswrapper[4865]: I0216 23:04:32.479031 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b47a4cd1-17e5-45eb-b964-9503d50f7089-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b47a4cd1-17e5-45eb-b964-9503d50f7089\") " pod="openstack/glance-default-external-api-0" Feb 16 23:04:32 crc kubenswrapper[4865]: I0216 23:04:32.580472 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b47a4cd1-17e5-45eb-b964-9503d50f7089-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b47a4cd1-17e5-45eb-b964-9503d50f7089\") " pod="openstack/glance-default-external-api-0" Feb 16 23:04:32 crc kubenswrapper[4865]: I0216 23:04:32.580566 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b47a4cd1-17e5-45eb-b964-9503d50f7089-logs\") pod \"glance-default-external-api-0\" (UID: \"b47a4cd1-17e5-45eb-b964-9503d50f7089\") " pod="openstack/glance-default-external-api-0" Feb 16 23:04:32 crc kubenswrapper[4865]: I0216 23:04:32.580603 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/b47a4cd1-17e5-45eb-b964-9503d50f7089-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b47a4cd1-17e5-45eb-b964-9503d50f7089\") " pod="openstack/glance-default-external-api-0" Feb 16 23:04:32 crc kubenswrapper[4865]: I0216 23:04:32.580627 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbs5z\" (UniqueName: \"kubernetes.io/projected/b47a4cd1-17e5-45eb-b964-9503d50f7089-kube-api-access-pbs5z\") pod \"glance-default-external-api-0\" (UID: \"b47a4cd1-17e5-45eb-b964-9503d50f7089\") " pod="openstack/glance-default-external-api-0" Feb 16 23:04:32 crc kubenswrapper[4865]: I0216 23:04:32.580661 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b47a4cd1-17e5-45eb-b964-9503d50f7089-config-data\") pod \"glance-default-external-api-0\" (UID: \"b47a4cd1-17e5-45eb-b964-9503d50f7089\") " pod="openstack/glance-default-external-api-0" Feb 16 23:04:32 crc kubenswrapper[4865]: I0216 23:04:32.580693 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"b47a4cd1-17e5-45eb-b964-9503d50f7089\") " pod="openstack/glance-default-external-api-0" Feb 16 23:04:32 crc kubenswrapper[4865]: I0216 23:04:32.580732 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b47a4cd1-17e5-45eb-b964-9503d50f7089-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b47a4cd1-17e5-45eb-b964-9503d50f7089\") " pod="openstack/glance-default-external-api-0" Feb 16 23:04:32 crc kubenswrapper[4865]: I0216 23:04:32.580769 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/b47a4cd1-17e5-45eb-b964-9503d50f7089-scripts\") pod \"glance-default-external-api-0\" (UID: \"b47a4cd1-17e5-45eb-b964-9503d50f7089\") " pod="openstack/glance-default-external-api-0" Feb 16 23:04:32 crc kubenswrapper[4865]: I0216 23:04:32.581119 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b47a4cd1-17e5-45eb-b964-9503d50f7089-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b47a4cd1-17e5-45eb-b964-9503d50f7089\") " pod="openstack/glance-default-external-api-0" Feb 16 23:04:32 crc kubenswrapper[4865]: I0216 23:04:32.581294 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b47a4cd1-17e5-45eb-b964-9503d50f7089-logs\") pod \"glance-default-external-api-0\" (UID: \"b47a4cd1-17e5-45eb-b964-9503d50f7089\") " pod="openstack/glance-default-external-api-0" Feb 16 23:04:32 crc kubenswrapper[4865]: I0216 23:04:32.582379 4865 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"b47a4cd1-17e5-45eb-b964-9503d50f7089\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-external-api-0" Feb 16 23:04:32 crc kubenswrapper[4865]: I0216 23:04:32.585174 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b47a4cd1-17e5-45eb-b964-9503d50f7089-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b47a4cd1-17e5-45eb-b964-9503d50f7089\") " pod="openstack/glance-default-external-api-0" Feb 16 23:04:32 crc kubenswrapper[4865]: I0216 23:04:32.588757 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b47a4cd1-17e5-45eb-b964-9503d50f7089-config-data\") pod 
\"glance-default-external-api-0\" (UID: \"b47a4cd1-17e5-45eb-b964-9503d50f7089\") " pod="openstack/glance-default-external-api-0" Feb 16 23:04:32 crc kubenswrapper[4865]: I0216 23:04:32.592773 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b47a4cd1-17e5-45eb-b964-9503d50f7089-scripts\") pod \"glance-default-external-api-0\" (UID: \"b47a4cd1-17e5-45eb-b964-9503d50f7089\") " pod="openstack/glance-default-external-api-0" Feb 16 23:04:32 crc kubenswrapper[4865]: I0216 23:04:32.599396 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b47a4cd1-17e5-45eb-b964-9503d50f7089-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b47a4cd1-17e5-45eb-b964-9503d50f7089\") " pod="openstack/glance-default-external-api-0" Feb 16 23:04:32 crc kubenswrapper[4865]: I0216 23:04:32.604484 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbs5z\" (UniqueName: \"kubernetes.io/projected/b47a4cd1-17e5-45eb-b964-9503d50f7089-kube-api-access-pbs5z\") pod \"glance-default-external-api-0\" (UID: \"b47a4cd1-17e5-45eb-b964-9503d50f7089\") " pod="openstack/glance-default-external-api-0" Feb 16 23:04:32 crc kubenswrapper[4865]: I0216 23:04:32.620852 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"b47a4cd1-17e5-45eb-b964-9503d50f7089\") " pod="openstack/glance-default-external-api-0" Feb 16 23:04:32 crc kubenswrapper[4865]: I0216 23:04:32.727742 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 23:04:33 crc kubenswrapper[4865]: I0216 23:04:33.079760 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-ljkjc"] Feb 16 23:04:33 crc kubenswrapper[4865]: I0216 23:04:33.082188 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-ljkjc" Feb 16 23:04:33 crc kubenswrapper[4865]: I0216 23:04:33.114925 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-ljkjc"] Feb 16 23:04:33 crc kubenswrapper[4865]: I0216 23:04:33.198447 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fe2a586d-4955-4b3d-8299-d5ea37cfe736-dns-swift-storage-0\") pod \"dnsmasq-dns-84b966f6c9-ljkjc\" (UID: \"fe2a586d-4955-4b3d-8299-d5ea37cfe736\") " pod="openstack/dnsmasq-dns-84b966f6c9-ljkjc" Feb 16 23:04:33 crc kubenswrapper[4865]: I0216 23:04:33.198520 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m26q5\" (UniqueName: \"kubernetes.io/projected/fe2a586d-4955-4b3d-8299-d5ea37cfe736-kube-api-access-m26q5\") pod \"dnsmasq-dns-84b966f6c9-ljkjc\" (UID: \"fe2a586d-4955-4b3d-8299-d5ea37cfe736\") " pod="openstack/dnsmasq-dns-84b966f6c9-ljkjc" Feb 16 23:04:33 crc kubenswrapper[4865]: I0216 23:04:33.198541 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fe2a586d-4955-4b3d-8299-d5ea37cfe736-dns-svc\") pod \"dnsmasq-dns-84b966f6c9-ljkjc\" (UID: \"fe2a586d-4955-4b3d-8299-d5ea37cfe736\") " pod="openstack/dnsmasq-dns-84b966f6c9-ljkjc" Feb 16 23:04:33 crc kubenswrapper[4865]: I0216 23:04:33.198581 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/fe2a586d-4955-4b3d-8299-d5ea37cfe736-config\") pod \"dnsmasq-dns-84b966f6c9-ljkjc\" (UID: \"fe2a586d-4955-4b3d-8299-d5ea37cfe736\") " pod="openstack/dnsmasq-dns-84b966f6c9-ljkjc" Feb 16 23:04:33 crc kubenswrapper[4865]: I0216 23:04:33.198606 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fe2a586d-4955-4b3d-8299-d5ea37cfe736-ovsdbserver-nb\") pod \"dnsmasq-dns-84b966f6c9-ljkjc\" (UID: \"fe2a586d-4955-4b3d-8299-d5ea37cfe736\") " pod="openstack/dnsmasq-dns-84b966f6c9-ljkjc" Feb 16 23:04:33 crc kubenswrapper[4865]: I0216 23:04:33.198642 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fe2a586d-4955-4b3d-8299-d5ea37cfe736-ovsdbserver-sb\") pod \"dnsmasq-dns-84b966f6c9-ljkjc\" (UID: \"fe2a586d-4955-4b3d-8299-d5ea37cfe736\") " pod="openstack/dnsmasq-dns-84b966f6c9-ljkjc" Feb 16 23:04:33 crc kubenswrapper[4865]: I0216 23:04:33.210418 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-84797ccfbd-g57dm"] Feb 16 23:04:33 crc kubenswrapper[4865]: I0216 23:04:33.221238 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-84797ccfbd-g57dm" Feb 16 23:04:33 crc kubenswrapper[4865]: I0216 23:04:33.232661 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-2nb8s" Feb 16 23:04:33 crc kubenswrapper[4865]: I0216 23:04:33.232977 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 16 23:04:33 crc kubenswrapper[4865]: I0216 23:04:33.233662 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 16 23:04:33 crc kubenswrapper[4865]: I0216 23:04:33.233870 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-84797ccfbd-g57dm"] Feb 16 23:04:33 crc kubenswrapper[4865]: I0216 23:04:33.233976 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 16 23:04:33 crc kubenswrapper[4865]: I0216 23:04:33.287880 4865 scope.go:117] "RemoveContainer" containerID="bb0e7b74cb33f07fe3ebc3e3afef3508d2d43c3f5bc9e9bf86744bbe239b1ec8" Feb 16 23:04:33 crc kubenswrapper[4865]: E0216 23:04:33.298134 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb0e7b74cb33f07fe3ebc3e3afef3508d2d43c3f5bc9e9bf86744bbe239b1ec8\": container with ID starting with bb0e7b74cb33f07fe3ebc3e3afef3508d2d43c3f5bc9e9bf86744bbe239b1ec8 not found: ID does not exist" containerID="bb0e7b74cb33f07fe3ebc3e3afef3508d2d43c3f5bc9e9bf86744bbe239b1ec8" Feb 16 23:04:33 crc kubenswrapper[4865]: I0216 23:04:33.298173 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb0e7b74cb33f07fe3ebc3e3afef3508d2d43c3f5bc9e9bf86744bbe239b1ec8"} err="failed to get container status \"bb0e7b74cb33f07fe3ebc3e3afef3508d2d43c3f5bc9e9bf86744bbe239b1ec8\": rpc error: code = NotFound desc = could not find container \"bb0e7b74cb33f07fe3ebc3e3afef3508d2d43c3f5bc9e9bf86744bbe239b1ec8\": 
container with ID starting with bb0e7b74cb33f07fe3ebc3e3afef3508d2d43c3f5bc9e9bf86744bbe239b1ec8 not found: ID does not exist" Feb 16 23:04:33 crc kubenswrapper[4865]: I0216 23:04:33.298204 4865 scope.go:117] "RemoveContainer" containerID="6a76488056e06b9585db1b4ac09f7763179363799267413a639287e62b3ea98a" Feb 16 23:04:33 crc kubenswrapper[4865]: I0216 23:04:33.300187 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/624b466e-6c64-454c-8f81-636a035d9903-ovndb-tls-certs\") pod \"neutron-84797ccfbd-g57dm\" (UID: \"624b466e-6c64-454c-8f81-636a035d9903\") " pod="openstack/neutron-84797ccfbd-g57dm" Feb 16 23:04:33 crc kubenswrapper[4865]: I0216 23:04:33.300221 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/624b466e-6c64-454c-8f81-636a035d9903-httpd-config\") pod \"neutron-84797ccfbd-g57dm\" (UID: \"624b466e-6c64-454c-8f81-636a035d9903\") " pod="openstack/neutron-84797ccfbd-g57dm" Feb 16 23:04:33 crc kubenswrapper[4865]: I0216 23:04:33.300257 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m26q5\" (UniqueName: \"kubernetes.io/projected/fe2a586d-4955-4b3d-8299-d5ea37cfe736-kube-api-access-m26q5\") pod \"dnsmasq-dns-84b966f6c9-ljkjc\" (UID: \"fe2a586d-4955-4b3d-8299-d5ea37cfe736\") " pod="openstack/dnsmasq-dns-84b966f6c9-ljkjc" Feb 16 23:04:33 crc kubenswrapper[4865]: I0216 23:04:33.300290 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fe2a586d-4955-4b3d-8299-d5ea37cfe736-dns-svc\") pod \"dnsmasq-dns-84b966f6c9-ljkjc\" (UID: \"fe2a586d-4955-4b3d-8299-d5ea37cfe736\") " pod="openstack/dnsmasq-dns-84b966f6c9-ljkjc" Feb 16 23:04:33 crc kubenswrapper[4865]: I0216 23:04:33.300340 4865 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe2a586d-4955-4b3d-8299-d5ea37cfe736-config\") pod \"dnsmasq-dns-84b966f6c9-ljkjc\" (UID: \"fe2a586d-4955-4b3d-8299-d5ea37cfe736\") " pod="openstack/dnsmasq-dns-84b966f6c9-ljkjc" Feb 16 23:04:33 crc kubenswrapper[4865]: I0216 23:04:33.300369 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fe2a586d-4955-4b3d-8299-d5ea37cfe736-ovsdbserver-nb\") pod \"dnsmasq-dns-84b966f6c9-ljkjc\" (UID: \"fe2a586d-4955-4b3d-8299-d5ea37cfe736\") " pod="openstack/dnsmasq-dns-84b966f6c9-ljkjc" Feb 16 23:04:33 crc kubenswrapper[4865]: I0216 23:04:33.300426 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fe2a586d-4955-4b3d-8299-d5ea37cfe736-ovsdbserver-sb\") pod \"dnsmasq-dns-84b966f6c9-ljkjc\" (UID: \"fe2a586d-4955-4b3d-8299-d5ea37cfe736\") " pod="openstack/dnsmasq-dns-84b966f6c9-ljkjc" Feb 16 23:04:33 crc kubenswrapper[4865]: I0216 23:04:33.300749 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d274d\" (UniqueName: \"kubernetes.io/projected/624b466e-6c64-454c-8f81-636a035d9903-kube-api-access-d274d\") pod \"neutron-84797ccfbd-g57dm\" (UID: \"624b466e-6c64-454c-8f81-636a035d9903\") " pod="openstack/neutron-84797ccfbd-g57dm" Feb 16 23:04:33 crc kubenswrapper[4865]: I0216 23:04:33.300891 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/624b466e-6c64-454c-8f81-636a035d9903-combined-ca-bundle\") pod \"neutron-84797ccfbd-g57dm\" (UID: \"624b466e-6c64-454c-8f81-636a035d9903\") " pod="openstack/neutron-84797ccfbd-g57dm" Feb 16 23:04:33 crc kubenswrapper[4865]: I0216 23:04:33.300933 4865 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fe2a586d-4955-4b3d-8299-d5ea37cfe736-dns-swift-storage-0\") pod \"dnsmasq-dns-84b966f6c9-ljkjc\" (UID: \"fe2a586d-4955-4b3d-8299-d5ea37cfe736\") " pod="openstack/dnsmasq-dns-84b966f6c9-ljkjc" Feb 16 23:04:33 crc kubenswrapper[4865]: I0216 23:04:33.301026 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/624b466e-6c64-454c-8f81-636a035d9903-config\") pod \"neutron-84797ccfbd-g57dm\" (UID: \"624b466e-6c64-454c-8f81-636a035d9903\") " pod="openstack/neutron-84797ccfbd-g57dm" Feb 16 23:04:33 crc kubenswrapper[4865]: I0216 23:04:33.301202 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe2a586d-4955-4b3d-8299-d5ea37cfe736-config\") pod \"dnsmasq-dns-84b966f6c9-ljkjc\" (UID: \"fe2a586d-4955-4b3d-8299-d5ea37cfe736\") " pod="openstack/dnsmasq-dns-84b966f6c9-ljkjc" Feb 16 23:04:33 crc kubenswrapper[4865]: I0216 23:04:33.301535 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fe2a586d-4955-4b3d-8299-d5ea37cfe736-ovsdbserver-nb\") pod \"dnsmasq-dns-84b966f6c9-ljkjc\" (UID: \"fe2a586d-4955-4b3d-8299-d5ea37cfe736\") " pod="openstack/dnsmasq-dns-84b966f6c9-ljkjc" Feb 16 23:04:33 crc kubenswrapper[4865]: I0216 23:04:33.301830 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fe2a586d-4955-4b3d-8299-d5ea37cfe736-dns-swift-storage-0\") pod \"dnsmasq-dns-84b966f6c9-ljkjc\" (UID: \"fe2a586d-4955-4b3d-8299-d5ea37cfe736\") " pod="openstack/dnsmasq-dns-84b966f6c9-ljkjc" Feb 16 23:04:33 crc kubenswrapper[4865]: I0216 23:04:33.302040 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/fe2a586d-4955-4b3d-8299-d5ea37cfe736-ovsdbserver-sb\") pod \"dnsmasq-dns-84b966f6c9-ljkjc\" (UID: \"fe2a586d-4955-4b3d-8299-d5ea37cfe736\") " pod="openstack/dnsmasq-dns-84b966f6c9-ljkjc" Feb 16 23:04:33 crc kubenswrapper[4865]: E0216 23:04:33.303772 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a76488056e06b9585db1b4ac09f7763179363799267413a639287e62b3ea98a\": container with ID starting with 6a76488056e06b9585db1b4ac09f7763179363799267413a639287e62b3ea98a not found: ID does not exist" containerID="6a76488056e06b9585db1b4ac09f7763179363799267413a639287e62b3ea98a" Feb 16 23:04:33 crc kubenswrapper[4865]: I0216 23:04:33.303815 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a76488056e06b9585db1b4ac09f7763179363799267413a639287e62b3ea98a"} err="failed to get container status \"6a76488056e06b9585db1b4ac09f7763179363799267413a639287e62b3ea98a\": rpc error: code = NotFound desc = could not find container \"6a76488056e06b9585db1b4ac09f7763179363799267413a639287e62b3ea98a\": container with ID starting with 6a76488056e06b9585db1b4ac09f7763179363799267413a639287e62b3ea98a not found: ID does not exist" Feb 16 23:04:33 crc kubenswrapper[4865]: I0216 23:04:33.303845 4865 scope.go:117] "RemoveContainer" containerID="bb0e7b74cb33f07fe3ebc3e3afef3508d2d43c3f5bc9e9bf86744bbe239b1ec8" Feb 16 23:04:33 crc kubenswrapper[4865]: I0216 23:04:33.305675 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb0e7b74cb33f07fe3ebc3e3afef3508d2d43c3f5bc9e9bf86744bbe239b1ec8"} err="failed to get container status \"bb0e7b74cb33f07fe3ebc3e3afef3508d2d43c3f5bc9e9bf86744bbe239b1ec8\": rpc error: code = NotFound desc = could not find container \"bb0e7b74cb33f07fe3ebc3e3afef3508d2d43c3f5bc9e9bf86744bbe239b1ec8\": container with ID starting with 
bb0e7b74cb33f07fe3ebc3e3afef3508d2d43c3f5bc9e9bf86744bbe239b1ec8 not found: ID does not exist" Feb 16 23:04:33 crc kubenswrapper[4865]: I0216 23:04:33.305712 4865 scope.go:117] "RemoveContainer" containerID="6a76488056e06b9585db1b4ac09f7763179363799267413a639287e62b3ea98a" Feb 16 23:04:33 crc kubenswrapper[4865]: I0216 23:04:33.306058 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a76488056e06b9585db1b4ac09f7763179363799267413a639287e62b3ea98a"} err="failed to get container status \"6a76488056e06b9585db1b4ac09f7763179363799267413a639287e62b3ea98a\": rpc error: code = NotFound desc = could not find container \"6a76488056e06b9585db1b4ac09f7763179363799267413a639287e62b3ea98a\": container with ID starting with 6a76488056e06b9585db1b4ac09f7763179363799267413a639287e62b3ea98a not found: ID does not exist" Feb 16 23:04:33 crc kubenswrapper[4865]: I0216 23:04:33.307914 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fe2a586d-4955-4b3d-8299-d5ea37cfe736-dns-svc\") pod \"dnsmasq-dns-84b966f6c9-ljkjc\" (UID: \"fe2a586d-4955-4b3d-8299-d5ea37cfe736\") " pod="openstack/dnsmasq-dns-84b966f6c9-ljkjc" Feb 16 23:04:33 crc kubenswrapper[4865]: I0216 23:04:33.328110 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m26q5\" (UniqueName: \"kubernetes.io/projected/fe2a586d-4955-4b3d-8299-d5ea37cfe736-kube-api-access-m26q5\") pod \"dnsmasq-dns-84b966f6c9-ljkjc\" (UID: \"fe2a586d-4955-4b3d-8299-d5ea37cfe736\") " pod="openstack/dnsmasq-dns-84b966f6c9-ljkjc" Feb 16 23:04:33 crc kubenswrapper[4865]: E0216 23:04:33.360671 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 16 23:04:33 crc kubenswrapper[4865]: E0216 23:04:33.360793 4865 kuberuntime_manager.go:1274] 
"Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9xtks,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privile
ged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-g9mg8_openstack(c43aca4e-9612-43a8-8af2-5f32e4378af7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 23:04:33 crc kubenswrapper[4865]: E0216 23:04:33.362851 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-g9mg8" podUID="c43aca4e-9612-43a8-8af2-5f32e4378af7" Feb 16 23:04:33 crc kubenswrapper[4865]: I0216 23:04:33.403426 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d274d\" (UniqueName: \"kubernetes.io/projected/624b466e-6c64-454c-8f81-636a035d9903-kube-api-access-d274d\") pod \"neutron-84797ccfbd-g57dm\" (UID: \"624b466e-6c64-454c-8f81-636a035d9903\") " pod="openstack/neutron-84797ccfbd-g57dm" Feb 16 23:04:33 crc kubenswrapper[4865]: I0216 23:04:33.403501 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/624b466e-6c64-454c-8f81-636a035d9903-combined-ca-bundle\") pod \"neutron-84797ccfbd-g57dm\" (UID: \"624b466e-6c64-454c-8f81-636a035d9903\") " pod="openstack/neutron-84797ccfbd-g57dm" Feb 16 23:04:33 crc kubenswrapper[4865]: I0216 23:04:33.403541 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/624b466e-6c64-454c-8f81-636a035d9903-config\") pod 
\"neutron-84797ccfbd-g57dm\" (UID: \"624b466e-6c64-454c-8f81-636a035d9903\") " pod="openstack/neutron-84797ccfbd-g57dm" Feb 16 23:04:33 crc kubenswrapper[4865]: I0216 23:04:33.403558 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/624b466e-6c64-454c-8f81-636a035d9903-ovndb-tls-certs\") pod \"neutron-84797ccfbd-g57dm\" (UID: \"624b466e-6c64-454c-8f81-636a035d9903\") " pod="openstack/neutron-84797ccfbd-g57dm" Feb 16 23:04:33 crc kubenswrapper[4865]: I0216 23:04:33.403577 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/624b466e-6c64-454c-8f81-636a035d9903-httpd-config\") pod \"neutron-84797ccfbd-g57dm\" (UID: \"624b466e-6c64-454c-8f81-636a035d9903\") " pod="openstack/neutron-84797ccfbd-g57dm" Feb 16 23:04:33 crc kubenswrapper[4865]: I0216 23:04:33.408646 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/624b466e-6c64-454c-8f81-636a035d9903-config\") pod \"neutron-84797ccfbd-g57dm\" (UID: \"624b466e-6c64-454c-8f81-636a035d9903\") " pod="openstack/neutron-84797ccfbd-g57dm" Feb 16 23:04:33 crc kubenswrapper[4865]: I0216 23:04:33.409756 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/624b466e-6c64-454c-8f81-636a035d9903-combined-ca-bundle\") pod \"neutron-84797ccfbd-g57dm\" (UID: \"624b466e-6c64-454c-8f81-636a035d9903\") " pod="openstack/neutron-84797ccfbd-g57dm" Feb 16 23:04:33 crc kubenswrapper[4865]: I0216 23:04:33.411400 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-ljkjc" Feb 16 23:04:33 crc kubenswrapper[4865]: I0216 23:04:33.414359 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/624b466e-6c64-454c-8f81-636a035d9903-httpd-config\") pod \"neutron-84797ccfbd-g57dm\" (UID: \"624b466e-6c64-454c-8f81-636a035d9903\") " pod="openstack/neutron-84797ccfbd-g57dm" Feb 16 23:04:33 crc kubenswrapper[4865]: I0216 23:04:33.423787 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/624b466e-6c64-454c-8f81-636a035d9903-ovndb-tls-certs\") pod \"neutron-84797ccfbd-g57dm\" (UID: \"624b466e-6c64-454c-8f81-636a035d9903\") " pod="openstack/neutron-84797ccfbd-g57dm" Feb 16 23:04:33 crc kubenswrapper[4865]: I0216 23:04:33.436677 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d274d\" (UniqueName: \"kubernetes.io/projected/624b466e-6c64-454c-8f81-636a035d9903-kube-api-access-d274d\") pod \"neutron-84797ccfbd-g57dm\" (UID: \"624b466e-6c64-454c-8f81-636a035d9903\") " pod="openstack/neutron-84797ccfbd-g57dm" Feb 16 23:04:33 crc kubenswrapper[4865]: I0216 23:04:33.460474 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-84797ccfbd-g57dm" Feb 16 23:04:33 crc kubenswrapper[4865]: I0216 23:04:33.820719 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5dbb7f8956-m76fk"] Feb 16 23:04:34 crc kubenswrapper[4865]: I0216 23:04:34.100847 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7ff854866d-9gv97"] Feb 16 23:04:34 crc kubenswrapper[4865]: I0216 23:04:34.198519 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 23:04:34 crc kubenswrapper[4865]: W0216 23:04:34.226893 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d0a70da_4482_4ab9_8503_c324267212fa.slice/crio-db3d1f41ea3fb3123473a2023a7ae902c4f983f2460e743c360c3e294d170fcb WatchSource:0}: Error finding container db3d1f41ea3fb3123473a2023a7ae902c4f983f2460e743c360c3e294d170fcb: Status 404 returned error can't find the container with id db3d1f41ea3fb3123473a2023a7ae902c4f983f2460e743c360c3e294d170fcb Feb 16 23:04:34 crc kubenswrapper[4865]: I0216 23:04:34.262431 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-ww67n"] Feb 16 23:04:34 crc kubenswrapper[4865]: I0216 23:04:34.265312 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1bbb6cce-272f-421d-a4f2-af006f112e21","Type":"ContainerStarted","Data":"04aed157248f61a7a8beeaa92b88351bcb0d08da029c6bdb89e6b2cebd23fad9"} Feb 16 23:04:34 crc kubenswrapper[4865]: I0216 23:04:34.275347 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-ljkjc"] Feb 16 23:04:34 crc kubenswrapper[4865]: I0216 23:04:34.275815 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7df6bbb45-chfb9" 
event={"ID":"45e13675-3a58-42c2-9236-eab676096763","Type":"ContainerStarted","Data":"b82ae186bb0fcb67df24ee52963c8d18b75df3868c5106186bf9a30030b020c7"} Feb 16 23:04:34 crc kubenswrapper[4865]: I0216 23:04:34.275896 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7df6bbb45-chfb9" event={"ID":"45e13675-3a58-42c2-9236-eab676096763","Type":"ContainerStarted","Data":"aea1fe96d19fce750f800cfc99d4cfb04192853bd664b834ffa8d5ecfc916b91"} Feb 16 23:04:34 crc kubenswrapper[4865]: I0216 23:04:34.276067 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7df6bbb45-chfb9" podUID="45e13675-3a58-42c2-9236-eab676096763" containerName="horizon-log" containerID="cri-o://aea1fe96d19fce750f800cfc99d4cfb04192853bd664b834ffa8d5ecfc916b91" gracePeriod=30 Feb 16 23:04:34 crc kubenswrapper[4865]: I0216 23:04:34.276491 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7df6bbb45-chfb9" podUID="45e13675-3a58-42c2-9236-eab676096763" containerName="horizon" containerID="cri-o://b82ae186bb0fcb67df24ee52963c8d18b75df3868c5106186bf9a30030b020c7" gracePeriod=30 Feb 16 23:04:34 crc kubenswrapper[4865]: I0216 23:04:34.280031 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5f446879fc-6xxxh" event={"ID":"70db55c6-255b-4aab-8d14-2675be446bfb","Type":"ContainerStarted","Data":"7033510b24be1f83bd260184e30f8b290cbbfd73eeba3451affa576310230dd3"} Feb 16 23:04:34 crc kubenswrapper[4865]: I0216 23:04:34.280073 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5f446879fc-6xxxh" event={"ID":"70db55c6-255b-4aab-8d14-2675be446bfb","Type":"ContainerStarted","Data":"d719194b98599c5fa31c1043d6758ccc8f7dc3441f9c9325939f83511cbc8451"} Feb 16 23:04:34 crc kubenswrapper[4865]: I0216 23:04:34.280183 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5f446879fc-6xxxh" 
podUID="70db55c6-255b-4aab-8d14-2675be446bfb" containerName="horizon-log" containerID="cri-o://d719194b98599c5fa31c1043d6758ccc8f7dc3441f9c9325939f83511cbc8451" gracePeriod=30 Feb 16 23:04:34 crc kubenswrapper[4865]: I0216 23:04:34.280261 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5f446879fc-6xxxh" podUID="70db55c6-255b-4aab-8d14-2675be446bfb" containerName="horizon" containerID="cri-o://7033510b24be1f83bd260184e30f8b290cbbfd73eeba3451affa576310230dd3" gracePeriod=30 Feb 16 23:04:34 crc kubenswrapper[4865]: I0216 23:04:34.299497 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b3d4660e-fca1-47c9-bb64-7ac074d97085","Type":"ContainerStarted","Data":"064f0132ad4c96ea4d5335a09252adf5f3ac0357be43b18bac231dc13a0fed5b"} Feb 16 23:04:34 crc kubenswrapper[4865]: I0216 23:04:34.301263 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7ff854866d-9gv97" event={"ID":"17473ac0-4cf6-4751-ac0e-fd73a9fc2c7a","Type":"ContainerStarted","Data":"71aeea39564eba31ad260b53f8a5f9567e4e77a851e444f5d17a015670ffb194"} Feb 16 23:04:34 crc kubenswrapper[4865]: I0216 23:04:34.303421 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5dbb7f8956-m76fk" event={"ID":"cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3","Type":"ContainerStarted","Data":"5fc273bbb3c9c3f7431d58f42273042ec637806a09beb179e3b7c7fb8231c767"} Feb 16 23:04:34 crc kubenswrapper[4865]: I0216 23:04:34.303475 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5dbb7f8956-m76fk" event={"ID":"cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3","Type":"ContainerStarted","Data":"a4a9dab95832c674ae1c91886d10a39340070e56b6f0df0a3e9c6f066169d932"} Feb 16 23:04:34 crc kubenswrapper[4865]: I0216 23:04:34.304108 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7df6bbb45-chfb9" podStartSLOduration=4.732181108 
podStartE2EDuration="32.304090743s" podCreationTimestamp="2026-02-16 23:04:02 +0000 UTC" firstStartedPulling="2026-02-16 23:04:04.214244081 +0000 UTC m=+1084.537951042" lastFinishedPulling="2026-02-16 23:04:31.786153706 +0000 UTC m=+1112.109860677" observedRunningTime="2026-02-16 23:04:34.296465226 +0000 UTC m=+1114.620172177" watchObservedRunningTime="2026-02-16 23:04:34.304090743 +0000 UTC m=+1114.627797704" Feb 16 23:04:34 crc kubenswrapper[4865]: E0216 23:04:34.312221 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-g9mg8" podUID="c43aca4e-9612-43a8-8af2-5f32e4378af7" Feb 16 23:04:34 crc kubenswrapper[4865]: I0216 23:04:34.352331 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5f446879fc-6xxxh" podStartSLOduration=3.000114681 podStartE2EDuration="30.352307292s" podCreationTimestamp="2026-02-16 23:04:04 +0000 UTC" firstStartedPulling="2026-02-16 23:04:05.973343078 +0000 UTC m=+1086.297050039" lastFinishedPulling="2026-02-16 23:04:33.325535689 +0000 UTC m=+1113.649242650" observedRunningTime="2026-02-16 23:04:34.318601725 +0000 UTC m=+1114.642308746" watchObservedRunningTime="2026-02-16 23:04:34.352307292 +0000 UTC m=+1114.676014253" Feb 16 23:04:34 crc kubenswrapper[4865]: I0216 23:04:34.503538 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 23:04:34 crc kubenswrapper[4865]: I0216 23:04:34.577897 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-84797ccfbd-g57dm"] Feb 16 23:04:35 crc kubenswrapper[4865]: I0216 23:04:35.324486 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7ff854866d-9gv97" 
event={"ID":"17473ac0-4cf6-4751-ac0e-fd73a9fc2c7a","Type":"ContainerStarted","Data":"c45bbb3bf11da8698401328923c9c4a8bf49afb4ef604b14494b1d0cb5b616a9"} Feb 16 23:04:35 crc kubenswrapper[4865]: I0216 23:04:35.324853 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7ff854866d-9gv97" event={"ID":"17473ac0-4cf6-4751-ac0e-fd73a9fc2c7a","Type":"ContainerStarted","Data":"693f912c2be76a6b8a779e69e7f216f68b4131e20a9c71f0cb79349e0b850830"} Feb 16 23:04:35 crc kubenswrapper[4865]: I0216 23:04:35.327973 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5f446879fc-6xxxh" Feb 16 23:04:35 crc kubenswrapper[4865]: I0216 23:04:35.335345 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-84797ccfbd-g57dm" event={"ID":"624b466e-6c64-454c-8f81-636a035d9903","Type":"ContainerStarted","Data":"135488e7c401818eb30cf0f3365f3c5c1fcf7f94baab337cc2e52d7cd7c76d75"} Feb 16 23:04:35 crc kubenswrapper[4865]: I0216 23:04:35.335392 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-84797ccfbd-g57dm" event={"ID":"624b466e-6c64-454c-8f81-636a035d9903","Type":"ContainerStarted","Data":"ebc1ca05b742248ab2abb9e3cb831650c42f8a4420b33d1fb1cb2e924737d5b9"} Feb 16 23:04:35 crc kubenswrapper[4865]: I0216 23:04:35.335403 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-84797ccfbd-g57dm" event={"ID":"624b466e-6c64-454c-8f81-636a035d9903","Type":"ContainerStarted","Data":"9993ee6b986709c7d53c29ec26adb2bf1c52141827f25a2335ad19b44d09a2c5"} Feb 16 23:04:35 crc kubenswrapper[4865]: I0216 23:04:35.335949 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-84797ccfbd-g57dm" Feb 16 23:04:35 crc kubenswrapper[4865]: I0216 23:04:35.348115 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"b47a4cd1-17e5-45eb-b964-9503d50f7089","Type":"ContainerStarted","Data":"cf65017e9bfc48772da0e56ae1e2b15f6cbb1e7fa330ac12be3e7dd7acde772e"} Feb 16 23:04:35 crc kubenswrapper[4865]: I0216 23:04:35.361938 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ww67n" event={"ID":"6d0a70da-4482-4ab9-8503-c324267212fa","Type":"ContainerStarted","Data":"4b751e9d2f95652c660fe978c63733e22fb371d03c9aa3da519badc182b53af5"} Feb 16 23:04:35 crc kubenswrapper[4865]: I0216 23:04:35.362072 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ww67n" event={"ID":"6d0a70da-4482-4ab9-8503-c324267212fa","Type":"ContainerStarted","Data":"db3d1f41ea3fb3123473a2023a7ae902c4f983f2460e743c360c3e294d170fcb"} Feb 16 23:04:35 crc kubenswrapper[4865]: I0216 23:04:35.366906 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7ff854866d-9gv97" podStartSLOduration=24.366889639 podStartE2EDuration="24.366889639s" podCreationTimestamp="2026-02-16 23:04:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 23:04:35.362502444 +0000 UTC m=+1115.686209405" watchObservedRunningTime="2026-02-16 23:04:35.366889639 +0000 UTC m=+1115.690596600" Feb 16 23:04:35 crc kubenswrapper[4865]: I0216 23:04:35.368776 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b3d4660e-fca1-47c9-bb64-7ac074d97085","Type":"ContainerStarted","Data":"7a8c72aafdda712079deb0dfdfeff72b6622cd5d06602d06bc4df7cfe34acb60"} Feb 16 23:04:35 crc kubenswrapper[4865]: I0216 23:04:35.370458 4865 generic.go:334] "Generic (PLEG): container finished" podID="fe2a586d-4955-4b3d-8299-d5ea37cfe736" containerID="2814a696863ed65e87220948af64bcc32663881cdf6e71652298aea112be109e" exitCode=0 Feb 16 23:04:35 crc kubenswrapper[4865]: I0216 23:04:35.370497 4865 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-ljkjc" event={"ID":"fe2a586d-4955-4b3d-8299-d5ea37cfe736","Type":"ContainerDied","Data":"2814a696863ed65e87220948af64bcc32663881cdf6e71652298aea112be109e"} Feb 16 23:04:35 crc kubenswrapper[4865]: I0216 23:04:35.370513 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-ljkjc" event={"ID":"fe2a586d-4955-4b3d-8299-d5ea37cfe736","Type":"ContainerStarted","Data":"eaf9c894b346fb5f635c2795318a295dab4d7de9b040494812af1f0b64651804"} Feb 16 23:04:35 crc kubenswrapper[4865]: I0216 23:04:35.373849 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5dbb7f8956-m76fk" event={"ID":"cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3","Type":"ContainerStarted","Data":"fe4078ad96c214d8c6f52173dba97bd712dac7fd5f98c905b528ed5de0c2e126"} Feb 16 23:04:35 crc kubenswrapper[4865]: I0216 23:04:35.387838 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-84797ccfbd-g57dm" podStartSLOduration=2.387823733 podStartE2EDuration="2.387823733s" podCreationTimestamp="2026-02-16 23:04:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 23:04:35.38699362 +0000 UTC m=+1115.710700571" watchObservedRunningTime="2026-02-16 23:04:35.387823733 +0000 UTC m=+1115.711530694" Feb 16 23:04:35 crc kubenswrapper[4865]: I0216 23:04:35.436867 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-ww67n" podStartSLOduration=13.436848386 podStartE2EDuration="13.436848386s" podCreationTimestamp="2026-02-16 23:04:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 23:04:35.431800752 +0000 UTC m=+1115.755507713" watchObservedRunningTime="2026-02-16 23:04:35.436848386 +0000 UTC m=+1115.760555337" Feb 16 
23:04:35 crc kubenswrapper[4865]: I0216 23:04:35.468068 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5dbb7f8956-m76fk" podStartSLOduration=24.468050282 podStartE2EDuration="24.468050282s" podCreationTimestamp="2026-02-16 23:04:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 23:04:35.454739014 +0000 UTC m=+1115.778445975" watchObservedRunningTime="2026-02-16 23:04:35.468050282 +0000 UTC m=+1115.791757243" Feb 16 23:04:35 crc kubenswrapper[4865]: I0216 23:04:35.834169 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-854576c7c7-t47q8"] Feb 16 23:04:35 crc kubenswrapper[4865]: I0216 23:04:35.835924 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-854576c7c7-t47q8" Feb 16 23:04:35 crc kubenswrapper[4865]: I0216 23:04:35.845740 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 16 23:04:35 crc kubenswrapper[4865]: I0216 23:04:35.848153 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 16 23:04:35 crc kubenswrapper[4865]: I0216 23:04:35.854321 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-854576c7c7-t47q8"] Feb 16 23:04:35 crc kubenswrapper[4865]: I0216 23:04:35.968043 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/08a3ce49-d4ac-4627-b5cb-65305d115cef-ovndb-tls-certs\") pod \"neutron-854576c7c7-t47q8\" (UID: \"08a3ce49-d4ac-4627-b5cb-65305d115cef\") " pod="openstack/neutron-854576c7c7-t47q8" Feb 16 23:04:35 crc kubenswrapper[4865]: I0216 23:04:35.968403 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/08a3ce49-d4ac-4627-b5cb-65305d115cef-combined-ca-bundle\") pod \"neutron-854576c7c7-t47q8\" (UID: \"08a3ce49-d4ac-4627-b5cb-65305d115cef\") " pod="openstack/neutron-854576c7c7-t47q8" Feb 16 23:04:35 crc kubenswrapper[4865]: I0216 23:04:35.968527 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/08a3ce49-d4ac-4627-b5cb-65305d115cef-public-tls-certs\") pod \"neutron-854576c7c7-t47q8\" (UID: \"08a3ce49-d4ac-4627-b5cb-65305d115cef\") " pod="openstack/neutron-854576c7c7-t47q8" Feb 16 23:04:35 crc kubenswrapper[4865]: I0216 23:04:35.968599 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/08a3ce49-d4ac-4627-b5cb-65305d115cef-internal-tls-certs\") pod \"neutron-854576c7c7-t47q8\" (UID: \"08a3ce49-d4ac-4627-b5cb-65305d115cef\") " pod="openstack/neutron-854576c7c7-t47q8" Feb 16 23:04:35 crc kubenswrapper[4865]: I0216 23:04:35.968703 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjt2j\" (UniqueName: \"kubernetes.io/projected/08a3ce49-d4ac-4627-b5cb-65305d115cef-kube-api-access-kjt2j\") pod \"neutron-854576c7c7-t47q8\" (UID: \"08a3ce49-d4ac-4627-b5cb-65305d115cef\") " pod="openstack/neutron-854576c7c7-t47q8" Feb 16 23:04:35 crc kubenswrapper[4865]: I0216 23:04:35.969209 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/08a3ce49-d4ac-4627-b5cb-65305d115cef-config\") pod \"neutron-854576c7c7-t47q8\" (UID: \"08a3ce49-d4ac-4627-b5cb-65305d115cef\") " pod="openstack/neutron-854576c7c7-t47q8" Feb 16 23:04:35 crc kubenswrapper[4865]: I0216 23:04:35.969304 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/08a3ce49-d4ac-4627-b5cb-65305d115cef-httpd-config\") pod \"neutron-854576c7c7-t47q8\" (UID: \"08a3ce49-d4ac-4627-b5cb-65305d115cef\") " pod="openstack/neutron-854576c7c7-t47q8" Feb 16 23:04:36 crc kubenswrapper[4865]: I0216 23:04:36.077892 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/08a3ce49-d4ac-4627-b5cb-65305d115cef-ovndb-tls-certs\") pod \"neutron-854576c7c7-t47q8\" (UID: \"08a3ce49-d4ac-4627-b5cb-65305d115cef\") " pod="openstack/neutron-854576c7c7-t47q8" Feb 16 23:04:36 crc kubenswrapper[4865]: I0216 23:04:36.077954 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08a3ce49-d4ac-4627-b5cb-65305d115cef-combined-ca-bundle\") pod \"neutron-854576c7c7-t47q8\" (UID: \"08a3ce49-d4ac-4627-b5cb-65305d115cef\") " pod="openstack/neutron-854576c7c7-t47q8" Feb 16 23:04:36 crc kubenswrapper[4865]: I0216 23:04:36.078073 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/08a3ce49-d4ac-4627-b5cb-65305d115cef-public-tls-certs\") pod \"neutron-854576c7c7-t47q8\" (UID: \"08a3ce49-d4ac-4627-b5cb-65305d115cef\") " pod="openstack/neutron-854576c7c7-t47q8" Feb 16 23:04:36 crc kubenswrapper[4865]: I0216 23:04:36.078092 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/08a3ce49-d4ac-4627-b5cb-65305d115cef-internal-tls-certs\") pod \"neutron-854576c7c7-t47q8\" (UID: \"08a3ce49-d4ac-4627-b5cb-65305d115cef\") " pod="openstack/neutron-854576c7c7-t47q8" Feb 16 23:04:36 crc kubenswrapper[4865]: I0216 23:04:36.078578 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjt2j\" (UniqueName: 
\"kubernetes.io/projected/08a3ce49-d4ac-4627-b5cb-65305d115cef-kube-api-access-kjt2j\") pod \"neutron-854576c7c7-t47q8\" (UID: \"08a3ce49-d4ac-4627-b5cb-65305d115cef\") " pod="openstack/neutron-854576c7c7-t47q8" Feb 16 23:04:36 crc kubenswrapper[4865]: I0216 23:04:36.078657 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/08a3ce49-d4ac-4627-b5cb-65305d115cef-config\") pod \"neutron-854576c7c7-t47q8\" (UID: \"08a3ce49-d4ac-4627-b5cb-65305d115cef\") " pod="openstack/neutron-854576c7c7-t47q8" Feb 16 23:04:36 crc kubenswrapper[4865]: I0216 23:04:36.079029 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/08a3ce49-d4ac-4627-b5cb-65305d115cef-httpd-config\") pod \"neutron-854576c7c7-t47q8\" (UID: \"08a3ce49-d4ac-4627-b5cb-65305d115cef\") " pod="openstack/neutron-854576c7c7-t47q8" Feb 16 23:04:36 crc kubenswrapper[4865]: I0216 23:04:36.087081 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08a3ce49-d4ac-4627-b5cb-65305d115cef-combined-ca-bundle\") pod \"neutron-854576c7c7-t47q8\" (UID: \"08a3ce49-d4ac-4627-b5cb-65305d115cef\") " pod="openstack/neutron-854576c7c7-t47q8" Feb 16 23:04:36 crc kubenswrapper[4865]: I0216 23:04:36.088901 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/08a3ce49-d4ac-4627-b5cb-65305d115cef-httpd-config\") pod \"neutron-854576c7c7-t47q8\" (UID: \"08a3ce49-d4ac-4627-b5cb-65305d115cef\") " pod="openstack/neutron-854576c7c7-t47q8" Feb 16 23:04:36 crc kubenswrapper[4865]: I0216 23:04:36.089186 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/08a3ce49-d4ac-4627-b5cb-65305d115cef-config\") pod \"neutron-854576c7c7-t47q8\" (UID: 
\"08a3ce49-d4ac-4627-b5cb-65305d115cef\") " pod="openstack/neutron-854576c7c7-t47q8" Feb 16 23:04:36 crc kubenswrapper[4865]: I0216 23:04:36.089480 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/08a3ce49-d4ac-4627-b5cb-65305d115cef-ovndb-tls-certs\") pod \"neutron-854576c7c7-t47q8\" (UID: \"08a3ce49-d4ac-4627-b5cb-65305d115cef\") " pod="openstack/neutron-854576c7c7-t47q8" Feb 16 23:04:36 crc kubenswrapper[4865]: I0216 23:04:36.089560 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/08a3ce49-d4ac-4627-b5cb-65305d115cef-public-tls-certs\") pod \"neutron-854576c7c7-t47q8\" (UID: \"08a3ce49-d4ac-4627-b5cb-65305d115cef\") " pod="openstack/neutron-854576c7c7-t47q8" Feb 16 23:04:36 crc kubenswrapper[4865]: I0216 23:04:36.089666 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/08a3ce49-d4ac-4627-b5cb-65305d115cef-internal-tls-certs\") pod \"neutron-854576c7c7-t47q8\" (UID: \"08a3ce49-d4ac-4627-b5cb-65305d115cef\") " pod="openstack/neutron-854576c7c7-t47q8" Feb 16 23:04:36 crc kubenswrapper[4865]: I0216 23:04:36.100314 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjt2j\" (UniqueName: \"kubernetes.io/projected/08a3ce49-d4ac-4627-b5cb-65305d115cef-kube-api-access-kjt2j\") pod \"neutron-854576c7c7-t47q8\" (UID: \"08a3ce49-d4ac-4627-b5cb-65305d115cef\") " pod="openstack/neutron-854576c7c7-t47q8" Feb 16 23:04:36 crc kubenswrapper[4865]: I0216 23:04:36.150871 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-854576c7c7-t47q8" Feb 16 23:04:36 crc kubenswrapper[4865]: I0216 23:04:36.465116 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b47a4cd1-17e5-45eb-b964-9503d50f7089","Type":"ContainerStarted","Data":"86b8821ea672c3a2e52e20af623d56b5f97f0fb05784431ffec0cf8d144dab7f"} Feb 16 23:04:36 crc kubenswrapper[4865]: W0216 23:04:36.973088 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08a3ce49_d4ac_4627_b5cb_65305d115cef.slice/crio-1c3bb27ff482055607f8ee9914c6c17d930f8ef1fef10dc4a4d2f34a778b830e WatchSource:0}: Error finding container 1c3bb27ff482055607f8ee9914c6c17d930f8ef1fef10dc4a4d2f34a778b830e: Status 404 returned error can't find the container with id 1c3bb27ff482055607f8ee9914c6c17d930f8ef1fef10dc4a4d2f34a778b830e Feb 16 23:04:36 crc kubenswrapper[4865]: I0216 23:04:36.977220 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-854576c7c7-t47q8"] Feb 16 23:04:37 crc kubenswrapper[4865]: I0216 23:04:37.483755 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b47a4cd1-17e5-45eb-b964-9503d50f7089","Type":"ContainerStarted","Data":"905f21f21331466d23c4ae34ea7635326aedf41d22911a5fae838bb014c1097e"} Feb 16 23:04:37 crc kubenswrapper[4865]: I0216 23:04:37.498030 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b3d4660e-fca1-47c9-bb64-7ac074d97085","Type":"ContainerStarted","Data":"334e331bac1b4abee6f3b70e8450b0ac7b26384cf9d553d77327e718c740c6ac"} Feb 16 23:04:37 crc kubenswrapper[4865]: I0216 23:04:37.498295 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="b3d4660e-fca1-47c9-bb64-7ac074d97085" containerName="glance-log" 
containerID="cri-o://7a8c72aafdda712079deb0dfdfeff72b6622cd5d06602d06bc4df7cfe34acb60" gracePeriod=30 Feb 16 23:04:37 crc kubenswrapper[4865]: I0216 23:04:37.498424 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="b3d4660e-fca1-47c9-bb64-7ac074d97085" containerName="glance-httpd" containerID="cri-o://334e331bac1b4abee6f3b70e8450b0ac7b26384cf9d553d77327e718c740c6ac" gracePeriod=30 Feb 16 23:04:37 crc kubenswrapper[4865]: I0216 23:04:37.525960 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.525945882 podStartE2EDuration="5.525945882s" podCreationTimestamp="2026-02-16 23:04:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 23:04:37.510511884 +0000 UTC m=+1117.834218845" watchObservedRunningTime="2026-02-16 23:04:37.525945882 +0000 UTC m=+1117.849652843" Feb 16 23:04:37 crc kubenswrapper[4865]: I0216 23:04:37.530040 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-ljkjc" event={"ID":"fe2a586d-4955-4b3d-8299-d5ea37cfe736","Type":"ContainerStarted","Data":"dcfde899eb7c86d6c47684aa3e3841f201dc92ccc23dbc31377bd673066f16a0"} Feb 16 23:04:37 crc kubenswrapper[4865]: I0216 23:04:37.530309 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-84b966f6c9-ljkjc" Feb 16 23:04:37 crc kubenswrapper[4865]: I0216 23:04:37.535451 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-t75c2" event={"ID":"c180ab1a-1202-492c-ab2e-57c2232d8b64","Type":"ContainerStarted","Data":"e1a190f3fe17f7742d386d08d221ee5860fa36adae67f45314b13277ccb337a3"} Feb 16 23:04:37 crc kubenswrapper[4865]: I0216 23:04:37.547646 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/glance-default-internal-api-0" podStartSLOduration=29.547614298 podStartE2EDuration="29.547614298s" podCreationTimestamp="2026-02-16 23:04:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 23:04:37.534486155 +0000 UTC m=+1117.858193136" watchObservedRunningTime="2026-02-16 23:04:37.547614298 +0000 UTC m=+1117.871321249" Feb 16 23:04:37 crc kubenswrapper[4865]: I0216 23:04:37.554909 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1bbb6cce-272f-421d-a4f2-af006f112e21","Type":"ContainerStarted","Data":"55b1a0af898b5085dcca9774e9ba2e9a3a19e4d58b5df88f7c56cc3f8cfefae2"} Feb 16 23:04:37 crc kubenswrapper[4865]: I0216 23:04:37.567259 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-854576c7c7-t47q8" event={"ID":"08a3ce49-d4ac-4627-b5cb-65305d115cef","Type":"ContainerStarted","Data":"dd461fd86f89bf6d7bbdc9c6cc260d2415b6afd1ae5495e82170f62b2618bbae"} Feb 16 23:04:37 crc kubenswrapper[4865]: I0216 23:04:37.569185 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-854576c7c7-t47q8" event={"ID":"08a3ce49-d4ac-4627-b5cb-65305d115cef","Type":"ContainerStarted","Data":"1c3bb27ff482055607f8ee9914c6c17d930f8ef1fef10dc4a4d2f34a778b830e"} Feb 16 23:04:37 crc kubenswrapper[4865]: I0216 23:04:37.571355 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-854576c7c7-t47q8" Feb 16 23:04:37 crc kubenswrapper[4865]: I0216 23:04:37.568743 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-84b966f6c9-ljkjc" podStartSLOduration=4.568727807 podStartE2EDuration="4.568727807s" podCreationTimestamp="2026-02-16 23:04:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 23:04:37.565944268 +0000 UTC 
m=+1117.889651229" watchObservedRunningTime="2026-02-16 23:04:37.568727807 +0000 UTC m=+1117.892434768" Feb 16 23:04:37 crc kubenswrapper[4865]: I0216 23:04:37.586536 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-t75c2" podStartSLOduration=2.506656791 podStartE2EDuration="35.586515953s" podCreationTimestamp="2026-02-16 23:04:02 +0000 UTC" firstStartedPulling="2026-02-16 23:04:04.054399415 +0000 UTC m=+1084.378106376" lastFinishedPulling="2026-02-16 23:04:37.134258577 +0000 UTC m=+1117.457965538" observedRunningTime="2026-02-16 23:04:37.584720862 +0000 UTC m=+1117.908427823" watchObservedRunningTime="2026-02-16 23:04:37.586515953 +0000 UTC m=+1117.910222914" Feb 16 23:04:37 crc kubenswrapper[4865]: I0216 23:04:37.631039 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-854576c7c7-t47q8" podStartSLOduration=2.631021847 podStartE2EDuration="2.631021847s" podCreationTimestamp="2026-02-16 23:04:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 23:04:37.629203925 +0000 UTC m=+1117.952910886" watchObservedRunningTime="2026-02-16 23:04:37.631021847 +0000 UTC m=+1117.954728808" Feb 16 23:04:38 crc kubenswrapper[4865]: I0216 23:04:38.577188 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-854576c7c7-t47q8" event={"ID":"08a3ce49-d4ac-4627-b5cb-65305d115cef","Type":"ContainerStarted","Data":"479f004d56b8a57c569210823ae832b6c6a76320b35cb2f027d19fb019168dd8"} Feb 16 23:04:38 crc kubenswrapper[4865]: I0216 23:04:38.581485 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 23:04:38 crc kubenswrapper[4865]: I0216 23:04:38.582351 4865 generic.go:334] "Generic (PLEG): container finished" podID="b3d4660e-fca1-47c9-bb64-7ac074d97085" containerID="334e331bac1b4abee6f3b70e8450b0ac7b26384cf9d553d77327e718c740c6ac" exitCode=0 Feb 16 23:04:38 crc kubenswrapper[4865]: I0216 23:04:38.582379 4865 generic.go:334] "Generic (PLEG): container finished" podID="b3d4660e-fca1-47c9-bb64-7ac074d97085" containerID="7a8c72aafdda712079deb0dfdfeff72b6622cd5d06602d06bc4df7cfe34acb60" exitCode=143 Feb 16 23:04:38 crc kubenswrapper[4865]: I0216 23:04:38.583341 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b3d4660e-fca1-47c9-bb64-7ac074d97085","Type":"ContainerDied","Data":"334e331bac1b4abee6f3b70e8450b0ac7b26384cf9d553d77327e718c740c6ac"} Feb 16 23:04:38 crc kubenswrapper[4865]: I0216 23:04:38.583385 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b3d4660e-fca1-47c9-bb64-7ac074d97085","Type":"ContainerDied","Data":"7a8c72aafdda712079deb0dfdfeff72b6622cd5d06602d06bc4df7cfe34acb60"} Feb 16 23:04:38 crc kubenswrapper[4865]: I0216 23:04:38.583397 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b3d4660e-fca1-47c9-bb64-7ac074d97085","Type":"ContainerDied","Data":"064f0132ad4c96ea4d5335a09252adf5f3ac0357be43b18bac231dc13a0fed5b"} Feb 16 23:04:38 crc kubenswrapper[4865]: I0216 23:04:38.583414 4865 scope.go:117] "RemoveContainer" containerID="334e331bac1b4abee6f3b70e8450b0ac7b26384cf9d553d77327e718c740c6ac" Feb 16 23:04:38 crc kubenswrapper[4865]: I0216 23:04:38.642464 4865 scope.go:117] "RemoveContainer" containerID="7a8c72aafdda712079deb0dfdfeff72b6622cd5d06602d06bc4df7cfe34acb60" Feb 16 23:04:38 crc kubenswrapper[4865]: I0216 23:04:38.658853 4865 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b3d4660e-fca1-47c9-bb64-7ac074d97085-httpd-run\") pod \"b3d4660e-fca1-47c9-bb64-7ac074d97085\" (UID: \"b3d4660e-fca1-47c9-bb64-7ac074d97085\") " Feb 16 23:04:38 crc kubenswrapper[4865]: I0216 23:04:38.658910 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3d4660e-fca1-47c9-bb64-7ac074d97085-config-data\") pod \"b3d4660e-fca1-47c9-bb64-7ac074d97085\" (UID: \"b3d4660e-fca1-47c9-bb64-7ac074d97085\") " Feb 16 23:04:38 crc kubenswrapper[4865]: I0216 23:04:38.658939 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3d4660e-fca1-47c9-bb64-7ac074d97085-internal-tls-certs\") pod \"b3d4660e-fca1-47c9-bb64-7ac074d97085\" (UID: \"b3d4660e-fca1-47c9-bb64-7ac074d97085\") " Feb 16 23:04:38 crc kubenswrapper[4865]: I0216 23:04:38.658958 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3d4660e-fca1-47c9-bb64-7ac074d97085-scripts\") pod \"b3d4660e-fca1-47c9-bb64-7ac074d97085\" (UID: \"b3d4660e-fca1-47c9-bb64-7ac074d97085\") " Feb 16 23:04:38 crc kubenswrapper[4865]: I0216 23:04:38.659024 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"b3d4660e-fca1-47c9-bb64-7ac074d97085\" (UID: \"b3d4660e-fca1-47c9-bb64-7ac074d97085\") " Feb 16 23:04:38 crc kubenswrapper[4865]: I0216 23:04:38.659068 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3d4660e-fca1-47c9-bb64-7ac074d97085-logs\") pod \"b3d4660e-fca1-47c9-bb64-7ac074d97085\" (UID: \"b3d4660e-fca1-47c9-bb64-7ac074d97085\") " Feb 16 23:04:38 crc kubenswrapper[4865]: I0216 
23:04:38.659082 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69dc7\" (UniqueName: \"kubernetes.io/projected/b3d4660e-fca1-47c9-bb64-7ac074d97085-kube-api-access-69dc7\") pod \"b3d4660e-fca1-47c9-bb64-7ac074d97085\" (UID: \"b3d4660e-fca1-47c9-bb64-7ac074d97085\") " Feb 16 23:04:38 crc kubenswrapper[4865]: I0216 23:04:38.659135 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3d4660e-fca1-47c9-bb64-7ac074d97085-combined-ca-bundle\") pod \"b3d4660e-fca1-47c9-bb64-7ac074d97085\" (UID: \"b3d4660e-fca1-47c9-bb64-7ac074d97085\") " Feb 16 23:04:38 crc kubenswrapper[4865]: I0216 23:04:38.661538 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3d4660e-fca1-47c9-bb64-7ac074d97085-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "b3d4660e-fca1-47c9-bb64-7ac074d97085" (UID: "b3d4660e-fca1-47c9-bb64-7ac074d97085"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:04:38 crc kubenswrapper[4865]: I0216 23:04:38.663431 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3d4660e-fca1-47c9-bb64-7ac074d97085-logs" (OuterVolumeSpecName: "logs") pod "b3d4660e-fca1-47c9-bb64-7ac074d97085" (UID: "b3d4660e-fca1-47c9-bb64-7ac074d97085"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:04:38 crc kubenswrapper[4865]: I0216 23:04:38.669060 4865 scope.go:117] "RemoveContainer" containerID="334e331bac1b4abee6f3b70e8450b0ac7b26384cf9d553d77327e718c740c6ac" Feb 16 23:04:38 crc kubenswrapper[4865]: E0216 23:04:38.669535 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"334e331bac1b4abee6f3b70e8450b0ac7b26384cf9d553d77327e718c740c6ac\": container with ID starting with 334e331bac1b4abee6f3b70e8450b0ac7b26384cf9d553d77327e718c740c6ac not found: ID does not exist" containerID="334e331bac1b4abee6f3b70e8450b0ac7b26384cf9d553d77327e718c740c6ac" Feb 16 23:04:38 crc kubenswrapper[4865]: I0216 23:04:38.669563 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"334e331bac1b4abee6f3b70e8450b0ac7b26384cf9d553d77327e718c740c6ac"} err="failed to get container status \"334e331bac1b4abee6f3b70e8450b0ac7b26384cf9d553d77327e718c740c6ac\": rpc error: code = NotFound desc = could not find container \"334e331bac1b4abee6f3b70e8450b0ac7b26384cf9d553d77327e718c740c6ac\": container with ID starting with 334e331bac1b4abee6f3b70e8450b0ac7b26384cf9d553d77327e718c740c6ac not found: ID does not exist" Feb 16 23:04:38 crc kubenswrapper[4865]: I0216 23:04:38.669584 4865 scope.go:117] "RemoveContainer" containerID="7a8c72aafdda712079deb0dfdfeff72b6622cd5d06602d06bc4df7cfe34acb60" Feb 16 23:04:38 crc kubenswrapper[4865]: E0216 23:04:38.669977 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a8c72aafdda712079deb0dfdfeff72b6622cd5d06602d06bc4df7cfe34acb60\": container with ID starting with 7a8c72aafdda712079deb0dfdfeff72b6622cd5d06602d06bc4df7cfe34acb60 not found: ID does not exist" containerID="7a8c72aafdda712079deb0dfdfeff72b6622cd5d06602d06bc4df7cfe34acb60" Feb 16 23:04:38 crc kubenswrapper[4865]: I0216 23:04:38.669997 
4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a8c72aafdda712079deb0dfdfeff72b6622cd5d06602d06bc4df7cfe34acb60"} err="failed to get container status \"7a8c72aafdda712079deb0dfdfeff72b6622cd5d06602d06bc4df7cfe34acb60\": rpc error: code = NotFound desc = could not find container \"7a8c72aafdda712079deb0dfdfeff72b6622cd5d06602d06bc4df7cfe34acb60\": container with ID starting with 7a8c72aafdda712079deb0dfdfeff72b6622cd5d06602d06bc4df7cfe34acb60 not found: ID does not exist" Feb 16 23:04:38 crc kubenswrapper[4865]: I0216 23:04:38.670010 4865 scope.go:117] "RemoveContainer" containerID="334e331bac1b4abee6f3b70e8450b0ac7b26384cf9d553d77327e718c740c6ac" Feb 16 23:04:38 crc kubenswrapper[4865]: I0216 23:04:38.670402 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"334e331bac1b4abee6f3b70e8450b0ac7b26384cf9d553d77327e718c740c6ac"} err="failed to get container status \"334e331bac1b4abee6f3b70e8450b0ac7b26384cf9d553d77327e718c740c6ac\": rpc error: code = NotFound desc = could not find container \"334e331bac1b4abee6f3b70e8450b0ac7b26384cf9d553d77327e718c740c6ac\": container with ID starting with 334e331bac1b4abee6f3b70e8450b0ac7b26384cf9d553d77327e718c740c6ac not found: ID does not exist" Feb 16 23:04:38 crc kubenswrapper[4865]: I0216 23:04:38.670419 4865 scope.go:117] "RemoveContainer" containerID="7a8c72aafdda712079deb0dfdfeff72b6622cd5d06602d06bc4df7cfe34acb60" Feb 16 23:04:38 crc kubenswrapper[4865]: I0216 23:04:38.670608 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a8c72aafdda712079deb0dfdfeff72b6622cd5d06602d06bc4df7cfe34acb60"} err="failed to get container status \"7a8c72aafdda712079deb0dfdfeff72b6622cd5d06602d06bc4df7cfe34acb60\": rpc error: code = NotFound desc = could not find container \"7a8c72aafdda712079deb0dfdfeff72b6622cd5d06602d06bc4df7cfe34acb60\": container with ID starting with 
7a8c72aafdda712079deb0dfdfeff72b6622cd5d06602d06bc4df7cfe34acb60 not found: ID does not exist" Feb 16 23:04:38 crc kubenswrapper[4865]: I0216 23:04:38.688122 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3d4660e-fca1-47c9-bb64-7ac074d97085-scripts" (OuterVolumeSpecName: "scripts") pod "b3d4660e-fca1-47c9-bb64-7ac074d97085" (UID: "b3d4660e-fca1-47c9-bb64-7ac074d97085"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:04:38 crc kubenswrapper[4865]: I0216 23:04:38.688244 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "b3d4660e-fca1-47c9-bb64-7ac074d97085" (UID: "b3d4660e-fca1-47c9-bb64-7ac074d97085"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 16 23:04:38 crc kubenswrapper[4865]: I0216 23:04:38.691815 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3d4660e-fca1-47c9-bb64-7ac074d97085-kube-api-access-69dc7" (OuterVolumeSpecName: "kube-api-access-69dc7") pod "b3d4660e-fca1-47c9-bb64-7ac074d97085" (UID: "b3d4660e-fca1-47c9-bb64-7ac074d97085"). InnerVolumeSpecName "kube-api-access-69dc7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:04:38 crc kubenswrapper[4865]: I0216 23:04:38.714046 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3d4660e-fca1-47c9-bb64-7ac074d97085-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "b3d4660e-fca1-47c9-bb64-7ac074d97085" (UID: "b3d4660e-fca1-47c9-bb64-7ac074d97085"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:04:38 crc kubenswrapper[4865]: I0216 23:04:38.734984 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3d4660e-fca1-47c9-bb64-7ac074d97085-config-data" (OuterVolumeSpecName: "config-data") pod "b3d4660e-fca1-47c9-bb64-7ac074d97085" (UID: "b3d4660e-fca1-47c9-bb64-7ac074d97085"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:04:38 crc kubenswrapper[4865]: I0216 23:04:38.746627 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3d4660e-fca1-47c9-bb64-7ac074d97085-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b3d4660e-fca1-47c9-bb64-7ac074d97085" (UID: "b3d4660e-fca1-47c9-bb64-7ac074d97085"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:04:38 crc kubenswrapper[4865]: I0216 23:04:38.761011 4865 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b3d4660e-fca1-47c9-bb64-7ac074d97085-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 16 23:04:38 crc kubenswrapper[4865]: I0216 23:04:38.761039 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3d4660e-fca1-47c9-bb64-7ac074d97085-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 23:04:38 crc kubenswrapper[4865]: I0216 23:04:38.761049 4865 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3d4660e-fca1-47c9-bb64-7ac074d97085-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 23:04:38 crc kubenswrapper[4865]: I0216 23:04:38.761058 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3d4660e-fca1-47c9-bb64-7ac074d97085-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 23:04:38 crc 
kubenswrapper[4865]: I0216 23:04:38.761084 4865 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Feb 16 23:04:38 crc kubenswrapper[4865]: I0216 23:04:38.761094 4865 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3d4660e-fca1-47c9-bb64-7ac074d97085-logs\") on node \"crc\" DevicePath \"\"" Feb 16 23:04:38 crc kubenswrapper[4865]: I0216 23:04:38.761102 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69dc7\" (UniqueName: \"kubernetes.io/projected/b3d4660e-fca1-47c9-bb64-7ac074d97085-kube-api-access-69dc7\") on node \"crc\" DevicePath \"\"" Feb 16 23:04:38 crc kubenswrapper[4865]: I0216 23:04:38.761110 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3d4660e-fca1-47c9-bb64-7ac074d97085-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 23:04:38 crc kubenswrapper[4865]: I0216 23:04:38.792394 4865 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Feb 16 23:04:38 crc kubenswrapper[4865]: I0216 23:04:38.862766 4865 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Feb 16 23:04:39 crc kubenswrapper[4865]: I0216 23:04:39.593577 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 23:04:39 crc kubenswrapper[4865]: I0216 23:04:39.630934 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 23:04:39 crc kubenswrapper[4865]: I0216 23:04:39.641713 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 23:04:39 crc kubenswrapper[4865]: I0216 23:04:39.662162 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 23:04:39 crc kubenswrapper[4865]: E0216 23:04:39.663772 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3d4660e-fca1-47c9-bb64-7ac074d97085" containerName="glance-log" Feb 16 23:04:39 crc kubenswrapper[4865]: I0216 23:04:39.663790 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3d4660e-fca1-47c9-bb64-7ac074d97085" containerName="glance-log" Feb 16 23:04:39 crc kubenswrapper[4865]: E0216 23:04:39.663803 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3d4660e-fca1-47c9-bb64-7ac074d97085" containerName="glance-httpd" Feb 16 23:04:39 crc kubenswrapper[4865]: I0216 23:04:39.663811 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3d4660e-fca1-47c9-bb64-7ac074d97085" containerName="glance-httpd" Feb 16 23:04:39 crc kubenswrapper[4865]: I0216 23:04:39.664023 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3d4660e-fca1-47c9-bb64-7ac074d97085" containerName="glance-httpd" Feb 16 23:04:39 crc kubenswrapper[4865]: I0216 23:04:39.664042 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3d4660e-fca1-47c9-bb64-7ac074d97085" containerName="glance-log" Feb 16 23:04:39 crc kubenswrapper[4865]: I0216 23:04:39.664979 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 23:04:39 crc kubenswrapper[4865]: I0216 23:04:39.674451 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 16 23:04:39 crc kubenswrapper[4865]: I0216 23:04:39.675577 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 16 23:04:39 crc kubenswrapper[4865]: I0216 23:04:39.677676 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 23:04:39 crc kubenswrapper[4865]: I0216 23:04:39.789493 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ce244fb-655c-4941-9ed9-1b2ceddd74d9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7ce244fb-655c-4941-9ed9-1b2ceddd74d9\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:04:39 crc kubenswrapper[4865]: I0216 23:04:39.789580 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7ce244fb-655c-4941-9ed9-1b2ceddd74d9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7ce244fb-655c-4941-9ed9-1b2ceddd74d9\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:04:39 crc kubenswrapper[4865]: I0216 23:04:39.789615 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ce244fb-655c-4941-9ed9-1b2ceddd74d9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7ce244fb-655c-4941-9ed9-1b2ceddd74d9\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:04:39 crc kubenswrapper[4865]: I0216 23:04:39.789631 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/7ce244fb-655c-4941-9ed9-1b2ceddd74d9-logs\") pod \"glance-default-internal-api-0\" (UID: \"7ce244fb-655c-4941-9ed9-1b2ceddd74d9\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:04:39 crc kubenswrapper[4865]: I0216 23:04:39.789682 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ce244fb-655c-4941-9ed9-1b2ceddd74d9-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7ce244fb-655c-4941-9ed9-1b2ceddd74d9\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:04:39 crc kubenswrapper[4865]: I0216 23:04:39.789714 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"7ce244fb-655c-4941-9ed9-1b2ceddd74d9\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:04:39 crc kubenswrapper[4865]: I0216 23:04:39.789780 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2w2f\" (UniqueName: \"kubernetes.io/projected/7ce244fb-655c-4941-9ed9-1b2ceddd74d9-kube-api-access-t2w2f\") pod \"glance-default-internal-api-0\" (UID: \"7ce244fb-655c-4941-9ed9-1b2ceddd74d9\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:04:39 crc kubenswrapper[4865]: I0216 23:04:39.789836 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ce244fb-655c-4941-9ed9-1b2ceddd74d9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7ce244fb-655c-4941-9ed9-1b2ceddd74d9\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:04:39 crc kubenswrapper[4865]: I0216 23:04:39.891509 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/7ce244fb-655c-4941-9ed9-1b2ceddd74d9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7ce244fb-655c-4941-9ed9-1b2ceddd74d9\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:04:39 crc kubenswrapper[4865]: I0216 23:04:39.891580 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7ce244fb-655c-4941-9ed9-1b2ceddd74d9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7ce244fb-655c-4941-9ed9-1b2ceddd74d9\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:04:39 crc kubenswrapper[4865]: I0216 23:04:39.891623 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ce244fb-655c-4941-9ed9-1b2ceddd74d9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7ce244fb-655c-4941-9ed9-1b2ceddd74d9\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:04:39 crc kubenswrapper[4865]: I0216 23:04:39.891639 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ce244fb-655c-4941-9ed9-1b2ceddd74d9-logs\") pod \"glance-default-internal-api-0\" (UID: \"7ce244fb-655c-4941-9ed9-1b2ceddd74d9\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:04:39 crc kubenswrapper[4865]: I0216 23:04:39.891686 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ce244fb-655c-4941-9ed9-1b2ceddd74d9-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7ce244fb-655c-4941-9ed9-1b2ceddd74d9\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:04:39 crc kubenswrapper[4865]: I0216 23:04:39.891703 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod 
\"glance-default-internal-api-0\" (UID: \"7ce244fb-655c-4941-9ed9-1b2ceddd74d9\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:04:39 crc kubenswrapper[4865]: I0216 23:04:39.891741 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2w2f\" (UniqueName: \"kubernetes.io/projected/7ce244fb-655c-4941-9ed9-1b2ceddd74d9-kube-api-access-t2w2f\") pod \"glance-default-internal-api-0\" (UID: \"7ce244fb-655c-4941-9ed9-1b2ceddd74d9\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:04:39 crc kubenswrapper[4865]: I0216 23:04:39.891789 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ce244fb-655c-4941-9ed9-1b2ceddd74d9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7ce244fb-655c-4941-9ed9-1b2ceddd74d9\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:04:39 crc kubenswrapper[4865]: I0216 23:04:39.892163 4865 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"7ce244fb-655c-4941-9ed9-1b2ceddd74d9\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0" Feb 16 23:04:39 crc kubenswrapper[4865]: I0216 23:04:39.892308 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7ce244fb-655c-4941-9ed9-1b2ceddd74d9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7ce244fb-655c-4941-9ed9-1b2ceddd74d9\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:04:39 crc kubenswrapper[4865]: I0216 23:04:39.899854 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ce244fb-655c-4941-9ed9-1b2ceddd74d9-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: 
\"7ce244fb-655c-4941-9ed9-1b2ceddd74d9\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:04:39 crc kubenswrapper[4865]: I0216 23:04:39.899888 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ce244fb-655c-4941-9ed9-1b2ceddd74d9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7ce244fb-655c-4941-9ed9-1b2ceddd74d9\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:04:39 crc kubenswrapper[4865]: I0216 23:04:39.901420 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ce244fb-655c-4941-9ed9-1b2ceddd74d9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7ce244fb-655c-4941-9ed9-1b2ceddd74d9\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:04:39 crc kubenswrapper[4865]: I0216 23:04:39.903870 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ce244fb-655c-4941-9ed9-1b2ceddd74d9-logs\") pod \"glance-default-internal-api-0\" (UID: \"7ce244fb-655c-4941-9ed9-1b2ceddd74d9\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:04:39 crc kubenswrapper[4865]: I0216 23:04:39.909499 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ce244fb-655c-4941-9ed9-1b2ceddd74d9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7ce244fb-655c-4941-9ed9-1b2ceddd74d9\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:04:39 crc kubenswrapper[4865]: I0216 23:04:39.918148 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2w2f\" (UniqueName: \"kubernetes.io/projected/7ce244fb-655c-4941-9ed9-1b2ceddd74d9-kube-api-access-t2w2f\") pod \"glance-default-internal-api-0\" (UID: \"7ce244fb-655c-4941-9ed9-1b2ceddd74d9\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:04:39 crc 
kubenswrapper[4865]: I0216 23:04:39.925016 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"7ce244fb-655c-4941-9ed9-1b2ceddd74d9\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:04:39 crc kubenswrapper[4865]: I0216 23:04:39.998638 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 23:04:40 crc kubenswrapper[4865]: I0216 23:04:40.431541 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3d4660e-fca1-47c9-bb64-7ac074d97085" path="/var/lib/kubelet/pods/b3d4660e-fca1-47c9-bb64-7ac074d97085/volumes" Feb 16 23:04:40 crc kubenswrapper[4865]: I0216 23:04:40.611616 4865 generic.go:334] "Generic (PLEG): container finished" podID="c180ab1a-1202-492c-ab2e-57c2232d8b64" containerID="e1a190f3fe17f7742d386d08d221ee5860fa36adae67f45314b13277ccb337a3" exitCode=0 Feb 16 23:04:40 crc kubenswrapper[4865]: I0216 23:04:40.611692 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-t75c2" event={"ID":"c180ab1a-1202-492c-ab2e-57c2232d8b64","Type":"ContainerDied","Data":"e1a190f3fe17f7742d386d08d221ee5860fa36adae67f45314b13277ccb337a3"} Feb 16 23:04:40 crc kubenswrapper[4865]: I0216 23:04:40.616353 4865 generic.go:334] "Generic (PLEG): container finished" podID="6d0a70da-4482-4ab9-8503-c324267212fa" containerID="4b751e9d2f95652c660fe978c63733e22fb371d03c9aa3da519badc182b53af5" exitCode=0 Feb 16 23:04:40 crc kubenswrapper[4865]: I0216 23:04:40.616393 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ww67n" event={"ID":"6d0a70da-4482-4ab9-8503-c324267212fa","Type":"ContainerDied","Data":"4b751e9d2f95652c660fe978c63733e22fb371d03c9aa3da519badc182b53af5"} Feb 16 23:04:40 crc kubenswrapper[4865]: I0216 23:04:40.687522 4865 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 23:04:41 crc kubenswrapper[4865]: I0216 23:04:41.481509 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5dbb7f8956-m76fk" Feb 16 23:04:41 crc kubenswrapper[4865]: I0216 23:04:41.481815 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5dbb7f8956-m76fk" Feb 16 23:04:41 crc kubenswrapper[4865]: I0216 23:04:41.629128 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7ff854866d-9gv97" Feb 16 23:04:41 crc kubenswrapper[4865]: I0216 23:04:41.629216 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7ff854866d-9gv97" Feb 16 23:04:42 crc kubenswrapper[4865]: I0216 23:04:42.729422 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 16 23:04:42 crc kubenswrapper[4865]: I0216 23:04:42.729728 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 16 23:04:42 crc kubenswrapper[4865]: I0216 23:04:42.762029 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 16 23:04:42 crc kubenswrapper[4865]: I0216 23:04:42.802560 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 16 23:04:43 crc kubenswrapper[4865]: I0216 23:04:43.045654 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7df6bbb45-chfb9" Feb 16 23:04:43 crc kubenswrapper[4865]: I0216 23:04:43.413453 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-84b966f6c9-ljkjc" Feb 16 23:04:43 crc kubenswrapper[4865]: I0216 23:04:43.478029 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-8b5c85b87-x9s4f"] Feb 16 23:04:43 crc kubenswrapper[4865]: I0216 23:04:43.478266 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8b5c85b87-x9s4f" podUID="ec86efb1-7717-4690-8027-22c32dbf537d" containerName="dnsmasq-dns" containerID="cri-o://83995ed9fe223b2ca1973aa15dfdff36fbfaf13dc1e94d5b313edaa76afacd5d" gracePeriod=10 Feb 16 23:04:43 crc kubenswrapper[4865]: I0216 23:04:43.657408 4865 generic.go:334] "Generic (PLEG): container finished" podID="ec86efb1-7717-4690-8027-22c32dbf537d" containerID="83995ed9fe223b2ca1973aa15dfdff36fbfaf13dc1e94d5b313edaa76afacd5d" exitCode=0 Feb 16 23:04:43 crc kubenswrapper[4865]: I0216 23:04:43.658767 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-x9s4f" event={"ID":"ec86efb1-7717-4690-8027-22c32dbf537d","Type":"ContainerDied","Data":"83995ed9fe223b2ca1973aa15dfdff36fbfaf13dc1e94d5b313edaa76afacd5d"} Feb 16 23:04:43 crc kubenswrapper[4865]: I0216 23:04:43.658800 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 16 23:04:43 crc kubenswrapper[4865]: I0216 23:04:43.658896 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 16 23:04:45 crc kubenswrapper[4865]: I0216 23:04:45.664549 4865 patch_prober.go:28] interesting pod/machine-config-daemon-7sl6f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 23:04:45 crc kubenswrapper[4865]: I0216 23:04:45.664645 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 23:04:45 crc kubenswrapper[4865]: I0216 23:04:45.675592 4865 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 23:04:45 crc kubenswrapper[4865]: I0216 23:04:45.675630 4865 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 23:04:46 crc kubenswrapper[4865]: I0216 23:04:46.261509 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-ww67n" Feb 16 23:04:46 crc kubenswrapper[4865]: I0216 23:04:46.362919 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6d0a70da-4482-4ab9-8503-c324267212fa-fernet-keys\") pod \"6d0a70da-4482-4ab9-8503-c324267212fa\" (UID: \"6d0a70da-4482-4ab9-8503-c324267212fa\") " Feb 16 23:04:46 crc kubenswrapper[4865]: I0216 23:04:46.363169 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d0a70da-4482-4ab9-8503-c324267212fa-scripts\") pod \"6d0a70da-4482-4ab9-8503-c324267212fa\" (UID: \"6d0a70da-4482-4ab9-8503-c324267212fa\") " Feb 16 23:04:46 crc kubenswrapper[4865]: I0216 23:04:46.363313 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6d0a70da-4482-4ab9-8503-c324267212fa-credential-keys\") pod \"6d0a70da-4482-4ab9-8503-c324267212fa\" (UID: \"6d0a70da-4482-4ab9-8503-c324267212fa\") " Feb 16 23:04:46 crc kubenswrapper[4865]: I0216 23:04:46.363342 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d0a70da-4482-4ab9-8503-c324267212fa-combined-ca-bundle\") pod \"6d0a70da-4482-4ab9-8503-c324267212fa\" (UID: \"6d0a70da-4482-4ab9-8503-c324267212fa\") " Feb 16 23:04:46 crc kubenswrapper[4865]: I0216 
23:04:46.363385 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvg8x\" (UniqueName: \"kubernetes.io/projected/6d0a70da-4482-4ab9-8503-c324267212fa-kube-api-access-zvg8x\") pod \"6d0a70da-4482-4ab9-8503-c324267212fa\" (UID: \"6d0a70da-4482-4ab9-8503-c324267212fa\") " Feb 16 23:04:46 crc kubenswrapper[4865]: I0216 23:04:46.363410 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d0a70da-4482-4ab9-8503-c324267212fa-config-data\") pod \"6d0a70da-4482-4ab9-8503-c324267212fa\" (UID: \"6d0a70da-4482-4ab9-8503-c324267212fa\") " Feb 16 23:04:46 crc kubenswrapper[4865]: I0216 23:04:46.369604 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d0a70da-4482-4ab9-8503-c324267212fa-scripts" (OuterVolumeSpecName: "scripts") pod "6d0a70da-4482-4ab9-8503-c324267212fa" (UID: "6d0a70da-4482-4ab9-8503-c324267212fa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:04:46 crc kubenswrapper[4865]: I0216 23:04:46.370373 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d0a70da-4482-4ab9-8503-c324267212fa-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "6d0a70da-4482-4ab9-8503-c324267212fa" (UID: "6d0a70da-4482-4ab9-8503-c324267212fa"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:04:46 crc kubenswrapper[4865]: I0216 23:04:46.375956 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d0a70da-4482-4ab9-8503-c324267212fa-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "6d0a70da-4482-4ab9-8503-c324267212fa" (UID: "6d0a70da-4482-4ab9-8503-c324267212fa"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:04:46 crc kubenswrapper[4865]: I0216 23:04:46.378635 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d0a70da-4482-4ab9-8503-c324267212fa-kube-api-access-zvg8x" (OuterVolumeSpecName: "kube-api-access-zvg8x") pod "6d0a70da-4482-4ab9-8503-c324267212fa" (UID: "6d0a70da-4482-4ab9-8503-c324267212fa"). InnerVolumeSpecName "kube-api-access-zvg8x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:04:46 crc kubenswrapper[4865]: I0216 23:04:46.402245 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d0a70da-4482-4ab9-8503-c324267212fa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6d0a70da-4482-4ab9-8503-c324267212fa" (UID: "6d0a70da-4482-4ab9-8503-c324267212fa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:04:46 crc kubenswrapper[4865]: I0216 23:04:46.428877 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-t75c2" Feb 16 23:04:46 crc kubenswrapper[4865]: I0216 23:04:46.446979 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d0a70da-4482-4ab9-8503-c324267212fa-config-data" (OuterVolumeSpecName: "config-data") pod "6d0a70da-4482-4ab9-8503-c324267212fa" (UID: "6d0a70da-4482-4ab9-8503-c324267212fa"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:04:46 crc kubenswrapper[4865]: I0216 23:04:46.466228 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d0a70da-4482-4ab9-8503-c324267212fa-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 23:04:46 crc kubenswrapper[4865]: I0216 23:04:46.466292 4865 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6d0a70da-4482-4ab9-8503-c324267212fa-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 16 23:04:46 crc kubenswrapper[4865]: I0216 23:04:46.466309 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d0a70da-4482-4ab9-8503-c324267212fa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 23:04:46 crc kubenswrapper[4865]: I0216 23:04:46.466319 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvg8x\" (UniqueName: \"kubernetes.io/projected/6d0a70da-4482-4ab9-8503-c324267212fa-kube-api-access-zvg8x\") on node \"crc\" DevicePath \"\"" Feb 16 23:04:46 crc kubenswrapper[4865]: I0216 23:04:46.466328 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d0a70da-4482-4ab9-8503-c324267212fa-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 23:04:46 crc kubenswrapper[4865]: I0216 23:04:46.466365 4865 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6d0a70da-4482-4ab9-8503-c324267212fa-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 16 23:04:46 crc kubenswrapper[4865]: I0216 23:04:46.543937 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 16 23:04:46 crc kubenswrapper[4865]: I0216 23:04:46.558033 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-x9s4f" Feb 16 23:04:46 crc kubenswrapper[4865]: I0216 23:04:46.567217 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c180ab1a-1202-492c-ab2e-57c2232d8b64-logs\") pod \"c180ab1a-1202-492c-ab2e-57c2232d8b64\" (UID: \"c180ab1a-1202-492c-ab2e-57c2232d8b64\") " Feb 16 23:04:46 crc kubenswrapper[4865]: I0216 23:04:46.567315 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2sjrh\" (UniqueName: \"kubernetes.io/projected/c180ab1a-1202-492c-ab2e-57c2232d8b64-kube-api-access-2sjrh\") pod \"c180ab1a-1202-492c-ab2e-57c2232d8b64\" (UID: \"c180ab1a-1202-492c-ab2e-57c2232d8b64\") " Feb 16 23:04:46 crc kubenswrapper[4865]: I0216 23:04:46.567375 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c180ab1a-1202-492c-ab2e-57c2232d8b64-combined-ca-bundle\") pod \"c180ab1a-1202-492c-ab2e-57c2232d8b64\" (UID: \"c180ab1a-1202-492c-ab2e-57c2232d8b64\") " Feb 16 23:04:46 crc kubenswrapper[4865]: I0216 23:04:46.567448 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c180ab1a-1202-492c-ab2e-57c2232d8b64-scripts\") pod \"c180ab1a-1202-492c-ab2e-57c2232d8b64\" (UID: \"c180ab1a-1202-492c-ab2e-57c2232d8b64\") " Feb 16 23:04:46 crc kubenswrapper[4865]: I0216 23:04:46.567571 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c180ab1a-1202-492c-ab2e-57c2232d8b64-config-data\") pod \"c180ab1a-1202-492c-ab2e-57c2232d8b64\" (UID: \"c180ab1a-1202-492c-ab2e-57c2232d8b64\") " Feb 16 23:04:46 crc kubenswrapper[4865]: I0216 23:04:46.569404 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/c180ab1a-1202-492c-ab2e-57c2232d8b64-logs" (OuterVolumeSpecName: "logs") pod "c180ab1a-1202-492c-ab2e-57c2232d8b64" (UID: "c180ab1a-1202-492c-ab2e-57c2232d8b64"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:04:46 crc kubenswrapper[4865]: I0216 23:04:46.575411 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c180ab1a-1202-492c-ab2e-57c2232d8b64-kube-api-access-2sjrh" (OuterVolumeSpecName: "kube-api-access-2sjrh") pod "c180ab1a-1202-492c-ab2e-57c2232d8b64" (UID: "c180ab1a-1202-492c-ab2e-57c2232d8b64"). InnerVolumeSpecName "kube-api-access-2sjrh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:04:46 crc kubenswrapper[4865]: I0216 23:04:46.578132 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c180ab1a-1202-492c-ab2e-57c2232d8b64-scripts" (OuterVolumeSpecName: "scripts") pod "c180ab1a-1202-492c-ab2e-57c2232d8b64" (UID: "c180ab1a-1202-492c-ab2e-57c2232d8b64"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:04:46 crc kubenswrapper[4865]: I0216 23:04:46.624247 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c180ab1a-1202-492c-ab2e-57c2232d8b64-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c180ab1a-1202-492c-ab2e-57c2232d8b64" (UID: "c180ab1a-1202-492c-ab2e-57c2232d8b64"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:04:46 crc kubenswrapper[4865]: I0216 23:04:46.627187 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c180ab1a-1202-492c-ab2e-57c2232d8b64-config-data" (OuterVolumeSpecName: "config-data") pod "c180ab1a-1202-492c-ab2e-57c2232d8b64" (UID: "c180ab1a-1202-492c-ab2e-57c2232d8b64"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:04:46 crc kubenswrapper[4865]: I0216 23:04:46.669242 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec86efb1-7717-4690-8027-22c32dbf537d-ovsdbserver-nb\") pod \"ec86efb1-7717-4690-8027-22c32dbf537d\" (UID: \"ec86efb1-7717-4690-8027-22c32dbf537d\") " Feb 16 23:04:46 crc kubenswrapper[4865]: I0216 23:04:46.669741 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec86efb1-7717-4690-8027-22c32dbf537d-ovsdbserver-sb\") pod \"ec86efb1-7717-4690-8027-22c32dbf537d\" (UID: \"ec86efb1-7717-4690-8027-22c32dbf537d\") " Feb 16 23:04:46 crc kubenswrapper[4865]: I0216 23:04:46.669889 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ec86efb1-7717-4690-8027-22c32dbf537d-dns-swift-storage-0\") pod \"ec86efb1-7717-4690-8027-22c32dbf537d\" (UID: \"ec86efb1-7717-4690-8027-22c32dbf537d\") " Feb 16 23:04:46 crc kubenswrapper[4865]: I0216 23:04:46.669919 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec86efb1-7717-4690-8027-22c32dbf537d-dns-svc\") pod \"ec86efb1-7717-4690-8027-22c32dbf537d\" (UID: \"ec86efb1-7717-4690-8027-22c32dbf537d\") " Feb 16 23:04:46 crc kubenswrapper[4865]: I0216 23:04:46.669970 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6l29\" (UniqueName: \"kubernetes.io/projected/ec86efb1-7717-4690-8027-22c32dbf537d-kube-api-access-k6l29\") pod \"ec86efb1-7717-4690-8027-22c32dbf537d\" (UID: \"ec86efb1-7717-4690-8027-22c32dbf537d\") " Feb 16 23:04:46 crc kubenswrapper[4865]: I0216 23:04:46.670002 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/ec86efb1-7717-4690-8027-22c32dbf537d-config\") pod \"ec86efb1-7717-4690-8027-22c32dbf537d\" (UID: \"ec86efb1-7717-4690-8027-22c32dbf537d\") " Feb 16 23:04:46 crc kubenswrapper[4865]: I0216 23:04:46.670551 4865 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c180ab1a-1202-492c-ab2e-57c2232d8b64-logs\") on node \"crc\" DevicePath \"\"" Feb 16 23:04:46 crc kubenswrapper[4865]: I0216 23:04:46.670567 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2sjrh\" (UniqueName: \"kubernetes.io/projected/c180ab1a-1202-492c-ab2e-57c2232d8b64-kube-api-access-2sjrh\") on node \"crc\" DevicePath \"\"" Feb 16 23:04:46 crc kubenswrapper[4865]: I0216 23:04:46.670578 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c180ab1a-1202-492c-ab2e-57c2232d8b64-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 23:04:46 crc kubenswrapper[4865]: I0216 23:04:46.670589 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c180ab1a-1202-492c-ab2e-57c2232d8b64-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 23:04:46 crc kubenswrapper[4865]: I0216 23:04:46.670599 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c180ab1a-1202-492c-ab2e-57c2232d8b64-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 23:04:46 crc kubenswrapper[4865]: I0216 23:04:46.700082 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-t75c2" event={"ID":"c180ab1a-1202-492c-ab2e-57c2232d8b64","Type":"ContainerDied","Data":"536d639ba8777a6601b02934fc6f3471729975456267932158c16dd1fe2e1b5e"} Feb 16 23:04:46 crc kubenswrapper[4865]: I0216 23:04:46.700133 4865 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="536d639ba8777a6601b02934fc6f3471729975456267932158c16dd1fe2e1b5e"
Feb 16 23:04:46 crc kubenswrapper[4865]: I0216 23:04:46.700208 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-t75c2"
Feb 16 23:04:46 crc kubenswrapper[4865]: I0216 23:04:46.700473 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec86efb1-7717-4690-8027-22c32dbf537d-kube-api-access-k6l29" (OuterVolumeSpecName: "kube-api-access-k6l29") pod "ec86efb1-7717-4690-8027-22c32dbf537d" (UID: "ec86efb1-7717-4690-8027-22c32dbf537d"). InnerVolumeSpecName "kube-api-access-k6l29". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 23:04:46 crc kubenswrapper[4865]: I0216 23:04:46.703562 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7ce244fb-655c-4941-9ed9-1b2ceddd74d9","Type":"ContainerStarted","Data":"e2196354b4e6e6f16642000e6d9319ce47369e316921792089075c1e3a3e9664"}
Feb 16 23:04:46 crc kubenswrapper[4865]: I0216 23:04:46.738718 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1bbb6cce-272f-421d-a4f2-af006f112e21","Type":"ContainerStarted","Data":"1bdf317171916a450298119302d7b50700ec5fb1a920dc1c862a330c6a5d19c8"}
Feb 16 23:04:46 crc kubenswrapper[4865]: I0216 23:04:46.754009 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec86efb1-7717-4690-8027-22c32dbf537d-config" (OuterVolumeSpecName: "config") pod "ec86efb1-7717-4690-8027-22c32dbf537d" (UID: "ec86efb1-7717-4690-8027-22c32dbf537d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 23:04:46 crc kubenswrapper[4865]: I0216 23:04:46.774175 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-x9s4f" event={"ID":"ec86efb1-7717-4690-8027-22c32dbf537d","Type":"ContainerDied","Data":"d9288d446020957b32d06bf41ddc8da0f651457ae11307aac56822a3c99c291f"}
Feb 16 23:04:46 crc kubenswrapper[4865]: I0216 23:04:46.774251 4865 scope.go:117] "RemoveContainer" containerID="83995ed9fe223b2ca1973aa15dfdff36fbfaf13dc1e94d5b313edaa76afacd5d"
Feb 16 23:04:46 crc kubenswrapper[4865]: I0216 23:04:46.774802 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-x9s4f"
Feb 16 23:04:46 crc kubenswrapper[4865]: I0216 23:04:46.774907 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec86efb1-7717-4690-8027-22c32dbf537d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ec86efb1-7717-4690-8027-22c32dbf537d" (UID: "ec86efb1-7717-4690-8027-22c32dbf537d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 23:04:46 crc kubenswrapper[4865]: I0216 23:04:46.775368 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec86efb1-7717-4690-8027-22c32dbf537d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ec86efb1-7717-4690-8027-22c32dbf537d" (UID: "ec86efb1-7717-4690-8027-22c32dbf537d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 23:04:46 crc kubenswrapper[4865]: I0216 23:04:46.778078 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ec86efb1-7717-4690-8027-22c32dbf537d-dns-swift-storage-0\") pod \"ec86efb1-7717-4690-8027-22c32dbf537d\" (UID: \"ec86efb1-7717-4690-8027-22c32dbf537d\") "
Feb 16 23:04:46 crc kubenswrapper[4865]: W0216 23:04:46.778748 4865 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/ec86efb1-7717-4690-8027-22c32dbf537d/volumes/kubernetes.io~configmap/dns-swift-storage-0
Feb 16 23:04:46 crc kubenswrapper[4865]: I0216 23:04:46.779057 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec86efb1-7717-4690-8027-22c32dbf537d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ec86efb1-7717-4690-8027-22c32dbf537d" (UID: "ec86efb1-7717-4690-8027-22c32dbf537d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 23:04:46 crc kubenswrapper[4865]: I0216 23:04:46.779486 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6l29\" (UniqueName: \"kubernetes.io/projected/ec86efb1-7717-4690-8027-22c32dbf537d-kube-api-access-k6l29\") on node \"crc\" DevicePath \"\""
Feb 16 23:04:46 crc kubenswrapper[4865]: I0216 23:04:46.779506 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec86efb1-7717-4690-8027-22c32dbf537d-config\") on node \"crc\" DevicePath \"\""
Feb 16 23:04:46 crc kubenswrapper[4865]: I0216 23:04:46.779515 4865 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ec86efb1-7717-4690-8027-22c32dbf537d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 16 23:04:46 crc kubenswrapper[4865]: I0216 23:04:46.779524 4865 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec86efb1-7717-4690-8027-22c32dbf537d-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 16 23:04:46 crc kubenswrapper[4865]: I0216 23:04:46.779767 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-94wth" event={"ID":"68d6bce0-a0b1-485b-b3fc-6c47cd966129","Type":"ContainerStarted","Data":"8b89cf994607ccbbc6f108879c3934e6c009b9745b40ebbd220533f5c1652bc5"}
Feb 16 23:04:46 crc kubenswrapper[4865]: I0216 23:04:46.782017 4865 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 16 23:04:46 crc kubenswrapper[4865]: I0216 23:04:46.783316 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-ww67n"
Feb 16 23:04:46 crc kubenswrapper[4865]: I0216 23:04:46.784602 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ww67n" event={"ID":"6d0a70da-4482-4ab9-8503-c324267212fa","Type":"ContainerDied","Data":"db3d1f41ea3fb3123473a2023a7ae902c4f983f2460e743c360c3e294d170fcb"}
Feb 16 23:04:46 crc kubenswrapper[4865]: I0216 23:04:46.784681 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db3d1f41ea3fb3123473a2023a7ae902c4f983f2460e743c360c3e294d170fcb"
Feb 16 23:04:46 crc kubenswrapper[4865]: I0216 23:04:46.807362 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec86efb1-7717-4690-8027-22c32dbf537d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ec86efb1-7717-4690-8027-22c32dbf537d" (UID: "ec86efb1-7717-4690-8027-22c32dbf537d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 23:04:46 crc kubenswrapper[4865]: I0216 23:04:46.813060 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-94wth" podStartSLOduration=2.583702348 podStartE2EDuration="44.81304001s" podCreationTimestamp="2026-02-16 23:04:02 +0000 UTC" firstStartedPulling="2026-02-16 23:04:04.008639355 +0000 UTC m=+1084.332346316" lastFinishedPulling="2026-02-16 23:04:46.237977017 +0000 UTC m=+1126.561683978" observedRunningTime="2026-02-16 23:04:46.799348401 +0000 UTC m=+1127.123055352" watchObservedRunningTime="2026-02-16 23:04:46.81304001 +0000 UTC m=+1127.136746971"
Feb 16 23:04:46 crc kubenswrapper[4865]: I0216 23:04:46.816386 4865 scope.go:117] "RemoveContainer" containerID="c0195ae75841604dab0267726a00fe1630550e1f2a8c533a223ce753953311bc"
Feb 16 23:04:46 crc kubenswrapper[4865]: I0216 23:04:46.818932 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec86efb1-7717-4690-8027-22c32dbf537d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ec86efb1-7717-4690-8027-22c32dbf537d" (UID: "ec86efb1-7717-4690-8027-22c32dbf537d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 23:04:46 crc kubenswrapper[4865]: I0216 23:04:46.881208 4865 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec86efb1-7717-4690-8027-22c32dbf537d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 16 23:04:46 crc kubenswrapper[4865]: I0216 23:04:46.881249 4865 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec86efb1-7717-4690-8027-22c32dbf537d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 16 23:04:47 crc kubenswrapper[4865]: I0216 23:04:47.108150 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-x9s4f"]
Feb 16 23:04:47 crc kubenswrapper[4865]: I0216 23:04:47.118234 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-x9s4f"]
Feb 16 23:04:47 crc kubenswrapper[4865]: I0216 23:04:47.375008 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-66c88cfbc7-mhfsh"]
Feb 16 23:04:47 crc kubenswrapper[4865]: E0216 23:04:47.383469 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c180ab1a-1202-492c-ab2e-57c2232d8b64" containerName="placement-db-sync"
Feb 16 23:04:47 crc kubenswrapper[4865]: I0216 23:04:47.383499 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="c180ab1a-1202-492c-ab2e-57c2232d8b64" containerName="placement-db-sync"
Feb 16 23:04:47 crc kubenswrapper[4865]: E0216 23:04:47.383521 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec86efb1-7717-4690-8027-22c32dbf537d" containerName="init"
Feb 16 23:04:47 crc kubenswrapper[4865]: I0216 23:04:47.383527 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec86efb1-7717-4690-8027-22c32dbf537d" containerName="init"
Feb 16 23:04:47 crc kubenswrapper[4865]: E0216 23:04:47.383539 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d0a70da-4482-4ab9-8503-c324267212fa" containerName="keystone-bootstrap"
Feb 16 23:04:47 crc kubenswrapper[4865]: I0216 23:04:47.383545 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d0a70da-4482-4ab9-8503-c324267212fa" containerName="keystone-bootstrap"
Feb 16 23:04:47 crc kubenswrapper[4865]: E0216 23:04:47.383560 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec86efb1-7717-4690-8027-22c32dbf537d" containerName="dnsmasq-dns"
Feb 16 23:04:47 crc kubenswrapper[4865]: I0216 23:04:47.383569 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec86efb1-7717-4690-8027-22c32dbf537d" containerName="dnsmasq-dns"
Feb 16 23:04:47 crc kubenswrapper[4865]: I0216 23:04:47.383786 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="c180ab1a-1202-492c-ab2e-57c2232d8b64" containerName="placement-db-sync"
Feb 16 23:04:47 crc kubenswrapper[4865]: I0216 23:04:47.383828 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec86efb1-7717-4690-8027-22c32dbf537d" containerName="dnsmasq-dns"
Feb 16 23:04:47 crc kubenswrapper[4865]: I0216 23:04:47.383841 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d0a70da-4482-4ab9-8503-c324267212fa" containerName="keystone-bootstrap"
Feb 16 23:04:47 crc kubenswrapper[4865]: I0216 23:04:47.384508 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-66c88cfbc7-mhfsh"
Feb 16 23:04:47 crc kubenswrapper[4865]: I0216 23:04:47.389804 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-sj5dj"
Feb 16 23:04:47 crc kubenswrapper[4865]: I0216 23:04:47.390093 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Feb 16 23:04:47 crc kubenswrapper[4865]: I0216 23:04:47.390216 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc"
Feb 16 23:04:47 crc kubenswrapper[4865]: I0216 23:04:47.390348 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc"
Feb 16 23:04:47 crc kubenswrapper[4865]: I0216 23:04:47.390378 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Feb 16 23:04:47 crc kubenswrapper[4865]: I0216 23:04:47.390537 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Feb 16 23:04:47 crc kubenswrapper[4865]: I0216 23:04:47.402594 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-66c88cfbc7-mhfsh"]
Feb 16 23:04:47 crc kubenswrapper[4865]: I0216 23:04:47.494754 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cd912ee6-bda4-4859-a70d-3f53ca61ba60-fernet-keys\") pod \"keystone-66c88cfbc7-mhfsh\" (UID: \"cd912ee6-bda4-4859-a70d-3f53ca61ba60\") " pod="openstack/keystone-66c88cfbc7-mhfsh"
Feb 16 23:04:47 crc kubenswrapper[4865]: I0216 23:04:47.509938 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd912ee6-bda4-4859-a70d-3f53ca61ba60-scripts\") pod \"keystone-66c88cfbc7-mhfsh\" (UID: \"cd912ee6-bda4-4859-a70d-3f53ca61ba60\") " pod="openstack/keystone-66c88cfbc7-mhfsh"
Feb 16 23:04:47 crc kubenswrapper[4865]: I0216 23:04:47.510915 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd912ee6-bda4-4859-a70d-3f53ca61ba60-internal-tls-certs\") pod \"keystone-66c88cfbc7-mhfsh\" (UID: \"cd912ee6-bda4-4859-a70d-3f53ca61ba60\") " pod="openstack/keystone-66c88cfbc7-mhfsh"
Feb 16 23:04:47 crc kubenswrapper[4865]: I0216 23:04:47.515697 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd912ee6-bda4-4859-a70d-3f53ca61ba60-public-tls-certs\") pod \"keystone-66c88cfbc7-mhfsh\" (UID: \"cd912ee6-bda4-4859-a70d-3f53ca61ba60\") " pod="openstack/keystone-66c88cfbc7-mhfsh"
Feb 16 23:04:47 crc kubenswrapper[4865]: I0216 23:04:47.517677 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd912ee6-bda4-4859-a70d-3f53ca61ba60-combined-ca-bundle\") pod \"keystone-66c88cfbc7-mhfsh\" (UID: \"cd912ee6-bda4-4859-a70d-3f53ca61ba60\") " pod="openstack/keystone-66c88cfbc7-mhfsh"
Feb 16 23:04:47 crc kubenswrapper[4865]: I0216 23:04:47.518161 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cd912ee6-bda4-4859-a70d-3f53ca61ba60-credential-keys\") pod \"keystone-66c88cfbc7-mhfsh\" (UID: \"cd912ee6-bda4-4859-a70d-3f53ca61ba60\") " pod="openstack/keystone-66c88cfbc7-mhfsh"
Feb 16 23:04:47 crc kubenswrapper[4865]: I0216 23:04:47.518228 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd912ee6-bda4-4859-a70d-3f53ca61ba60-config-data\") pod \"keystone-66c88cfbc7-mhfsh\" (UID: \"cd912ee6-bda4-4859-a70d-3f53ca61ba60\") " pod="openstack/keystone-66c88cfbc7-mhfsh"
Feb 16 23:04:47 crc kubenswrapper[4865]: I0216 23:04:47.518323 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpk8n\" (UniqueName: \"kubernetes.io/projected/cd912ee6-bda4-4859-a70d-3f53ca61ba60-kube-api-access-qpk8n\") pod \"keystone-66c88cfbc7-mhfsh\" (UID: \"cd912ee6-bda4-4859-a70d-3f53ca61ba60\") " pod="openstack/keystone-66c88cfbc7-mhfsh"
Feb 16 23:04:47 crc kubenswrapper[4865]: I0216 23:04:47.620486 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd912ee6-bda4-4859-a70d-3f53ca61ba60-public-tls-certs\") pod \"keystone-66c88cfbc7-mhfsh\" (UID: \"cd912ee6-bda4-4859-a70d-3f53ca61ba60\") " pod="openstack/keystone-66c88cfbc7-mhfsh"
Feb 16 23:04:47 crc kubenswrapper[4865]: I0216 23:04:47.620557 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd912ee6-bda4-4859-a70d-3f53ca61ba60-combined-ca-bundle\") pod \"keystone-66c88cfbc7-mhfsh\" (UID: \"cd912ee6-bda4-4859-a70d-3f53ca61ba60\") " pod="openstack/keystone-66c88cfbc7-mhfsh"
Feb 16 23:04:47 crc kubenswrapper[4865]: I0216 23:04:47.620584 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cd912ee6-bda4-4859-a70d-3f53ca61ba60-credential-keys\") pod \"keystone-66c88cfbc7-mhfsh\" (UID: \"cd912ee6-bda4-4859-a70d-3f53ca61ba60\") " pod="openstack/keystone-66c88cfbc7-mhfsh"
Feb 16 23:04:47 crc kubenswrapper[4865]: I0216 23:04:47.620620 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd912ee6-bda4-4859-a70d-3f53ca61ba60-config-data\") pod \"keystone-66c88cfbc7-mhfsh\" (UID: \"cd912ee6-bda4-4859-a70d-3f53ca61ba60\") " pod="openstack/keystone-66c88cfbc7-mhfsh"
Feb 16 23:04:47 crc kubenswrapper[4865]: I0216 23:04:47.620687 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpk8n\" (UniqueName: \"kubernetes.io/projected/cd912ee6-bda4-4859-a70d-3f53ca61ba60-kube-api-access-qpk8n\") pod \"keystone-66c88cfbc7-mhfsh\" (UID: \"cd912ee6-bda4-4859-a70d-3f53ca61ba60\") " pod="openstack/keystone-66c88cfbc7-mhfsh"
Feb 16 23:04:47 crc kubenswrapper[4865]: I0216 23:04:47.620714 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cd912ee6-bda4-4859-a70d-3f53ca61ba60-fernet-keys\") pod \"keystone-66c88cfbc7-mhfsh\" (UID: \"cd912ee6-bda4-4859-a70d-3f53ca61ba60\") " pod="openstack/keystone-66c88cfbc7-mhfsh"
Feb 16 23:04:47 crc kubenswrapper[4865]: I0216 23:04:47.620739 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd912ee6-bda4-4859-a70d-3f53ca61ba60-scripts\") pod \"keystone-66c88cfbc7-mhfsh\" (UID: \"cd912ee6-bda4-4859-a70d-3f53ca61ba60\") " pod="openstack/keystone-66c88cfbc7-mhfsh"
Feb 16 23:04:47 crc kubenswrapper[4865]: I0216 23:04:47.620823 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd912ee6-bda4-4859-a70d-3f53ca61ba60-internal-tls-certs\") pod \"keystone-66c88cfbc7-mhfsh\" (UID: \"cd912ee6-bda4-4859-a70d-3f53ca61ba60\") " pod="openstack/keystone-66c88cfbc7-mhfsh"
Feb 16 23:04:47 crc kubenswrapper[4865]: I0216 23:04:47.626449 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd912ee6-bda4-4859-a70d-3f53ca61ba60-config-data\") pod \"keystone-66c88cfbc7-mhfsh\" (UID: \"cd912ee6-bda4-4859-a70d-3f53ca61ba60\") " pod="openstack/keystone-66c88cfbc7-mhfsh"
Feb 16 23:04:47 crc kubenswrapper[4865]: I0216 23:04:47.627859 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd912ee6-bda4-4859-a70d-3f53ca61ba60-combined-ca-bundle\") pod \"keystone-66c88cfbc7-mhfsh\" (UID: \"cd912ee6-bda4-4859-a70d-3f53ca61ba60\") " pod="openstack/keystone-66c88cfbc7-mhfsh"
Feb 16 23:04:47 crc kubenswrapper[4865]: I0216 23:04:47.628715 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd912ee6-bda4-4859-a70d-3f53ca61ba60-internal-tls-certs\") pod \"keystone-66c88cfbc7-mhfsh\" (UID: \"cd912ee6-bda4-4859-a70d-3f53ca61ba60\") " pod="openstack/keystone-66c88cfbc7-mhfsh"
Feb 16 23:04:47 crc kubenswrapper[4865]: I0216 23:04:47.628876 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cd912ee6-bda4-4859-a70d-3f53ca61ba60-fernet-keys\") pod \"keystone-66c88cfbc7-mhfsh\" (UID: \"cd912ee6-bda4-4859-a70d-3f53ca61ba60\") " pod="openstack/keystone-66c88cfbc7-mhfsh"
Feb 16 23:04:47 crc kubenswrapper[4865]: I0216 23:04:47.629751 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd912ee6-bda4-4859-a70d-3f53ca61ba60-public-tls-certs\") pod \"keystone-66c88cfbc7-mhfsh\" (UID: \"cd912ee6-bda4-4859-a70d-3f53ca61ba60\") " pod="openstack/keystone-66c88cfbc7-mhfsh"
Feb 16 23:04:47 crc kubenswrapper[4865]: I0216 23:04:47.640503 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd912ee6-bda4-4859-a70d-3f53ca61ba60-scripts\") pod \"keystone-66c88cfbc7-mhfsh\" (UID: \"cd912ee6-bda4-4859-a70d-3f53ca61ba60\") " pod="openstack/keystone-66c88cfbc7-mhfsh"
Feb 16 23:04:47 crc kubenswrapper[4865]: I0216 23:04:47.650145 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cd912ee6-bda4-4859-a70d-3f53ca61ba60-credential-keys\") pod \"keystone-66c88cfbc7-mhfsh\" (UID: \"cd912ee6-bda4-4859-a70d-3f53ca61ba60\") " pod="openstack/keystone-66c88cfbc7-mhfsh"
Feb 16 23:04:47 crc kubenswrapper[4865]: I0216 23:04:47.667258 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpk8n\" (UniqueName: \"kubernetes.io/projected/cd912ee6-bda4-4859-a70d-3f53ca61ba60-kube-api-access-qpk8n\") pod \"keystone-66c88cfbc7-mhfsh\" (UID: \"cd912ee6-bda4-4859-a70d-3f53ca61ba60\") " pod="openstack/keystone-66c88cfbc7-mhfsh"
Feb 16 23:04:47 crc kubenswrapper[4865]: I0216 23:04:47.709920 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5695f8dc4-jj7h5"]
Feb 16 23:04:47 crc kubenswrapper[4865]: I0216 23:04:47.722835 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-66c88cfbc7-mhfsh"
Feb 16 23:04:47 crc kubenswrapper[4865]: I0216 23:04:47.726862 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5695f8dc4-jj7h5"
Feb 16 23:04:47 crc kubenswrapper[4865]: I0216 23:04:47.730707 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Feb 16 23:04:47 crc kubenswrapper[4865]: I0216 23:04:47.730929 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc"
Feb 16 23:04:47 crc kubenswrapper[4865]: I0216 23:04:47.731867 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-7qdf4"
Feb 16 23:04:47 crc kubenswrapper[4865]: I0216 23:04:47.731922 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Feb 16 23:04:47 crc kubenswrapper[4865]: I0216 23:04:47.733687 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc"
Feb 16 23:04:47 crc kubenswrapper[4865]: I0216 23:04:47.744995 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5695f8dc4-jj7h5"]
Feb 16 23:04:47 crc kubenswrapper[4865]: I0216 23:04:47.811440 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7ce244fb-655c-4941-9ed9-1b2ceddd74d9","Type":"ContainerStarted","Data":"f3b55ab017f208f2208e75831d1b45f5f75b0da5303275f898b1c316b523ea39"}
Feb 16 23:04:47 crc kubenswrapper[4865]: I0216 23:04:47.826411 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/51d7f054-f0b9-43fc-b704-ac61bd427bb0-internal-tls-certs\") pod \"placement-5695f8dc4-jj7h5\" (UID: \"51d7f054-f0b9-43fc-b704-ac61bd427bb0\") " pod="openstack/placement-5695f8dc4-jj7h5"
Feb 16 23:04:47 crc kubenswrapper[4865]: I0216 23:04:47.826468 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkfpl\" (UniqueName: \"kubernetes.io/projected/51d7f054-f0b9-43fc-b704-ac61bd427bb0-kube-api-access-qkfpl\") pod \"placement-5695f8dc4-jj7h5\" (UID: \"51d7f054-f0b9-43fc-b704-ac61bd427bb0\") " pod="openstack/placement-5695f8dc4-jj7h5"
Feb 16 23:04:47 crc kubenswrapper[4865]: I0216 23:04:47.826501 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51d7f054-f0b9-43fc-b704-ac61bd427bb0-logs\") pod \"placement-5695f8dc4-jj7h5\" (UID: \"51d7f054-f0b9-43fc-b704-ac61bd427bb0\") " pod="openstack/placement-5695f8dc4-jj7h5"
Feb 16 23:04:47 crc kubenswrapper[4865]: I0216 23:04:47.826539 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51d7f054-f0b9-43fc-b704-ac61bd427bb0-scripts\") pod \"placement-5695f8dc4-jj7h5\" (UID: \"51d7f054-f0b9-43fc-b704-ac61bd427bb0\") " pod="openstack/placement-5695f8dc4-jj7h5"
Feb 16 23:04:47 crc kubenswrapper[4865]: I0216 23:04:47.826588 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51d7f054-f0b9-43fc-b704-ac61bd427bb0-combined-ca-bundle\") pod \"placement-5695f8dc4-jj7h5\" (UID: \"51d7f054-f0b9-43fc-b704-ac61bd427bb0\") " pod="openstack/placement-5695f8dc4-jj7h5"
Feb 16 23:04:47 crc kubenswrapper[4865]: I0216 23:04:47.826629 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/51d7f054-f0b9-43fc-b704-ac61bd427bb0-public-tls-certs\") pod \"placement-5695f8dc4-jj7h5\" (UID: \"51d7f054-f0b9-43fc-b704-ac61bd427bb0\") " pod="openstack/placement-5695f8dc4-jj7h5"
Feb 16 23:04:47 crc kubenswrapper[4865]: I0216 23:04:47.826670 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51d7f054-f0b9-43fc-b704-ac61bd427bb0-config-data\") pod \"placement-5695f8dc4-jj7h5\" (UID: \"51d7f054-f0b9-43fc-b704-ac61bd427bb0\") " pod="openstack/placement-5695f8dc4-jj7h5"
Feb 16 23:04:47 crc kubenswrapper[4865]: I0216 23:04:47.928541 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/51d7f054-f0b9-43fc-b704-ac61bd427bb0-public-tls-certs\") pod \"placement-5695f8dc4-jj7h5\" (UID: \"51d7f054-f0b9-43fc-b704-ac61bd427bb0\") " pod="openstack/placement-5695f8dc4-jj7h5"
Feb 16 23:04:47 crc kubenswrapper[4865]: I0216 23:04:47.928886 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51d7f054-f0b9-43fc-b704-ac61bd427bb0-config-data\") pod \"placement-5695f8dc4-jj7h5\" (UID: \"51d7f054-f0b9-43fc-b704-ac61bd427bb0\") " pod="openstack/placement-5695f8dc4-jj7h5"
Feb 16 23:04:47 crc kubenswrapper[4865]: I0216 23:04:47.928913 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/51d7f054-f0b9-43fc-b704-ac61bd427bb0-internal-tls-certs\") pod \"placement-5695f8dc4-jj7h5\" (UID: \"51d7f054-f0b9-43fc-b704-ac61bd427bb0\") " pod="openstack/placement-5695f8dc4-jj7h5"
Feb 16 23:04:47 crc kubenswrapper[4865]: I0216 23:04:47.928952 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkfpl\" (UniqueName: \"kubernetes.io/projected/51d7f054-f0b9-43fc-b704-ac61bd427bb0-kube-api-access-qkfpl\") pod \"placement-5695f8dc4-jj7h5\" (UID: \"51d7f054-f0b9-43fc-b704-ac61bd427bb0\") " pod="openstack/placement-5695f8dc4-jj7h5"
Feb 16 23:04:47 crc kubenswrapper[4865]: I0216 23:04:47.929006 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51d7f054-f0b9-43fc-b704-ac61bd427bb0-logs\") pod \"placement-5695f8dc4-jj7h5\" (UID: \"51d7f054-f0b9-43fc-b704-ac61bd427bb0\") " pod="openstack/placement-5695f8dc4-jj7h5"
Feb 16 23:04:47 crc kubenswrapper[4865]: I0216 23:04:47.929042 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51d7f054-f0b9-43fc-b704-ac61bd427bb0-scripts\") pod \"placement-5695f8dc4-jj7h5\" (UID: \"51d7f054-f0b9-43fc-b704-ac61bd427bb0\") " pod="openstack/placement-5695f8dc4-jj7h5"
Feb 16 23:04:47 crc kubenswrapper[4865]: I0216 23:04:47.929090 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51d7f054-f0b9-43fc-b704-ac61bd427bb0-combined-ca-bundle\") pod \"placement-5695f8dc4-jj7h5\" (UID: \"51d7f054-f0b9-43fc-b704-ac61bd427bb0\") " pod="openstack/placement-5695f8dc4-jj7h5"
Feb 16 23:04:47 crc kubenswrapper[4865]: I0216 23:04:47.930194 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51d7f054-f0b9-43fc-b704-ac61bd427bb0-logs\") pod \"placement-5695f8dc4-jj7h5\" (UID: \"51d7f054-f0b9-43fc-b704-ac61bd427bb0\") " pod="openstack/placement-5695f8dc4-jj7h5"
Feb 16 23:04:47 crc kubenswrapper[4865]: I0216 23:04:47.963034 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/51d7f054-f0b9-43fc-b704-ac61bd427bb0-public-tls-certs\") pod \"placement-5695f8dc4-jj7h5\" (UID: \"51d7f054-f0b9-43fc-b704-ac61bd427bb0\") " pod="openstack/placement-5695f8dc4-jj7h5"
Feb 16 23:04:47 crc kubenswrapper[4865]: I0216 23:04:47.963300 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51d7f054-f0b9-43fc-b704-ac61bd427bb0-scripts\") pod \"placement-5695f8dc4-jj7h5\" (UID: \"51d7f054-f0b9-43fc-b704-ac61bd427bb0\") " pod="openstack/placement-5695f8dc4-jj7h5"
Feb 16 23:04:47 crc kubenswrapper[4865]: I0216 23:04:47.963367 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/51d7f054-f0b9-43fc-b704-ac61bd427bb0-internal-tls-certs\") pod \"placement-5695f8dc4-jj7h5\" (UID: \"51d7f054-f0b9-43fc-b704-ac61bd427bb0\") " pod="openstack/placement-5695f8dc4-jj7h5"
Feb 16 23:04:47 crc kubenswrapper[4865]: I0216 23:04:47.968742 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkfpl\" (UniqueName: \"kubernetes.io/projected/51d7f054-f0b9-43fc-b704-ac61bd427bb0-kube-api-access-qkfpl\") pod \"placement-5695f8dc4-jj7h5\" (UID: \"51d7f054-f0b9-43fc-b704-ac61bd427bb0\") " pod="openstack/placement-5695f8dc4-jj7h5"
Feb 16 23:04:47 crc kubenswrapper[4865]: I0216 23:04:47.968873 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51d7f054-f0b9-43fc-b704-ac61bd427bb0-config-data\") pod \"placement-5695f8dc4-jj7h5\" (UID: \"51d7f054-f0b9-43fc-b704-ac61bd427bb0\") " pod="openstack/placement-5695f8dc4-jj7h5"
Feb 16 23:04:47 crc kubenswrapper[4865]: I0216 23:04:47.969266 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51d7f054-f0b9-43fc-b704-ac61bd427bb0-combined-ca-bundle\") pod \"placement-5695f8dc4-jj7h5\" (UID: \"51d7f054-f0b9-43fc-b704-ac61bd427bb0\") " pod="openstack/placement-5695f8dc4-jj7h5"
Feb 16 23:04:48 crc kubenswrapper[4865]: I0216 23:04:48.069031 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5695f8dc4-jj7h5"
Feb 16 23:04:48 crc kubenswrapper[4865]: I0216 23:04:48.261514 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-66c88cfbc7-mhfsh"]
Feb 16 23:04:48 crc kubenswrapper[4865]: I0216 23:04:48.296214 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Feb 16 23:04:48 crc kubenswrapper[4865]: I0216 23:04:48.458381 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec86efb1-7717-4690-8027-22c32dbf537d" path="/var/lib/kubelet/pods/ec86efb1-7717-4690-8027-22c32dbf537d/volumes"
Feb 16 23:04:48 crc kubenswrapper[4865]: I0216 23:04:48.720200 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5695f8dc4-jj7h5"]
Feb 16 23:04:48 crc kubenswrapper[4865]: W0216 23:04:48.731013 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod51d7f054_f0b9_43fc_b704_ac61bd427bb0.slice/crio-94f29824361556a93407a3db1b92c7b45a589d1f9728d93d731c60177b357ad3 WatchSource:0}: Error finding container 94f29824361556a93407a3db1b92c7b45a589d1f9728d93d731c60177b357ad3: Status 404 returned error can't find the container with id 94f29824361556a93407a3db1b92c7b45a589d1f9728d93d731c60177b357ad3
Feb 16 23:04:48 crc kubenswrapper[4865]: I0216 23:04:48.837181 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7ce244fb-655c-4941-9ed9-1b2ceddd74d9","Type":"ContainerStarted","Data":"c689d158a4efd34829e13feae133efec6eb0bf9d8d42da2271d5bd0e8b5628dd"}
Feb 16 23:04:48 crc kubenswrapper[4865]: I0216 23:04:48.843452 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-66c88cfbc7-mhfsh" event={"ID":"cd912ee6-bda4-4859-a70d-3f53ca61ba60","Type":"ContainerStarted","Data":"4142a1101a281a26393093cafce1ccefaf91833cef0306ea0bba2c60ec5c9f51"}
Feb 16 23:04:48 crc kubenswrapper[4865]: I0216 23:04:48.843493 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-66c88cfbc7-mhfsh" event={"ID":"cd912ee6-bda4-4859-a70d-3f53ca61ba60","Type":"ContainerStarted","Data":"1da732c14008a3f2339a8b4933d92d9c3ce8baa43ff429e1d1f4a10b6b678daf"}
Feb 16 23:04:48 crc kubenswrapper[4865]: I0216 23:04:48.844271 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-66c88cfbc7-mhfsh"
Feb 16 23:04:48 crc kubenswrapper[4865]: I0216 23:04:48.846579 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5695f8dc4-jj7h5" event={"ID":"51d7f054-f0b9-43fc-b704-ac61bd427bb0","Type":"ContainerStarted","Data":"94f29824361556a93407a3db1b92c7b45a589d1f9728d93d731c60177b357ad3"}
Feb 16 23:04:48 crc kubenswrapper[4865]: I0216 23:04:48.858828 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=9.858809976 podStartE2EDuration="9.858809976s" podCreationTimestamp="2026-02-16 23:04:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 23:04:48.855539123 +0000 UTC m=+1129.179246084" watchObservedRunningTime="2026-02-16 23:04:48.858809976 +0000 UTC m=+1129.182516937"
Feb 16 23:04:48 crc kubenswrapper[4865]: I0216 23:04:48.889412 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-66c88cfbc7-mhfsh" podStartSLOduration=1.8893900750000001 podStartE2EDuration="1.889390075s" podCreationTimestamp="2026-02-16 23:04:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 23:04:48.882714565 +0000 UTC m=+1129.206421536" watchObservedRunningTime="2026-02-16 23:04:48.889390075 +0000 UTC m=+1129.213097036"
Feb 16 23:04:49 crc kubenswrapper[4865]: I0216 23:04:49.899219 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5695f8dc4-jj7h5" event={"ID":"51d7f054-f0b9-43fc-b704-ac61bd427bb0","Type":"ContainerStarted","Data":"0333984cad94f27eaa3d4f641deab96871737dca502867783ab07da691977faf"}
Feb 16 23:04:49 crc kubenswrapper[4865]: I0216 23:04:49.902325 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5695f8dc4-jj7h5"
Feb 16 23:04:49 crc kubenswrapper[4865]: I0216 23:04:49.917334 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5695f8dc4-jj7h5" event={"ID":"51d7f054-f0b9-43fc-b704-ac61bd427bb0","Type":"ContainerStarted","Data":"f3635b02aeb1e61bcc44da9e61b8158b0ba9e4912e05c0aca338057be33c73d5"}
Feb 16 23:04:49 crc kubenswrapper[4865]: I0216 23:04:49.929957 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-5695f8dc4-jj7h5" podStartSLOduration=2.929926928 podStartE2EDuration="2.929926928s" podCreationTimestamp="2026-02-16 23:04:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 23:04:49.929055623 +0000 UTC m=+1130.252762614" watchObservedRunningTime="2026-02-16 23:04:49.929926928 +0000 UTC m=+1130.253633889"
Feb 16 23:04:49 crc kubenswrapper[4865]: I0216 23:04:49.999149 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Feb 16 23:04:49 crc kubenswrapper[4865]: I0216 23:04:49.999220 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Feb 16 23:04:50 crc kubenswrapper[4865]: I0216 23:04:50.068925 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Feb 16 23:04:50 crc kubenswrapper[4865]: I0216 23:04:50.079876 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Feb 16 23:04:50 crc kubenswrapper[4865]: I0216 23:04:50.907007 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-g9mg8" event={"ID":"c43aca4e-9612-43a8-8af2-5f32e4378af7","Type":"ContainerStarted","Data":"e1b05654316e4786cb81e24e1b0dde7418ed9441a4a91edb86ac5c3868d92144"}
Feb 16 23:04:50 crc kubenswrapper[4865]: I0216 23:04:50.911669 4865 generic.go:334] "Generic (PLEG): container finished" podID="68d6bce0-a0b1-485b-b3fc-6c47cd966129" containerID="8b89cf994607ccbbc6f108879c3934e6c009b9745b40ebbd220533f5c1652bc5" exitCode=0
Feb 16 23:04:50 crc kubenswrapper[4865]: I0216 23:04:50.911761 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-94wth" event={"ID":"68d6bce0-a0b1-485b-b3fc-6c47cd966129","Type":"ContainerDied","Data":"8b89cf994607ccbbc6f108879c3934e6c009b9745b40ebbd220533f5c1652bc5"}
Feb 16 23:04:50 crc kubenswrapper[4865]: I0216 23:04:50.911908 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Feb 16 23:04:50 crc kubenswrapper[4865]: I0216 23:04:50.912179 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Feb 16 23:04:50 crc kubenswrapper[4865]: I0216 23:04:50.912206 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5695f8dc4-jj7h5"
Feb 16 23:04:50 crc kubenswrapper[4865]: I0216 23:04:50.930875 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-g9mg8" podStartSLOduration=5.157156982 podStartE2EDuration="49.930849447s" podCreationTimestamp="2026-02-16 23:04:01 +0000 UTC" firstStartedPulling="2026-02-16 23:04:04.13083183 +0000 UTC m=+1084.454538791" lastFinishedPulling="2026-02-16 23:04:48.904524295 +0000 UTC m=+1129.228231256" observedRunningTime="2026-02-16 23:04:50.920297508 +0000 UTC m=+1131.244004459"
watchObservedRunningTime="2026-02-16 23:04:50.930849447 +0000 UTC m=+1131.254556408" Feb 16 23:04:51 crc kubenswrapper[4865]: I0216 23:04:51.484399 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5dbb7f8956-m76fk" podUID="cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Feb 16 23:04:51 crc kubenswrapper[4865]: I0216 23:04:51.632339 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7ff854866d-9gv97" podUID="17473ac0-4cf6-4751-ac0e-fd73a9fc2c7a" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Feb 16 23:04:55 crc kubenswrapper[4865]: I0216 23:04:55.966830 4865 generic.go:334] "Generic (PLEG): container finished" podID="c43aca4e-9612-43a8-8af2-5f32e4378af7" containerID="e1b05654316e4786cb81e24e1b0dde7418ed9441a4a91edb86ac5c3868d92144" exitCode=0 Feb 16 23:04:55 crc kubenswrapper[4865]: I0216 23:04:55.967135 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-g9mg8" event={"ID":"c43aca4e-9612-43a8-8af2-5f32e4378af7","Type":"ContainerDied","Data":"e1b05654316e4786cb81e24e1b0dde7418ed9441a4a91edb86ac5c3868d92144"} Feb 16 23:04:57 crc kubenswrapper[4865]: I0216 23:04:57.354814 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-g9mg8" Feb 16 23:04:57 crc kubenswrapper[4865]: I0216 23:04:57.459303 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c43aca4e-9612-43a8-8af2-5f32e4378af7-config-data\") pod \"c43aca4e-9612-43a8-8af2-5f32e4378af7\" (UID: \"c43aca4e-9612-43a8-8af2-5f32e4378af7\") " Feb 16 23:04:57 crc kubenswrapper[4865]: I0216 23:04:57.459411 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c43aca4e-9612-43a8-8af2-5f32e4378af7-db-sync-config-data\") pod \"c43aca4e-9612-43a8-8af2-5f32e4378af7\" (UID: \"c43aca4e-9612-43a8-8af2-5f32e4378af7\") " Feb 16 23:04:57 crc kubenswrapper[4865]: I0216 23:04:57.459551 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c43aca4e-9612-43a8-8af2-5f32e4378af7-etc-machine-id\") pod \"c43aca4e-9612-43a8-8af2-5f32e4378af7\" (UID: \"c43aca4e-9612-43a8-8af2-5f32e4378af7\") " Feb 16 23:04:57 crc kubenswrapper[4865]: I0216 23:04:57.459704 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xtks\" (UniqueName: \"kubernetes.io/projected/c43aca4e-9612-43a8-8af2-5f32e4378af7-kube-api-access-9xtks\") pod \"c43aca4e-9612-43a8-8af2-5f32e4378af7\" (UID: \"c43aca4e-9612-43a8-8af2-5f32e4378af7\") " Feb 16 23:04:57 crc kubenswrapper[4865]: I0216 23:04:57.459721 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c43aca4e-9612-43a8-8af2-5f32e4378af7-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "c43aca4e-9612-43a8-8af2-5f32e4378af7" (UID: "c43aca4e-9612-43a8-8af2-5f32e4378af7"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 23:04:57 crc kubenswrapper[4865]: I0216 23:04:57.459843 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c43aca4e-9612-43a8-8af2-5f32e4378af7-combined-ca-bundle\") pod \"c43aca4e-9612-43a8-8af2-5f32e4378af7\" (UID: \"c43aca4e-9612-43a8-8af2-5f32e4378af7\") " Feb 16 23:04:57 crc kubenswrapper[4865]: I0216 23:04:57.459900 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c43aca4e-9612-43a8-8af2-5f32e4378af7-scripts\") pod \"c43aca4e-9612-43a8-8af2-5f32e4378af7\" (UID: \"c43aca4e-9612-43a8-8af2-5f32e4378af7\") " Feb 16 23:04:57 crc kubenswrapper[4865]: I0216 23:04:57.460390 4865 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c43aca4e-9612-43a8-8af2-5f32e4378af7-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 16 23:04:57 crc kubenswrapper[4865]: I0216 23:04:57.468678 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c43aca4e-9612-43a8-8af2-5f32e4378af7-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "c43aca4e-9612-43a8-8af2-5f32e4378af7" (UID: "c43aca4e-9612-43a8-8af2-5f32e4378af7"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:04:57 crc kubenswrapper[4865]: I0216 23:04:57.470663 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c43aca4e-9612-43a8-8af2-5f32e4378af7-scripts" (OuterVolumeSpecName: "scripts") pod "c43aca4e-9612-43a8-8af2-5f32e4378af7" (UID: "c43aca4e-9612-43a8-8af2-5f32e4378af7"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:04:57 crc kubenswrapper[4865]: I0216 23:04:57.470814 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c43aca4e-9612-43a8-8af2-5f32e4378af7-kube-api-access-9xtks" (OuterVolumeSpecName: "kube-api-access-9xtks") pod "c43aca4e-9612-43a8-8af2-5f32e4378af7" (UID: "c43aca4e-9612-43a8-8af2-5f32e4378af7"). InnerVolumeSpecName "kube-api-access-9xtks". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:04:57 crc kubenswrapper[4865]: I0216 23:04:57.505263 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c43aca4e-9612-43a8-8af2-5f32e4378af7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c43aca4e-9612-43a8-8af2-5f32e4378af7" (UID: "c43aca4e-9612-43a8-8af2-5f32e4378af7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:04:57 crc kubenswrapper[4865]: I0216 23:04:57.523902 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c43aca4e-9612-43a8-8af2-5f32e4378af7-config-data" (OuterVolumeSpecName: "config-data") pod "c43aca4e-9612-43a8-8af2-5f32e4378af7" (UID: "c43aca4e-9612-43a8-8af2-5f32e4378af7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:04:57 crc kubenswrapper[4865]: I0216 23:04:57.561937 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c43aca4e-9612-43a8-8af2-5f32e4378af7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 23:04:57 crc kubenswrapper[4865]: I0216 23:04:57.561972 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c43aca4e-9612-43a8-8af2-5f32e4378af7-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 23:04:57 crc kubenswrapper[4865]: I0216 23:04:57.561982 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c43aca4e-9612-43a8-8af2-5f32e4378af7-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 23:04:57 crc kubenswrapper[4865]: I0216 23:04:57.561990 4865 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c43aca4e-9612-43a8-8af2-5f32e4378af7-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 23:04:57 crc kubenswrapper[4865]: I0216 23:04:57.561998 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xtks\" (UniqueName: \"kubernetes.io/projected/c43aca4e-9612-43a8-8af2-5f32e4378af7-kube-api-access-9xtks\") on node \"crc\" DevicePath \"\"" Feb 16 23:04:57 crc kubenswrapper[4865]: I0216 23:04:57.994265 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-g9mg8" event={"ID":"c43aca4e-9612-43a8-8af2-5f32e4378af7","Type":"ContainerDied","Data":"b54137dd2f0dbd134f97455d4059b9181371c6beffdd10a40ae4a00613c970c8"} Feb 16 23:04:57 crc kubenswrapper[4865]: I0216 23:04:57.994330 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b54137dd2f0dbd134f97455d4059b9181371c6beffdd10a40ae4a00613c970c8" Feb 16 23:04:57 crc kubenswrapper[4865]: I0216 23:04:57.994376 4865 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-g9mg8" Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.046250 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-94wth" Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.106476 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v965s\" (UniqueName: \"kubernetes.io/projected/68d6bce0-a0b1-485b-b3fc-6c47cd966129-kube-api-access-v965s\") pod \"68d6bce0-a0b1-485b-b3fc-6c47cd966129\" (UID: \"68d6bce0-a0b1-485b-b3fc-6c47cd966129\") " Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.106595 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/68d6bce0-a0b1-485b-b3fc-6c47cd966129-db-sync-config-data\") pod \"68d6bce0-a0b1-485b-b3fc-6c47cd966129\" (UID: \"68d6bce0-a0b1-485b-b3fc-6c47cd966129\") " Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.106627 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68d6bce0-a0b1-485b-b3fc-6c47cd966129-combined-ca-bundle\") pod \"68d6bce0-a0b1-485b-b3fc-6c47cd966129\" (UID: \"68d6bce0-a0b1-485b-b3fc-6c47cd966129\") " Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.112500 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68d6bce0-a0b1-485b-b3fc-6c47cd966129-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "68d6bce0-a0b1-485b-b3fc-6c47cd966129" (UID: "68d6bce0-a0b1-485b-b3fc-6c47cd966129"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.114578 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68d6bce0-a0b1-485b-b3fc-6c47cd966129-kube-api-access-v965s" (OuterVolumeSpecName: "kube-api-access-v965s") pod "68d6bce0-a0b1-485b-b3fc-6c47cd966129" (UID: "68d6bce0-a0b1-485b-b3fc-6c47cd966129"). InnerVolumeSpecName "kube-api-access-v965s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.175128 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68d6bce0-a0b1-485b-b3fc-6c47cd966129-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "68d6bce0-a0b1-485b-b3fc-6c47cd966129" (UID: "68d6bce0-a0b1-485b-b3fc-6c47cd966129"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.211131 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v965s\" (UniqueName: \"kubernetes.io/projected/68d6bce0-a0b1-485b-b3fc-6c47cd966129-kube-api-access-v965s\") on node \"crc\" DevicePath \"\"" Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.211347 4865 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/68d6bce0-a0b1-485b-b3fc-6c47cd966129-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.211419 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68d6bce0-a0b1-485b-b3fc-6c47cd966129-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.248654 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 23:04:58 crc kubenswrapper[4865]: E0216 
23:04:58.249262 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68d6bce0-a0b1-485b-b3fc-6c47cd966129" containerName="barbican-db-sync" Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.249365 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="68d6bce0-a0b1-485b-b3fc-6c47cd966129" containerName="barbican-db-sync" Feb 16 23:04:58 crc kubenswrapper[4865]: E0216 23:04:58.249461 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c43aca4e-9612-43a8-8af2-5f32e4378af7" containerName="cinder-db-sync" Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.249543 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="c43aca4e-9612-43a8-8af2-5f32e4378af7" containerName="cinder-db-sync" Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.249863 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="68d6bce0-a0b1-485b-b3fc-6c47cd966129" containerName="barbican-db-sync" Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.249949 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="c43aca4e-9612-43a8-8af2-5f32e4378af7" containerName="cinder-db-sync" Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.258538 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.262851 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.263075 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.263203 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.263393 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-sct9w" Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.267798 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.312241 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b2129625-645f-49f5-8d84-9ebc29f478d9-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b2129625-645f-49f5-8d84-9ebc29f478d9\") " pod="openstack/cinder-scheduler-0" Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.312306 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2129625-645f-49f5-8d84-9ebc29f478d9-config-data\") pod \"cinder-scheduler-0\" (UID: \"b2129625-645f-49f5-8d84-9ebc29f478d9\") " pod="openstack/cinder-scheduler-0" Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.312328 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b2129625-645f-49f5-8d84-9ebc29f478d9-config-data-custom\") pod \"cinder-scheduler-0\" (UID: 
\"b2129625-645f-49f5-8d84-9ebc29f478d9\") " pod="openstack/cinder-scheduler-0" Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.312395 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2129625-645f-49f5-8d84-9ebc29f478d9-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b2129625-645f-49f5-8d84-9ebc29f478d9\") " pod="openstack/cinder-scheduler-0" Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.312423 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbbsx\" (UniqueName: \"kubernetes.io/projected/b2129625-645f-49f5-8d84-9ebc29f478d9-kube-api-access-rbbsx\") pod \"cinder-scheduler-0\" (UID: \"b2129625-645f-49f5-8d84-9ebc29f478d9\") " pod="openstack/cinder-scheduler-0" Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.312457 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2129625-645f-49f5-8d84-9ebc29f478d9-scripts\") pod \"cinder-scheduler-0\" (UID: \"b2129625-645f-49f5-8d84-9ebc29f478d9\") " pod="openstack/cinder-scheduler-0" Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.371627 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-d68b9cb4c-npsfp"] Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.376740 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d68b9cb4c-npsfp" Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.407487 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d68b9cb4c-npsfp"] Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.439992 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b75dfa8d-c91f-467d-9967-e4cfbe6f10c2-config\") pod \"dnsmasq-dns-d68b9cb4c-npsfp\" (UID: \"b75dfa8d-c91f-467d-9967-e4cfbe6f10c2\") " pod="openstack/dnsmasq-dns-d68b9cb4c-npsfp" Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.440077 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2129625-645f-49f5-8d84-9ebc29f478d9-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b2129625-645f-49f5-8d84-9ebc29f478d9\") " pod="openstack/cinder-scheduler-0" Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.440106 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b75dfa8d-c91f-467d-9967-e4cfbe6f10c2-dns-svc\") pod \"dnsmasq-dns-d68b9cb4c-npsfp\" (UID: \"b75dfa8d-c91f-467d-9967-e4cfbe6f10c2\") " pod="openstack/dnsmasq-dns-d68b9cb4c-npsfp" Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.440134 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbbsx\" (UniqueName: \"kubernetes.io/projected/b2129625-645f-49f5-8d84-9ebc29f478d9-kube-api-access-rbbsx\") pod \"cinder-scheduler-0\" (UID: \"b2129625-645f-49f5-8d84-9ebc29f478d9\") " pod="openstack/cinder-scheduler-0" Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.440160 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/b75dfa8d-c91f-467d-9967-e4cfbe6f10c2-ovsdbserver-nb\") pod \"dnsmasq-dns-d68b9cb4c-npsfp\" (UID: \"b75dfa8d-c91f-467d-9967-e4cfbe6f10c2\") " pod="openstack/dnsmasq-dns-d68b9cb4c-npsfp" Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.440189 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqbg5\" (UniqueName: \"kubernetes.io/projected/b75dfa8d-c91f-467d-9967-e4cfbe6f10c2-kube-api-access-xqbg5\") pod \"dnsmasq-dns-d68b9cb4c-npsfp\" (UID: \"b75dfa8d-c91f-467d-9967-e4cfbe6f10c2\") " pod="openstack/dnsmasq-dns-d68b9cb4c-npsfp" Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.440212 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2129625-645f-49f5-8d84-9ebc29f478d9-scripts\") pod \"cinder-scheduler-0\" (UID: \"b2129625-645f-49f5-8d84-9ebc29f478d9\") " pod="openstack/cinder-scheduler-0" Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.440312 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b2129625-645f-49f5-8d84-9ebc29f478d9-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b2129625-645f-49f5-8d84-9ebc29f478d9\") " pod="openstack/cinder-scheduler-0" Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.440343 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2129625-645f-49f5-8d84-9ebc29f478d9-config-data\") pod \"cinder-scheduler-0\" (UID: \"b2129625-645f-49f5-8d84-9ebc29f478d9\") " pod="openstack/cinder-scheduler-0" Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.440360 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b75dfa8d-c91f-467d-9967-e4cfbe6f10c2-dns-swift-storage-0\") 
pod \"dnsmasq-dns-d68b9cb4c-npsfp\" (UID: \"b75dfa8d-c91f-467d-9967-e4cfbe6f10c2\") " pod="openstack/dnsmasq-dns-d68b9cb4c-npsfp" Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.440379 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b2129625-645f-49f5-8d84-9ebc29f478d9-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b2129625-645f-49f5-8d84-9ebc29f478d9\") " pod="openstack/cinder-scheduler-0" Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.440422 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b75dfa8d-c91f-467d-9967-e4cfbe6f10c2-ovsdbserver-sb\") pod \"dnsmasq-dns-d68b9cb4c-npsfp\" (UID: \"b75dfa8d-c91f-467d-9967-e4cfbe6f10c2\") " pod="openstack/dnsmasq-dns-d68b9cb4c-npsfp" Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.443037 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b2129625-645f-49f5-8d84-9ebc29f478d9-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b2129625-645f-49f5-8d84-9ebc29f478d9\") " pod="openstack/cinder-scheduler-0" Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.452166 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2129625-645f-49f5-8d84-9ebc29f478d9-config-data\") pod \"cinder-scheduler-0\" (UID: \"b2129625-645f-49f5-8d84-9ebc29f478d9\") " pod="openstack/cinder-scheduler-0" Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.473506 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b2129625-645f-49f5-8d84-9ebc29f478d9-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b2129625-645f-49f5-8d84-9ebc29f478d9\") " pod="openstack/cinder-scheduler-0" Feb 16 
23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.474505 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2129625-645f-49f5-8d84-9ebc29f478d9-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b2129625-645f-49f5-8d84-9ebc29f478d9\") " pod="openstack/cinder-scheduler-0" Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.475766 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2129625-645f-49f5-8d84-9ebc29f478d9-scripts\") pod \"cinder-scheduler-0\" (UID: \"b2129625-645f-49f5-8d84-9ebc29f478d9\") " pod="openstack/cinder-scheduler-0" Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.483784 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbbsx\" (UniqueName: \"kubernetes.io/projected/b2129625-645f-49f5-8d84-9ebc29f478d9-kube-api-access-rbbsx\") pod \"cinder-scheduler-0\" (UID: \"b2129625-645f-49f5-8d84-9ebc29f478d9\") " pod="openstack/cinder-scheduler-0" Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.523519 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.524886 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.528626 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.541602 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b75dfa8d-c91f-467d-9967-e4cfbe6f10c2-dns-swift-storage-0\") pod \"dnsmasq-dns-d68b9cb4c-npsfp\" (UID: \"b75dfa8d-c91f-467d-9967-e4cfbe6f10c2\") " pod="openstack/dnsmasq-dns-d68b9cb4c-npsfp" Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.541662 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b75dfa8d-c91f-467d-9967-e4cfbe6f10c2-ovsdbserver-sb\") pod \"dnsmasq-dns-d68b9cb4c-npsfp\" (UID: \"b75dfa8d-c91f-467d-9967-e4cfbe6f10c2\") " pod="openstack/dnsmasq-dns-d68b9cb4c-npsfp" Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.541689 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b75dfa8d-c91f-467d-9967-e4cfbe6f10c2-config\") pod \"dnsmasq-dns-d68b9cb4c-npsfp\" (UID: \"b75dfa8d-c91f-467d-9967-e4cfbe6f10c2\") " pod="openstack/dnsmasq-dns-d68b9cb4c-npsfp" Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.541739 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b75dfa8d-c91f-467d-9967-e4cfbe6f10c2-dns-svc\") pod \"dnsmasq-dns-d68b9cb4c-npsfp\" (UID: \"b75dfa8d-c91f-467d-9967-e4cfbe6f10c2\") " pod="openstack/dnsmasq-dns-d68b9cb4c-npsfp" Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.541964 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/b75dfa8d-c91f-467d-9967-e4cfbe6f10c2-ovsdbserver-nb\") pod \"dnsmasq-dns-d68b9cb4c-npsfp\" (UID: \"b75dfa8d-c91f-467d-9967-e4cfbe6f10c2\") " pod="openstack/dnsmasq-dns-d68b9cb4c-npsfp" Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.542016 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqbg5\" (UniqueName: \"kubernetes.io/projected/b75dfa8d-c91f-467d-9967-e4cfbe6f10c2-kube-api-access-xqbg5\") pod \"dnsmasq-dns-d68b9cb4c-npsfp\" (UID: \"b75dfa8d-c91f-467d-9967-e4cfbe6f10c2\") " pod="openstack/dnsmasq-dns-d68b9cb4c-npsfp" Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.543633 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b75dfa8d-c91f-467d-9967-e4cfbe6f10c2-dns-swift-storage-0\") pod \"dnsmasq-dns-d68b9cb4c-npsfp\" (UID: \"b75dfa8d-c91f-467d-9967-e4cfbe6f10c2\") " pod="openstack/dnsmasq-dns-d68b9cb4c-npsfp" Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.543719 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b75dfa8d-c91f-467d-9967-e4cfbe6f10c2-dns-svc\") pod \"dnsmasq-dns-d68b9cb4c-npsfp\" (UID: \"b75dfa8d-c91f-467d-9967-e4cfbe6f10c2\") " pod="openstack/dnsmasq-dns-d68b9cb4c-npsfp" Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.544231 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b75dfa8d-c91f-467d-9967-e4cfbe6f10c2-ovsdbserver-sb\") pod \"dnsmasq-dns-d68b9cb4c-npsfp\" (UID: \"b75dfa8d-c91f-467d-9967-e4cfbe6f10c2\") " pod="openstack/dnsmasq-dns-d68b9cb4c-npsfp" Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.544771 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b75dfa8d-c91f-467d-9967-e4cfbe6f10c2-config\") pod 
\"dnsmasq-dns-d68b9cb4c-npsfp\" (UID: \"b75dfa8d-c91f-467d-9967-e4cfbe6f10c2\") " pod="openstack/dnsmasq-dns-d68b9cb4c-npsfp" Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.547021 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b75dfa8d-c91f-467d-9967-e4cfbe6f10c2-ovsdbserver-nb\") pod \"dnsmasq-dns-d68b9cb4c-npsfp\" (UID: \"b75dfa8d-c91f-467d-9967-e4cfbe6f10c2\") " pod="openstack/dnsmasq-dns-d68b9cb4c-npsfp" Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.561975 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.572895 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.599778 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqbg5\" (UniqueName: \"kubernetes.io/projected/b75dfa8d-c91f-467d-9967-e4cfbe6f10c2-kube-api-access-xqbg5\") pod \"dnsmasq-dns-d68b9cb4c-npsfp\" (UID: \"b75dfa8d-c91f-467d-9967-e4cfbe6f10c2\") " pod="openstack/dnsmasq-dns-d68b9cb4c-npsfp" Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.639000 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d68b9cb4c-npsfp" Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.643406 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/426c422a-fcd9-4686-98fd-f02bfb76d624-etc-machine-id\") pod \"cinder-api-0\" (UID: \"426c422a-fcd9-4686-98fd-f02bfb76d624\") " pod="openstack/cinder-api-0" Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.643502 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/426c422a-fcd9-4686-98fd-f02bfb76d624-config-data-custom\") pod \"cinder-api-0\" (UID: \"426c422a-fcd9-4686-98fd-f02bfb76d624\") " pod="openstack/cinder-api-0" Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.643547 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/426c422a-fcd9-4686-98fd-f02bfb76d624-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"426c422a-fcd9-4686-98fd-f02bfb76d624\") " pod="openstack/cinder-api-0" Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.643598 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/426c422a-fcd9-4686-98fd-f02bfb76d624-logs\") pod \"cinder-api-0\" (UID: \"426c422a-fcd9-4686-98fd-f02bfb76d624\") " pod="openstack/cinder-api-0" Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.643671 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/426c422a-fcd9-4686-98fd-f02bfb76d624-config-data\") pod \"cinder-api-0\" (UID: \"426c422a-fcd9-4686-98fd-f02bfb76d624\") " pod="openstack/cinder-api-0" Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.643739 4865 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92mmq\" (UniqueName: \"kubernetes.io/projected/426c422a-fcd9-4686-98fd-f02bfb76d624-kube-api-access-92mmq\") pod \"cinder-api-0\" (UID: \"426c422a-fcd9-4686-98fd-f02bfb76d624\") " pod="openstack/cinder-api-0" Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.643771 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/426c422a-fcd9-4686-98fd-f02bfb76d624-scripts\") pod \"cinder-api-0\" (UID: \"426c422a-fcd9-4686-98fd-f02bfb76d624\") " pod="openstack/cinder-api-0" Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.746892 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92mmq\" (UniqueName: \"kubernetes.io/projected/426c422a-fcd9-4686-98fd-f02bfb76d624-kube-api-access-92mmq\") pod \"cinder-api-0\" (UID: \"426c422a-fcd9-4686-98fd-f02bfb76d624\") " pod="openstack/cinder-api-0" Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.747135 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/426c422a-fcd9-4686-98fd-f02bfb76d624-scripts\") pod \"cinder-api-0\" (UID: \"426c422a-fcd9-4686-98fd-f02bfb76d624\") " pod="openstack/cinder-api-0" Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.747160 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/426c422a-fcd9-4686-98fd-f02bfb76d624-etc-machine-id\") pod \"cinder-api-0\" (UID: \"426c422a-fcd9-4686-98fd-f02bfb76d624\") " pod="openstack/cinder-api-0" Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.747269 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/426c422a-fcd9-4686-98fd-f02bfb76d624-config-data-custom\") pod \"cinder-api-0\" (UID: \"426c422a-fcd9-4686-98fd-f02bfb76d624\") " pod="openstack/cinder-api-0" Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.747323 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/426c422a-fcd9-4686-98fd-f02bfb76d624-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"426c422a-fcd9-4686-98fd-f02bfb76d624\") " pod="openstack/cinder-api-0" Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.747650 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/426c422a-fcd9-4686-98fd-f02bfb76d624-logs\") pod \"cinder-api-0\" (UID: \"426c422a-fcd9-4686-98fd-f02bfb76d624\") " pod="openstack/cinder-api-0" Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.747893 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/426c422a-fcd9-4686-98fd-f02bfb76d624-config-data\") pod \"cinder-api-0\" (UID: \"426c422a-fcd9-4686-98fd-f02bfb76d624\") " pod="openstack/cinder-api-0" Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.748050 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/426c422a-fcd9-4686-98fd-f02bfb76d624-etc-machine-id\") pod \"cinder-api-0\" (UID: \"426c422a-fcd9-4686-98fd-f02bfb76d624\") " pod="openstack/cinder-api-0" Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.748071 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/426c422a-fcd9-4686-98fd-f02bfb76d624-logs\") pod \"cinder-api-0\" (UID: \"426c422a-fcd9-4686-98fd-f02bfb76d624\") " pod="openstack/cinder-api-0" Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.759074 4865 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/426c422a-fcd9-4686-98fd-f02bfb76d624-scripts\") pod \"cinder-api-0\" (UID: \"426c422a-fcd9-4686-98fd-f02bfb76d624\") " pod="openstack/cinder-api-0" Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.761800 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/426c422a-fcd9-4686-98fd-f02bfb76d624-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"426c422a-fcd9-4686-98fd-f02bfb76d624\") " pod="openstack/cinder-api-0" Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.768154 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/426c422a-fcd9-4686-98fd-f02bfb76d624-config-data\") pod \"cinder-api-0\" (UID: \"426c422a-fcd9-4686-98fd-f02bfb76d624\") " pod="openstack/cinder-api-0" Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.772788 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92mmq\" (UniqueName: \"kubernetes.io/projected/426c422a-fcd9-4686-98fd-f02bfb76d624-kube-api-access-92mmq\") pod \"cinder-api-0\" (UID: \"426c422a-fcd9-4686-98fd-f02bfb76d624\") " pod="openstack/cinder-api-0" Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.773045 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/426c422a-fcd9-4686-98fd-f02bfb76d624-config-data-custom\") pod \"cinder-api-0\" (UID: \"426c422a-fcd9-4686-98fd-f02bfb76d624\") " pod="openstack/cinder-api-0" Feb 16 23:04:58 crc kubenswrapper[4865]: I0216 23:04:58.961597 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 23:04:59.064883 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1bbb6cce-272f-421d-a4f2-af006f112e21","Type":"ContainerStarted","Data":"fb7af98a2656ededa3d5a7baef4381b99204f4af3fd5a27a78b47308bee55c8f"} Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 23:04:59.065240 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1bbb6cce-272f-421d-a4f2-af006f112e21" containerName="ceilometer-central-agent" containerID="cri-o://04aed157248f61a7a8beeaa92b88351bcb0d08da029c6bdb89e6b2cebd23fad9" gracePeriod=30 Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 23:04:59.065535 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 23:04:59.065791 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1bbb6cce-272f-421d-a4f2-af006f112e21" containerName="proxy-httpd" containerID="cri-o://fb7af98a2656ededa3d5a7baef4381b99204f4af3fd5a27a78b47308bee55c8f" gracePeriod=30 Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 23:04:59.065835 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1bbb6cce-272f-421d-a4f2-af006f112e21" containerName="sg-core" containerID="cri-o://1bdf317171916a450298119302d7b50700ec5fb1a920dc1c862a330c6a5d19c8" gracePeriod=30 Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 23:04:59.065867 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1bbb6cce-272f-421d-a4f2-af006f112e21" containerName="ceilometer-notification-agent" containerID="cri-o://55b1a0af898b5085dcca9774e9ba2e9a3a19e4d58b5df88f7c56cc3f8cfefae2" gracePeriod=30 Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 23:04:59.099396 4865 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-94wth" event={"ID":"68d6bce0-a0b1-485b-b3fc-6c47cd966129","Type":"ContainerDied","Data":"0ac924b55bded256308bdb1353b4ef1c487446512ba224d9c1f0c5d5431d968e"} Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 23:04:59.099437 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ac924b55bded256308bdb1353b4ef1c487446512ba224d9c1f0c5d5431d968e" Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 23:04:59.099567 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-94wth" Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 23:04:59.120570 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.9731406639999998 podStartE2EDuration="57.120549747s" podCreationTimestamp="2026-02-16 23:04:02 +0000 UTC" firstStartedPulling="2026-02-16 23:04:04.033628559 +0000 UTC m=+1084.357335520" lastFinishedPulling="2026-02-16 23:04:58.181037642 +0000 UTC m=+1138.504744603" observedRunningTime="2026-02-16 23:04:59.109272347 +0000 UTC m=+1139.432979308" watchObservedRunningTime="2026-02-16 23:04:59.120549747 +0000 UTC m=+1139.444256708" Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 23:04:59.262794 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 23:04:59.286459 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d68b9cb4c-npsfp"] Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 23:04:59.510779 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-6b6779c894-4z8tf"] Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 23:04:59.522456 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-6b6779c894-4z8tf" Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 23:04:59.535753 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 23:04:59.536031 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 23:04:59.536139 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-9nfmk" Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 23:04:59.559271 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-58c998ff9-ghm8t"] Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 23:04:59.560970 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-58c998ff9-ghm8t" Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 23:04:59.596186 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 23:04:59.601563 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/14d1a57c-7cda-4753-a6de-fe9a98f4fd02-config-data-custom\") pod \"barbican-keystone-listener-6b6779c894-4z8tf\" (UID: \"14d1a57c-7cda-4753-a6de-fe9a98f4fd02\") " pod="openstack/barbican-keystone-listener-6b6779c894-4z8tf" Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 23:04:59.601646 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14d1a57c-7cda-4753-a6de-fe9a98f4fd02-combined-ca-bundle\") pod \"barbican-keystone-listener-6b6779c894-4z8tf\" (UID: \"14d1a57c-7cda-4753-a6de-fe9a98f4fd02\") " 
pod="openstack/barbican-keystone-listener-6b6779c894-4z8tf" Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 23:04:59.601692 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6633f123-ac1f-4a25-b20d-0c0eda648f92-combined-ca-bundle\") pod \"barbican-worker-58c998ff9-ghm8t\" (UID: \"6633f123-ac1f-4a25-b20d-0c0eda648f92\") " pod="openstack/barbican-worker-58c998ff9-ghm8t" Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 23:04:59.601754 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6633f123-ac1f-4a25-b20d-0c0eda648f92-logs\") pod \"barbican-worker-58c998ff9-ghm8t\" (UID: \"6633f123-ac1f-4a25-b20d-0c0eda648f92\") " pod="openstack/barbican-worker-58c998ff9-ghm8t" Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 23:04:59.601807 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14d1a57c-7cda-4753-a6de-fe9a98f4fd02-config-data\") pod \"barbican-keystone-listener-6b6779c894-4z8tf\" (UID: \"14d1a57c-7cda-4753-a6de-fe9a98f4fd02\") " pod="openstack/barbican-keystone-listener-6b6779c894-4z8tf" Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 23:04:59.601871 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6633f123-ac1f-4a25-b20d-0c0eda648f92-config-data-custom\") pod \"barbican-worker-58c998ff9-ghm8t\" (UID: \"6633f123-ac1f-4a25-b20d-0c0eda648f92\") " pod="openstack/barbican-worker-58c998ff9-ghm8t" Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 23:04:59.601932 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6633f123-ac1f-4a25-b20d-0c0eda648f92-config-data\") pod 
\"barbican-worker-58c998ff9-ghm8t\" (UID: \"6633f123-ac1f-4a25-b20d-0c0eda648f92\") " pod="openstack/barbican-worker-58c998ff9-ghm8t" Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 23:04:59.601979 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkwwv\" (UniqueName: \"kubernetes.io/projected/6633f123-ac1f-4a25-b20d-0c0eda648f92-kube-api-access-zkwwv\") pod \"barbican-worker-58c998ff9-ghm8t\" (UID: \"6633f123-ac1f-4a25-b20d-0c0eda648f92\") " pod="openstack/barbican-worker-58c998ff9-ghm8t" Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 23:04:59.602009 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14d1a57c-7cda-4753-a6de-fe9a98f4fd02-logs\") pod \"barbican-keystone-listener-6b6779c894-4z8tf\" (UID: \"14d1a57c-7cda-4753-a6de-fe9a98f4fd02\") " pod="openstack/barbican-keystone-listener-6b6779c894-4z8tf" Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 23:04:59.602040 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwtgf\" (UniqueName: \"kubernetes.io/projected/14d1a57c-7cda-4753-a6de-fe9a98f4fd02-kube-api-access-mwtgf\") pod \"barbican-keystone-listener-6b6779c894-4z8tf\" (UID: \"14d1a57c-7cda-4753-a6de-fe9a98f4fd02\") " pod="openstack/barbican-keystone-listener-6b6779c894-4z8tf" Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 23:04:59.627192 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6b6779c894-4z8tf"] Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 23:04:59.637054 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-58c998ff9-ghm8t"] Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 23:04:59.650067 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d68b9cb4c-npsfp"] Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 
23:04:59.668946 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-jqvgs"] Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 23:04:59.695447 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-jqvgs" Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 23:04:59.711150 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwtgf\" (UniqueName: \"kubernetes.io/projected/14d1a57c-7cda-4753-a6de-fe9a98f4fd02-kube-api-access-mwtgf\") pod \"barbican-keystone-listener-6b6779c894-4z8tf\" (UID: \"14d1a57c-7cda-4753-a6de-fe9a98f4fd02\") " pod="openstack/barbican-keystone-listener-6b6779c894-4z8tf" Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 23:04:59.711269 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/14d1a57c-7cda-4753-a6de-fe9a98f4fd02-config-data-custom\") pod \"barbican-keystone-listener-6b6779c894-4z8tf\" (UID: \"14d1a57c-7cda-4753-a6de-fe9a98f4fd02\") " pod="openstack/barbican-keystone-listener-6b6779c894-4z8tf" Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 23:04:59.711443 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14d1a57c-7cda-4753-a6de-fe9a98f4fd02-combined-ca-bundle\") pod \"barbican-keystone-listener-6b6779c894-4z8tf\" (UID: \"14d1a57c-7cda-4753-a6de-fe9a98f4fd02\") " pod="openstack/barbican-keystone-listener-6b6779c894-4z8tf" Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 23:04:59.711475 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6633f123-ac1f-4a25-b20d-0c0eda648f92-combined-ca-bundle\") pod \"barbican-worker-58c998ff9-ghm8t\" (UID: \"6633f123-ac1f-4a25-b20d-0c0eda648f92\") " pod="openstack/barbican-worker-58c998ff9-ghm8t" Feb 16 23:04:59 crc 
kubenswrapper[4865]: I0216 23:04:59.711532 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6633f123-ac1f-4a25-b20d-0c0eda648f92-logs\") pod \"barbican-worker-58c998ff9-ghm8t\" (UID: \"6633f123-ac1f-4a25-b20d-0c0eda648f92\") " pod="openstack/barbican-worker-58c998ff9-ghm8t" Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 23:04:59.711574 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14d1a57c-7cda-4753-a6de-fe9a98f4fd02-config-data\") pod \"barbican-keystone-listener-6b6779c894-4z8tf\" (UID: \"14d1a57c-7cda-4753-a6de-fe9a98f4fd02\") " pod="openstack/barbican-keystone-listener-6b6779c894-4z8tf" Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 23:04:59.711630 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6633f123-ac1f-4a25-b20d-0c0eda648f92-config-data-custom\") pod \"barbican-worker-58c998ff9-ghm8t\" (UID: \"6633f123-ac1f-4a25-b20d-0c0eda648f92\") " pod="openstack/barbican-worker-58c998ff9-ghm8t" Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 23:04:59.711685 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6633f123-ac1f-4a25-b20d-0c0eda648f92-config-data\") pod \"barbican-worker-58c998ff9-ghm8t\" (UID: \"6633f123-ac1f-4a25-b20d-0c0eda648f92\") " pod="openstack/barbican-worker-58c998ff9-ghm8t" Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 23:04:59.711717 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkwwv\" (UniqueName: \"kubernetes.io/projected/6633f123-ac1f-4a25-b20d-0c0eda648f92-kube-api-access-zkwwv\") pod \"barbican-worker-58c998ff9-ghm8t\" (UID: \"6633f123-ac1f-4a25-b20d-0c0eda648f92\") " pod="openstack/barbican-worker-58c998ff9-ghm8t" Feb 16 23:04:59 crc 
kubenswrapper[4865]: I0216 23:04:59.711749 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14d1a57c-7cda-4753-a6de-fe9a98f4fd02-logs\") pod \"barbican-keystone-listener-6b6779c894-4z8tf\" (UID: \"14d1a57c-7cda-4753-a6de-fe9a98f4fd02\") " pod="openstack/barbican-keystone-listener-6b6779c894-4z8tf" Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 23:04:59.712482 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14d1a57c-7cda-4753-a6de-fe9a98f4fd02-logs\") pod \"barbican-keystone-listener-6b6779c894-4z8tf\" (UID: \"14d1a57c-7cda-4753-a6de-fe9a98f4fd02\") " pod="openstack/barbican-keystone-listener-6b6779c894-4z8tf" Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 23:04:59.714859 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6633f123-ac1f-4a25-b20d-0c0eda648f92-logs\") pod \"barbican-worker-58c998ff9-ghm8t\" (UID: \"6633f123-ac1f-4a25-b20d-0c0eda648f92\") " pod="openstack/barbican-worker-58c998ff9-ghm8t" Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 23:04:59.716930 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6633f123-ac1f-4a25-b20d-0c0eda648f92-combined-ca-bundle\") pod \"barbican-worker-58c998ff9-ghm8t\" (UID: \"6633f123-ac1f-4a25-b20d-0c0eda648f92\") " pod="openstack/barbican-worker-58c998ff9-ghm8t" Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 23:04:59.726256 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/14d1a57c-7cda-4753-a6de-fe9a98f4fd02-config-data-custom\") pod \"barbican-keystone-listener-6b6779c894-4z8tf\" (UID: \"14d1a57c-7cda-4753-a6de-fe9a98f4fd02\") " pod="openstack/barbican-keystone-listener-6b6779c894-4z8tf" Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 
23:04:59.736006 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6633f123-ac1f-4a25-b20d-0c0eda648f92-config-data-custom\") pod \"barbican-worker-58c998ff9-ghm8t\" (UID: \"6633f123-ac1f-4a25-b20d-0c0eda648f92\") " pod="openstack/barbican-worker-58c998ff9-ghm8t" Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 23:04:59.737206 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6633f123-ac1f-4a25-b20d-0c0eda648f92-config-data\") pod \"barbican-worker-58c998ff9-ghm8t\" (UID: \"6633f123-ac1f-4a25-b20d-0c0eda648f92\") " pod="openstack/barbican-worker-58c998ff9-ghm8t" Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 23:04:59.740911 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14d1a57c-7cda-4753-a6de-fe9a98f4fd02-config-data\") pod \"barbican-keystone-listener-6b6779c894-4z8tf\" (UID: \"14d1a57c-7cda-4753-a6de-fe9a98f4fd02\") " pod="openstack/barbican-keystone-listener-6b6779c894-4z8tf" Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 23:04:59.749574 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkwwv\" (UniqueName: \"kubernetes.io/projected/6633f123-ac1f-4a25-b20d-0c0eda648f92-kube-api-access-zkwwv\") pod \"barbican-worker-58c998ff9-ghm8t\" (UID: \"6633f123-ac1f-4a25-b20d-0c0eda648f92\") " pod="openstack/barbican-worker-58c998ff9-ghm8t" Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 23:04:59.755983 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14d1a57c-7cda-4753-a6de-fe9a98f4fd02-combined-ca-bundle\") pod \"barbican-keystone-listener-6b6779c894-4z8tf\" (UID: \"14d1a57c-7cda-4753-a6de-fe9a98f4fd02\") " pod="openstack/barbican-keystone-listener-6b6779c894-4z8tf" Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 
23:04:59.757056 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwtgf\" (UniqueName: \"kubernetes.io/projected/14d1a57c-7cda-4753-a6de-fe9a98f4fd02-kube-api-access-mwtgf\") pod \"barbican-keystone-listener-6b6779c894-4z8tf\" (UID: \"14d1a57c-7cda-4753-a6de-fe9a98f4fd02\") " pod="openstack/barbican-keystone-listener-6b6779c894-4z8tf" Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 23:04:59.761097 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-jqvgs"] Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 23:04:59.813249 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/89374364-2643-4580-9c44-14e3b944111f-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-jqvgs\" (UID: \"89374364-2643-4580-9c44-14e3b944111f\") " pod="openstack/dnsmasq-dns-5784cf869f-jqvgs" Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 23:04:59.815200 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/89374364-2643-4580-9c44-14e3b944111f-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-jqvgs\" (UID: \"89374364-2643-4580-9c44-14e3b944111f\") " pod="openstack/dnsmasq-dns-5784cf869f-jqvgs" Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 23:04:59.815417 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/89374364-2643-4580-9c44-14e3b944111f-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-jqvgs\" (UID: \"89374364-2643-4580-9c44-14e3b944111f\") " pod="openstack/dnsmasq-dns-5784cf869f-jqvgs" Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 23:04:59.815555 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/89374364-2643-4580-9c44-14e3b944111f-dns-svc\") pod \"dnsmasq-dns-5784cf869f-jqvgs\" (UID: \"89374364-2643-4580-9c44-14e3b944111f\") " pod="openstack/dnsmasq-dns-5784cf869f-jqvgs" Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 23:04:59.815785 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89374364-2643-4580-9c44-14e3b944111f-config\") pod \"dnsmasq-dns-5784cf869f-jqvgs\" (UID: \"89374364-2643-4580-9c44-14e3b944111f\") " pod="openstack/dnsmasq-dns-5784cf869f-jqvgs" Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 23:04:59.815840 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrdz5\" (UniqueName: \"kubernetes.io/projected/89374364-2643-4580-9c44-14e3b944111f-kube-api-access-zrdz5\") pod \"dnsmasq-dns-5784cf869f-jqvgs\" (UID: \"89374364-2643-4580-9c44-14e3b944111f\") " pod="openstack/dnsmasq-dns-5784cf869f-jqvgs" Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 23:04:59.867119 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7f77f74cd-fjwcd"] Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 23:04:59.869338 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7f77f74cd-fjwcd" Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 23:04:59.872733 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 23:04:59.882790 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7f77f74cd-fjwcd"] Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 23:04:59.903419 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 23:04:59.917552 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/89374364-2643-4580-9c44-14e3b944111f-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-jqvgs\" (UID: \"89374364-2643-4580-9c44-14e3b944111f\") " pod="openstack/dnsmasq-dns-5784cf869f-jqvgs" Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 23:04:59.917622 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/89374364-2643-4580-9c44-14e3b944111f-dns-svc\") pod \"dnsmasq-dns-5784cf869f-jqvgs\" (UID: \"89374364-2643-4580-9c44-14e3b944111f\") " pod="openstack/dnsmasq-dns-5784cf869f-jqvgs" Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 23:04:59.917651 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bafaf37c-b943-45c1-9a6f-3b3642a9471c-logs\") pod \"barbican-api-7f77f74cd-fjwcd\" (UID: \"bafaf37c-b943-45c1-9a6f-3b3642a9471c\") " pod="openstack/barbican-api-7f77f74cd-fjwcd" Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 23:04:59.917684 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bafaf37c-b943-45c1-9a6f-3b3642a9471c-config-data-custom\") pod 
\"barbican-api-7f77f74cd-fjwcd\" (UID: \"bafaf37c-b943-45c1-9a6f-3b3642a9471c\") " pod="openstack/barbican-api-7f77f74cd-fjwcd" Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 23:04:59.917723 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bafaf37c-b943-45c1-9a6f-3b3642a9471c-combined-ca-bundle\") pod \"barbican-api-7f77f74cd-fjwcd\" (UID: \"bafaf37c-b943-45c1-9a6f-3b3642a9471c\") " pod="openstack/barbican-api-7f77f74cd-fjwcd" Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 23:04:59.917763 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89374364-2643-4580-9c44-14e3b944111f-config\") pod \"dnsmasq-dns-5784cf869f-jqvgs\" (UID: \"89374364-2643-4580-9c44-14e3b944111f\") " pod="openstack/dnsmasq-dns-5784cf869f-jqvgs" Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 23:04:59.917798 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrdz5\" (UniqueName: \"kubernetes.io/projected/89374364-2643-4580-9c44-14e3b944111f-kube-api-access-zrdz5\") pod \"dnsmasq-dns-5784cf869f-jqvgs\" (UID: \"89374364-2643-4580-9c44-14e3b944111f\") " pod="openstack/dnsmasq-dns-5784cf869f-jqvgs" Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 23:04:59.917818 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zncp\" (UniqueName: \"kubernetes.io/projected/bafaf37c-b943-45c1-9a6f-3b3642a9471c-kube-api-access-8zncp\") pod \"barbican-api-7f77f74cd-fjwcd\" (UID: \"bafaf37c-b943-45c1-9a6f-3b3642a9471c\") " pod="openstack/barbican-api-7f77f74cd-fjwcd" Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 23:04:59.917845 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/89374364-2643-4580-9c44-14e3b944111f-ovsdbserver-nb\") pod 
\"dnsmasq-dns-5784cf869f-jqvgs\" (UID: \"89374364-2643-4580-9c44-14e3b944111f\") " pod="openstack/dnsmasq-dns-5784cf869f-jqvgs" Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 23:04:59.917870 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/89374364-2643-4580-9c44-14e3b944111f-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-jqvgs\" (UID: \"89374364-2643-4580-9c44-14e3b944111f\") " pod="openstack/dnsmasq-dns-5784cf869f-jqvgs" Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 23:04:59.917895 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bafaf37c-b943-45c1-9a6f-3b3642a9471c-config-data\") pod \"barbican-api-7f77f74cd-fjwcd\" (UID: \"bafaf37c-b943-45c1-9a6f-3b3642a9471c\") " pod="openstack/barbican-api-7f77f74cd-fjwcd" Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 23:04:59.918936 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/89374364-2643-4580-9c44-14e3b944111f-dns-svc\") pod \"dnsmasq-dns-5784cf869f-jqvgs\" (UID: \"89374364-2643-4580-9c44-14e3b944111f\") " pod="openstack/dnsmasq-dns-5784cf869f-jqvgs" Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 23:04:59.919953 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-6b6779c894-4z8tf" Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 23:04:59.921058 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/89374364-2643-4580-9c44-14e3b944111f-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-jqvgs\" (UID: \"89374364-2643-4580-9c44-14e3b944111f\") " pod="openstack/dnsmasq-dns-5784cf869f-jqvgs" Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 23:04:59.922025 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/89374364-2643-4580-9c44-14e3b944111f-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-jqvgs\" (UID: \"89374364-2643-4580-9c44-14e3b944111f\") " pod="openstack/dnsmasq-dns-5784cf869f-jqvgs" Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 23:04:59.923008 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/89374364-2643-4580-9c44-14e3b944111f-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-jqvgs\" (UID: \"89374364-2643-4580-9c44-14e3b944111f\") " pod="openstack/dnsmasq-dns-5784cf869f-jqvgs" Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 23:04:59.924062 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89374364-2643-4580-9c44-14e3b944111f-config\") pod \"dnsmasq-dns-5784cf869f-jqvgs\" (UID: \"89374364-2643-4580-9c44-14e3b944111f\") " pod="openstack/dnsmasq-dns-5784cf869f-jqvgs" Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 23:04:59.931761 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-58c998ff9-ghm8t" Feb 16 23:04:59 crc kubenswrapper[4865]: I0216 23:04:59.943724 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrdz5\" (UniqueName: \"kubernetes.io/projected/89374364-2643-4580-9c44-14e3b944111f-kube-api-access-zrdz5\") pod \"dnsmasq-dns-5784cf869f-jqvgs\" (UID: \"89374364-2643-4580-9c44-14e3b944111f\") " pod="openstack/dnsmasq-dns-5784cf869f-jqvgs" Feb 16 23:05:00 crc kubenswrapper[4865]: I0216 23:05:00.019468 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bafaf37c-b943-45c1-9a6f-3b3642a9471c-combined-ca-bundle\") pod \"barbican-api-7f77f74cd-fjwcd\" (UID: \"bafaf37c-b943-45c1-9a6f-3b3642a9471c\") " pod="openstack/barbican-api-7f77f74cd-fjwcd" Feb 16 23:05:00 crc kubenswrapper[4865]: I0216 23:05:00.019569 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zncp\" (UniqueName: \"kubernetes.io/projected/bafaf37c-b943-45c1-9a6f-3b3642a9471c-kube-api-access-8zncp\") pod \"barbican-api-7f77f74cd-fjwcd\" (UID: \"bafaf37c-b943-45c1-9a6f-3b3642a9471c\") " pod="openstack/barbican-api-7f77f74cd-fjwcd" Feb 16 23:05:00 crc kubenswrapper[4865]: I0216 23:05:00.019618 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bafaf37c-b943-45c1-9a6f-3b3642a9471c-config-data\") pod \"barbican-api-7f77f74cd-fjwcd\" (UID: \"bafaf37c-b943-45c1-9a6f-3b3642a9471c\") " pod="openstack/barbican-api-7f77f74cd-fjwcd" Feb 16 23:05:00 crc kubenswrapper[4865]: I0216 23:05:00.019678 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bafaf37c-b943-45c1-9a6f-3b3642a9471c-logs\") pod \"barbican-api-7f77f74cd-fjwcd\" (UID: \"bafaf37c-b943-45c1-9a6f-3b3642a9471c\") " 
pod="openstack/barbican-api-7f77f74cd-fjwcd" Feb 16 23:05:00 crc kubenswrapper[4865]: I0216 23:05:00.019712 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bafaf37c-b943-45c1-9a6f-3b3642a9471c-config-data-custom\") pod \"barbican-api-7f77f74cd-fjwcd\" (UID: \"bafaf37c-b943-45c1-9a6f-3b3642a9471c\") " pod="openstack/barbican-api-7f77f74cd-fjwcd" Feb 16 23:05:00 crc kubenswrapper[4865]: I0216 23:05:00.020839 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bafaf37c-b943-45c1-9a6f-3b3642a9471c-logs\") pod \"barbican-api-7f77f74cd-fjwcd\" (UID: \"bafaf37c-b943-45c1-9a6f-3b3642a9471c\") " pod="openstack/barbican-api-7f77f74cd-fjwcd" Feb 16 23:05:00 crc kubenswrapper[4865]: I0216 23:05:00.024803 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bafaf37c-b943-45c1-9a6f-3b3642a9471c-config-data-custom\") pod \"barbican-api-7f77f74cd-fjwcd\" (UID: \"bafaf37c-b943-45c1-9a6f-3b3642a9471c\") " pod="openstack/barbican-api-7f77f74cd-fjwcd" Feb 16 23:05:00 crc kubenswrapper[4865]: I0216 23:05:00.025256 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bafaf37c-b943-45c1-9a6f-3b3642a9471c-combined-ca-bundle\") pod \"barbican-api-7f77f74cd-fjwcd\" (UID: \"bafaf37c-b943-45c1-9a6f-3b3642a9471c\") " pod="openstack/barbican-api-7f77f74cd-fjwcd" Feb 16 23:05:00 crc kubenswrapper[4865]: I0216 23:05:00.026303 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bafaf37c-b943-45c1-9a6f-3b3642a9471c-config-data\") pod \"barbican-api-7f77f74cd-fjwcd\" (UID: \"bafaf37c-b943-45c1-9a6f-3b3642a9471c\") " pod="openstack/barbican-api-7f77f74cd-fjwcd" Feb 16 23:05:00 crc kubenswrapper[4865]: I0216 
23:05:00.036307 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-jqvgs" Feb 16 23:05:00 crc kubenswrapper[4865]: I0216 23:05:00.039064 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zncp\" (UniqueName: \"kubernetes.io/projected/bafaf37c-b943-45c1-9a6f-3b3642a9471c-kube-api-access-8zncp\") pod \"barbican-api-7f77f74cd-fjwcd\" (UID: \"bafaf37c-b943-45c1-9a6f-3b3642a9471c\") " pod="openstack/barbican-api-7f77f74cd-fjwcd" Feb 16 23:05:00 crc kubenswrapper[4865]: I0216 23:05:00.158691 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"426c422a-fcd9-4686-98fd-f02bfb76d624","Type":"ContainerStarted","Data":"9452eae6c7bfe22044de3e434b85417b0409d7281ed879e7786fe2844b5eb058"} Feb 16 23:05:00 crc kubenswrapper[4865]: I0216 23:05:00.163022 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b2129625-645f-49f5-8d84-9ebc29f478d9","Type":"ContainerStarted","Data":"73dba4f98ca24b4c7f388eddce728f159701e7042fad8cfd59c411f01141c2ff"} Feb 16 23:05:00 crc kubenswrapper[4865]: I0216 23:05:00.168285 4865 generic.go:334] "Generic (PLEG): container finished" podID="1bbb6cce-272f-421d-a4f2-af006f112e21" containerID="fb7af98a2656ededa3d5a7baef4381b99204f4af3fd5a27a78b47308bee55c8f" exitCode=0 Feb 16 23:05:00 crc kubenswrapper[4865]: I0216 23:05:00.168321 4865 generic.go:334] "Generic (PLEG): container finished" podID="1bbb6cce-272f-421d-a4f2-af006f112e21" containerID="1bdf317171916a450298119302d7b50700ec5fb1a920dc1c862a330c6a5d19c8" exitCode=2 Feb 16 23:05:00 crc kubenswrapper[4865]: I0216 23:05:00.168371 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1bbb6cce-272f-421d-a4f2-af006f112e21","Type":"ContainerDied","Data":"fb7af98a2656ededa3d5a7baef4381b99204f4af3fd5a27a78b47308bee55c8f"} Feb 16 23:05:00 crc kubenswrapper[4865]: I0216 
23:05:00.168401 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1bbb6cce-272f-421d-a4f2-af006f112e21","Type":"ContainerDied","Data":"1bdf317171916a450298119302d7b50700ec5fb1a920dc1c862a330c6a5d19c8"} Feb 16 23:05:00 crc kubenswrapper[4865]: I0216 23:05:00.170252 4865 generic.go:334] "Generic (PLEG): container finished" podID="b75dfa8d-c91f-467d-9967-e4cfbe6f10c2" containerID="8bd53e786d459af7d7a9b64bba300a5b8ee27f9d67c6c092c3d18c6f1cd36c8f" exitCode=0 Feb 16 23:05:00 crc kubenswrapper[4865]: I0216 23:05:00.170271 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d68b9cb4c-npsfp" event={"ID":"b75dfa8d-c91f-467d-9967-e4cfbe6f10c2","Type":"ContainerDied","Data":"8bd53e786d459af7d7a9b64bba300a5b8ee27f9d67c6c092c3d18c6f1cd36c8f"} Feb 16 23:05:00 crc kubenswrapper[4865]: I0216 23:05:00.170299 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d68b9cb4c-npsfp" event={"ID":"b75dfa8d-c91f-467d-9967-e4cfbe6f10c2","Type":"ContainerStarted","Data":"2dc28a413639ede2ff1492ad16a49357799e7a1f74886adcb1ba4520a7876de6"} Feb 16 23:05:00 crc kubenswrapper[4865]: I0216 23:05:00.208040 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7f77f74cd-fjwcd" Feb 16 23:05:00 crc kubenswrapper[4865]: I0216 23:05:00.747065 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-58c998ff9-ghm8t"] Feb 16 23:05:00 crc kubenswrapper[4865]: I0216 23:05:00.899117 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 16 23:05:00 crc kubenswrapper[4865]: I0216 23:05:00.942077 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6b6779c894-4z8tf"] Feb 16 23:05:01 crc kubenswrapper[4865]: I0216 23:05:01.016090 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d68b9cb4c-npsfp" Feb 16 23:05:01 crc kubenswrapper[4865]: I0216 23:05:01.064969 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b75dfa8d-c91f-467d-9967-e4cfbe6f10c2-ovsdbserver-nb\") pod \"b75dfa8d-c91f-467d-9967-e4cfbe6f10c2\" (UID: \"b75dfa8d-c91f-467d-9967-e4cfbe6f10c2\") " Feb 16 23:05:01 crc kubenswrapper[4865]: I0216 23:05:01.065078 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqbg5\" (UniqueName: \"kubernetes.io/projected/b75dfa8d-c91f-467d-9967-e4cfbe6f10c2-kube-api-access-xqbg5\") pod \"b75dfa8d-c91f-467d-9967-e4cfbe6f10c2\" (UID: \"b75dfa8d-c91f-467d-9967-e4cfbe6f10c2\") " Feb 16 23:05:01 crc kubenswrapper[4865]: I0216 23:05:01.065139 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b75dfa8d-c91f-467d-9967-e4cfbe6f10c2-dns-svc\") pod \"b75dfa8d-c91f-467d-9967-e4cfbe6f10c2\" (UID: \"b75dfa8d-c91f-467d-9967-e4cfbe6f10c2\") " Feb 16 23:05:01 crc kubenswrapper[4865]: I0216 23:05:01.065236 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b75dfa8d-c91f-467d-9967-e4cfbe6f10c2-config\") pod \"b75dfa8d-c91f-467d-9967-e4cfbe6f10c2\" (UID: \"b75dfa8d-c91f-467d-9967-e4cfbe6f10c2\") " Feb 16 23:05:01 crc kubenswrapper[4865]: I0216 23:05:01.065306 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b75dfa8d-c91f-467d-9967-e4cfbe6f10c2-ovsdbserver-sb\") pod \"b75dfa8d-c91f-467d-9967-e4cfbe6f10c2\" (UID: \"b75dfa8d-c91f-467d-9967-e4cfbe6f10c2\") " Feb 16 23:05:01 crc kubenswrapper[4865]: I0216 23:05:01.065457 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/b75dfa8d-c91f-467d-9967-e4cfbe6f10c2-dns-swift-storage-0\") pod \"b75dfa8d-c91f-467d-9967-e4cfbe6f10c2\" (UID: \"b75dfa8d-c91f-467d-9967-e4cfbe6f10c2\") " Feb 16 23:05:01 crc kubenswrapper[4865]: I0216 23:05:01.079833 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b75dfa8d-c91f-467d-9967-e4cfbe6f10c2-kube-api-access-xqbg5" (OuterVolumeSpecName: "kube-api-access-xqbg5") pod "b75dfa8d-c91f-467d-9967-e4cfbe6f10c2" (UID: "b75dfa8d-c91f-467d-9967-e4cfbe6f10c2"). InnerVolumeSpecName "kube-api-access-xqbg5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:05:01 crc kubenswrapper[4865]: I0216 23:05:01.115730 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b75dfa8d-c91f-467d-9967-e4cfbe6f10c2-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b75dfa8d-c91f-467d-9967-e4cfbe6f10c2" (UID: "b75dfa8d-c91f-467d-9967-e4cfbe6f10c2"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:05:01 crc kubenswrapper[4865]: I0216 23:05:01.118209 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b75dfa8d-c91f-467d-9967-e4cfbe6f10c2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b75dfa8d-c91f-467d-9967-e4cfbe6f10c2" (UID: "b75dfa8d-c91f-467d-9967-e4cfbe6f10c2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:05:01 crc kubenswrapper[4865]: I0216 23:05:01.119770 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b75dfa8d-c91f-467d-9967-e4cfbe6f10c2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b75dfa8d-c91f-467d-9967-e4cfbe6f10c2" (UID: "b75dfa8d-c91f-467d-9967-e4cfbe6f10c2"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:05:01 crc kubenswrapper[4865]: I0216 23:05:01.123665 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b75dfa8d-c91f-467d-9967-e4cfbe6f10c2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b75dfa8d-c91f-467d-9967-e4cfbe6f10c2" (UID: "b75dfa8d-c91f-467d-9967-e4cfbe6f10c2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:05:01 crc kubenswrapper[4865]: I0216 23:05:01.125461 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-jqvgs"] Feb 16 23:05:01 crc kubenswrapper[4865]: I0216 23:05:01.168980 4865 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b75dfa8d-c91f-467d-9967-e4cfbe6f10c2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:01 crc kubenswrapper[4865]: I0216 23:05:01.169039 4865 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b75dfa8d-c91f-467d-9967-e4cfbe6f10c2-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:01 crc kubenswrapper[4865]: I0216 23:05:01.169055 4865 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b75dfa8d-c91f-467d-9967-e4cfbe6f10c2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:01 crc kubenswrapper[4865]: I0216 23:05:01.169067 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqbg5\" (UniqueName: \"kubernetes.io/projected/b75dfa8d-c91f-467d-9967-e4cfbe6f10c2-kube-api-access-xqbg5\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:01 crc kubenswrapper[4865]: I0216 23:05:01.169081 4865 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b75dfa8d-c91f-467d-9967-e4cfbe6f10c2-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 
23:05:01 crc kubenswrapper[4865]: I0216 23:05:01.216865 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b75dfa8d-c91f-467d-9967-e4cfbe6f10c2-config" (OuterVolumeSpecName: "config") pod "b75dfa8d-c91f-467d-9967-e4cfbe6f10c2" (UID: "b75dfa8d-c91f-467d-9967-e4cfbe6f10c2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:05:01 crc kubenswrapper[4865]: I0216 23:05:01.264024 4865 generic.go:334] "Generic (PLEG): container finished" podID="1bbb6cce-272f-421d-a4f2-af006f112e21" containerID="55b1a0af898b5085dcca9774e9ba2e9a3a19e4d58b5df88f7c56cc3f8cfefae2" exitCode=0 Feb 16 23:05:01 crc kubenswrapper[4865]: I0216 23:05:01.264259 4865 generic.go:334] "Generic (PLEG): container finished" podID="1bbb6cce-272f-421d-a4f2-af006f112e21" containerID="04aed157248f61a7a8beeaa92b88351bcb0d08da029c6bdb89e6b2cebd23fad9" exitCode=0 Feb 16 23:05:01 crc kubenswrapper[4865]: I0216 23:05:01.264325 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1bbb6cce-272f-421d-a4f2-af006f112e21","Type":"ContainerDied","Data":"55b1a0af898b5085dcca9774e9ba2e9a3a19e4d58b5df88f7c56cc3f8cfefae2"} Feb 16 23:05:01 crc kubenswrapper[4865]: I0216 23:05:01.264351 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1bbb6cce-272f-421d-a4f2-af006f112e21","Type":"ContainerDied","Data":"04aed157248f61a7a8beeaa92b88351bcb0d08da029c6bdb89e6b2cebd23fad9"} Feb 16 23:05:01 crc kubenswrapper[4865]: I0216 23:05:01.280257 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b75dfa8d-c91f-467d-9967-e4cfbe6f10c2-config\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:01 crc kubenswrapper[4865]: I0216 23:05:01.285697 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d68b9cb4c-npsfp" 
event={"ID":"b75dfa8d-c91f-467d-9967-e4cfbe6f10c2","Type":"ContainerDied","Data":"2dc28a413639ede2ff1492ad16a49357799e7a1f74886adcb1ba4520a7876de6"} Feb 16 23:05:01 crc kubenswrapper[4865]: I0216 23:05:01.294796 4865 scope.go:117] "RemoveContainer" containerID="8bd53e786d459af7d7a9b64bba300a5b8ee27f9d67c6c092c3d18c6f1cd36c8f" Feb 16 23:05:01 crc kubenswrapper[4865]: I0216 23:05:01.285836 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d68b9cb4c-npsfp" Feb 16 23:05:01 crc kubenswrapper[4865]: I0216 23:05:01.296628 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-jqvgs" event={"ID":"89374364-2643-4580-9c44-14e3b944111f","Type":"ContainerStarted","Data":"d2d362c6de8e545253279d35aa8ed383ff05d10411dcdd59880a7c2430489b1b"} Feb 16 23:05:01 crc kubenswrapper[4865]: I0216 23:05:01.296777 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7f77f74cd-fjwcd"] Feb 16 23:05:01 crc kubenswrapper[4865]: I0216 23:05:01.316366 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-58c998ff9-ghm8t" event={"ID":"6633f123-ac1f-4a25-b20d-0c0eda648f92","Type":"ContainerStarted","Data":"5a3825b12af7d4302b2665fe53ed5bd782e023288e21acd07ca73e14467aa2fe"} Feb 16 23:05:01 crc kubenswrapper[4865]: I0216 23:05:01.318914 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"426c422a-fcd9-4686-98fd-f02bfb76d624","Type":"ContainerStarted","Data":"08154bdddb58c2410db30293715d87f384be29626564eb0a0517cbccc8dcec9b"} Feb 16 23:05:01 crc kubenswrapper[4865]: I0216 23:05:01.320715 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6b6779c894-4z8tf" event={"ID":"14d1a57c-7cda-4753-a6de-fe9a98f4fd02","Type":"ContainerStarted","Data":"2c758d9bc435e7113e48a0cba31ebcdf9b93cace9609c19a05f15e3ccdc8ad2a"} Feb 16 23:05:01 crc kubenswrapper[4865]: I0216 
23:05:01.602639 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d68b9cb4c-npsfp"] Feb 16 23:05:01 crc kubenswrapper[4865]: I0216 23:05:01.614172 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-d68b9cb4c-npsfp"] Feb 16 23:05:01 crc kubenswrapper[4865]: I0216 23:05:01.628910 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7ff854866d-9gv97" podUID="17473ac0-4cf6-4751-ac0e-fd73a9fc2c7a" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Feb 16 23:05:01 crc kubenswrapper[4865]: I0216 23:05:01.746201 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 23:05:01 crc kubenswrapper[4865]: I0216 23:05:01.806957 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1bbb6cce-272f-421d-a4f2-af006f112e21-run-httpd\") pod \"1bbb6cce-272f-421d-a4f2-af006f112e21\" (UID: \"1bbb6cce-272f-421d-a4f2-af006f112e21\") " Feb 16 23:05:01 crc kubenswrapper[4865]: I0216 23:05:01.807001 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1bbb6cce-272f-421d-a4f2-af006f112e21-scripts\") pod \"1bbb6cce-272f-421d-a4f2-af006f112e21\" (UID: \"1bbb6cce-272f-421d-a4f2-af006f112e21\") " Feb 16 23:05:01 crc kubenswrapper[4865]: I0216 23:05:01.807057 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6mt7\" (UniqueName: \"kubernetes.io/projected/1bbb6cce-272f-421d-a4f2-af006f112e21-kube-api-access-c6mt7\") pod \"1bbb6cce-272f-421d-a4f2-af006f112e21\" (UID: \"1bbb6cce-272f-421d-a4f2-af006f112e21\") " Feb 16 23:05:01 crc kubenswrapper[4865]: I0216 23:05:01.807498 4865 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1bbb6cce-272f-421d-a4f2-af006f112e21-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "1bbb6cce-272f-421d-a4f2-af006f112e21" (UID: "1bbb6cce-272f-421d-a4f2-af006f112e21"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:05:01 crc kubenswrapper[4865]: I0216 23:05:01.807922 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1bbb6cce-272f-421d-a4f2-af006f112e21-log-httpd\") pod \"1bbb6cce-272f-421d-a4f2-af006f112e21\" (UID: \"1bbb6cce-272f-421d-a4f2-af006f112e21\") " Feb 16 23:05:01 crc kubenswrapper[4865]: I0216 23:05:01.808019 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bbb6cce-272f-421d-a4f2-af006f112e21-combined-ca-bundle\") pod \"1bbb6cce-272f-421d-a4f2-af006f112e21\" (UID: \"1bbb6cce-272f-421d-a4f2-af006f112e21\") " Feb 16 23:05:01 crc kubenswrapper[4865]: I0216 23:05:01.808069 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1bbb6cce-272f-421d-a4f2-af006f112e21-sg-core-conf-yaml\") pod \"1bbb6cce-272f-421d-a4f2-af006f112e21\" (UID: \"1bbb6cce-272f-421d-a4f2-af006f112e21\") " Feb 16 23:05:01 crc kubenswrapper[4865]: I0216 23:05:01.808096 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bbb6cce-272f-421d-a4f2-af006f112e21-config-data\") pod \"1bbb6cce-272f-421d-a4f2-af006f112e21\" (UID: \"1bbb6cce-272f-421d-a4f2-af006f112e21\") " Feb 16 23:05:01 crc kubenswrapper[4865]: I0216 23:05:01.808955 4865 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1bbb6cce-272f-421d-a4f2-af006f112e21-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 
16 23:05:01 crc kubenswrapper[4865]: I0216 23:05:01.809742 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1bbb6cce-272f-421d-a4f2-af006f112e21-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "1bbb6cce-272f-421d-a4f2-af006f112e21" (UID: "1bbb6cce-272f-421d-a4f2-af006f112e21"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:05:01 crc kubenswrapper[4865]: I0216 23:05:01.834018 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bbb6cce-272f-421d-a4f2-af006f112e21-kube-api-access-c6mt7" (OuterVolumeSpecName: "kube-api-access-c6mt7") pod "1bbb6cce-272f-421d-a4f2-af006f112e21" (UID: "1bbb6cce-272f-421d-a4f2-af006f112e21"). InnerVolumeSpecName "kube-api-access-c6mt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:05:01 crc kubenswrapper[4865]: I0216 23:05:01.834362 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bbb6cce-272f-421d-a4f2-af006f112e21-scripts" (OuterVolumeSpecName: "scripts") pod "1bbb6cce-272f-421d-a4f2-af006f112e21" (UID: "1bbb6cce-272f-421d-a4f2-af006f112e21"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:05:01 crc kubenswrapper[4865]: I0216 23:05:01.910567 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1bbb6cce-272f-421d-a4f2-af006f112e21-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:01 crc kubenswrapper[4865]: I0216 23:05:01.910595 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6mt7\" (UniqueName: \"kubernetes.io/projected/1bbb6cce-272f-421d-a4f2-af006f112e21-kube-api-access-c6mt7\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:01 crc kubenswrapper[4865]: I0216 23:05:01.910606 4865 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1bbb6cce-272f-421d-a4f2-af006f112e21-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:01 crc kubenswrapper[4865]: I0216 23:05:01.924900 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bbb6cce-272f-421d-a4f2-af006f112e21-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "1bbb6cce-272f-421d-a4f2-af006f112e21" (UID: "1bbb6cce-272f-421d-a4f2-af006f112e21"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:05:01 crc kubenswrapper[4865]: I0216 23:05:01.971968 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bbb6cce-272f-421d-a4f2-af006f112e21-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1bbb6cce-272f-421d-a4f2-af006f112e21" (UID: "1bbb6cce-272f-421d-a4f2-af006f112e21"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:05:02 crc kubenswrapper[4865]: I0216 23:05:02.012935 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bbb6cce-272f-421d-a4f2-af006f112e21-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:02 crc kubenswrapper[4865]: I0216 23:05:02.013156 4865 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1bbb6cce-272f-421d-a4f2-af006f112e21-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:02 crc kubenswrapper[4865]: I0216 23:05:02.018979 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bbb6cce-272f-421d-a4f2-af006f112e21-config-data" (OuterVolumeSpecName: "config-data") pod "1bbb6cce-272f-421d-a4f2-af006f112e21" (UID: "1bbb6cce-272f-421d-a4f2-af006f112e21"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:05:02 crc kubenswrapper[4865]: I0216 23:05:02.115020 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bbb6cce-272f-421d-a4f2-af006f112e21-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:02 crc kubenswrapper[4865]: I0216 23:05:02.346885 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1bbb6cce-272f-421d-a4f2-af006f112e21","Type":"ContainerDied","Data":"0116c4f146be1dc6f311b6b2877c68df33b33b6134b395049455d8efbb00e4fa"} Feb 16 23:05:02 crc kubenswrapper[4865]: I0216 23:05:02.346934 4865 scope.go:117] "RemoveContainer" containerID="fb7af98a2656ededa3d5a7baef4381b99204f4af3fd5a27a78b47308bee55c8f" Feb 16 23:05:02 crc kubenswrapper[4865]: I0216 23:05:02.347055 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 23:05:02 crc kubenswrapper[4865]: I0216 23:05:02.354356 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7f77f74cd-fjwcd" event={"ID":"bafaf37c-b943-45c1-9a6f-3b3642a9471c","Type":"ContainerStarted","Data":"4003425ce35786dec2edf740410beaa648ccc656f20188ed7f9f93c96392bb92"} Feb 16 23:05:02 crc kubenswrapper[4865]: I0216 23:05:02.354397 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7f77f74cd-fjwcd" event={"ID":"bafaf37c-b943-45c1-9a6f-3b3642a9471c","Type":"ContainerStarted","Data":"f2e0105bc78671b710b1b4c37f6e2c88c05fbf346b3b13433c002a5f41f3e344"} Feb 16 23:05:02 crc kubenswrapper[4865]: I0216 23:05:02.354407 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7f77f74cd-fjwcd" event={"ID":"bafaf37c-b943-45c1-9a6f-3b3642a9471c","Type":"ContainerStarted","Data":"28064323e15183dae5381ac19440fe6e30eed5e4cd50a61d11c3053afeb1634c"} Feb 16 23:05:02 crc kubenswrapper[4865]: I0216 23:05:02.355215 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7f77f74cd-fjwcd" Feb 16 23:05:02 crc kubenswrapper[4865]: I0216 23:05:02.355235 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7f77f74cd-fjwcd" Feb 16 23:05:02 crc kubenswrapper[4865]: I0216 23:05:02.358806 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"426c422a-fcd9-4686-98fd-f02bfb76d624","Type":"ContainerStarted","Data":"7e5e9cbc26f05951ed3e12e28152e611ef236825d33ebb87921e51a0c3a0c4f1"} Feb 16 23:05:02 crc kubenswrapper[4865]: I0216 23:05:02.358969 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="426c422a-fcd9-4686-98fd-f02bfb76d624" containerName="cinder-api-log" containerID="cri-o://08154bdddb58c2410db30293715d87f384be29626564eb0a0517cbccc8dcec9b" gracePeriod=30 
Feb 16 23:05:02 crc kubenswrapper[4865]: I0216 23:05:02.359218 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 16 23:05:02 crc kubenswrapper[4865]: I0216 23:05:02.359297 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="426c422a-fcd9-4686-98fd-f02bfb76d624" containerName="cinder-api" containerID="cri-o://7e5e9cbc26f05951ed3e12e28152e611ef236825d33ebb87921e51a0c3a0c4f1" gracePeriod=30 Feb 16 23:05:02 crc kubenswrapper[4865]: I0216 23:05:02.368875 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b2129625-645f-49f5-8d84-9ebc29f478d9","Type":"ContainerStarted","Data":"9a680096f680185ff0e639baac9b7bcf00410dc9b561dcf55f6fbe32ed6468fa"} Feb 16 23:05:02 crc kubenswrapper[4865]: I0216 23:05:02.391117 4865 generic.go:334] "Generic (PLEG): container finished" podID="89374364-2643-4580-9c44-14e3b944111f" containerID="ca6934ff2e0a441de9f11431397499e0a4f499b337ebf3d31a4170a18ab77512" exitCode=0 Feb 16 23:05:02 crc kubenswrapper[4865]: I0216 23:05:02.391169 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-jqvgs" event={"ID":"89374364-2643-4580-9c44-14e3b944111f","Type":"ContainerDied","Data":"ca6934ff2e0a441de9f11431397499e0a4f499b337ebf3d31a4170a18ab77512"} Feb 16 23:05:02 crc kubenswrapper[4865]: I0216 23:05:02.398624 4865 scope.go:117] "RemoveContainer" containerID="1bdf317171916a450298119302d7b50700ec5fb1a920dc1c862a330c6a5d19c8" Feb 16 23:05:02 crc kubenswrapper[4865]: I0216 23:05:02.407688 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.40766761 podStartE2EDuration="4.40766761s" podCreationTimestamp="2026-02-16 23:04:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 23:05:02.401516336 +0000 
UTC m=+1142.725223297" watchObservedRunningTime="2026-02-16 23:05:02.40766761 +0000 UTC m=+1142.731374571" Feb 16 23:05:02 crc kubenswrapper[4865]: I0216 23:05:02.426164 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7f77f74cd-fjwcd" podStartSLOduration=3.426145675 podStartE2EDuration="3.426145675s" podCreationTimestamp="2026-02-16 23:04:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 23:05:02.379825999 +0000 UTC m=+1142.703532960" watchObservedRunningTime="2026-02-16 23:05:02.426145675 +0000 UTC m=+1142.749852636" Feb 16 23:05:02 crc kubenswrapper[4865]: I0216 23:05:02.519882 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b75dfa8d-c91f-467d-9967-e4cfbe6f10c2" path="/var/lib/kubelet/pods/b75dfa8d-c91f-467d-9967-e4cfbe6f10c2/volumes" Feb 16 23:05:02 crc kubenswrapper[4865]: I0216 23:05:02.524216 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 23:05:02 crc kubenswrapper[4865]: I0216 23:05:02.524242 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 23:05:02 crc kubenswrapper[4865]: I0216 23:05:02.524258 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 23:05:02 crc kubenswrapper[4865]: E0216 23:05:02.524544 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bbb6cce-272f-421d-a4f2-af006f112e21" containerName="ceilometer-central-agent" Feb 16 23:05:02 crc kubenswrapper[4865]: I0216 23:05:02.524555 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bbb6cce-272f-421d-a4f2-af006f112e21" containerName="ceilometer-central-agent" Feb 16 23:05:02 crc kubenswrapper[4865]: E0216 23:05:02.524581 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b75dfa8d-c91f-467d-9967-e4cfbe6f10c2" containerName="init" Feb 16 23:05:02 crc 
kubenswrapper[4865]: I0216 23:05:02.524590 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="b75dfa8d-c91f-467d-9967-e4cfbe6f10c2" containerName="init" Feb 16 23:05:02 crc kubenswrapper[4865]: E0216 23:05:02.524598 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bbb6cce-272f-421d-a4f2-af006f112e21" containerName="proxy-httpd" Feb 16 23:05:02 crc kubenswrapper[4865]: I0216 23:05:02.524604 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bbb6cce-272f-421d-a4f2-af006f112e21" containerName="proxy-httpd" Feb 16 23:05:02 crc kubenswrapper[4865]: E0216 23:05:02.524618 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bbb6cce-272f-421d-a4f2-af006f112e21" containerName="ceilometer-notification-agent" Feb 16 23:05:02 crc kubenswrapper[4865]: I0216 23:05:02.524623 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bbb6cce-272f-421d-a4f2-af006f112e21" containerName="ceilometer-notification-agent" Feb 16 23:05:02 crc kubenswrapper[4865]: E0216 23:05:02.525404 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bbb6cce-272f-421d-a4f2-af006f112e21" containerName="sg-core" Feb 16 23:05:02 crc kubenswrapper[4865]: I0216 23:05:02.525414 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bbb6cce-272f-421d-a4f2-af006f112e21" containerName="sg-core" Feb 16 23:05:02 crc kubenswrapper[4865]: I0216 23:05:02.525590 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bbb6cce-272f-421d-a4f2-af006f112e21" containerName="ceilometer-notification-agent" Feb 16 23:05:02 crc kubenswrapper[4865]: I0216 23:05:02.525599 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="b75dfa8d-c91f-467d-9967-e4cfbe6f10c2" containerName="init" Feb 16 23:05:02 crc kubenswrapper[4865]: I0216 23:05:02.525612 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bbb6cce-272f-421d-a4f2-af006f112e21" containerName="ceilometer-central-agent" Feb 16 23:05:02 crc 
kubenswrapper[4865]: I0216 23:05:02.525618 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bbb6cce-272f-421d-a4f2-af006f112e21" containerName="sg-core" Feb 16 23:05:02 crc kubenswrapper[4865]: I0216 23:05:02.525632 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bbb6cce-272f-421d-a4f2-af006f112e21" containerName="proxy-httpd" Feb 16 23:05:02 crc kubenswrapper[4865]: I0216 23:05:02.528837 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 23:05:02 crc kubenswrapper[4865]: I0216 23:05:02.542633 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 23:05:02 crc kubenswrapper[4865]: I0216 23:05:02.543710 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 23:05:02 crc kubenswrapper[4865]: I0216 23:05:02.545132 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 23:05:02 crc kubenswrapper[4865]: I0216 23:05:02.621200 4865 scope.go:117] "RemoveContainer" containerID="55b1a0af898b5085dcca9774e9ba2e9a3a19e4d58b5df88f7c56cc3f8cfefae2" Feb 16 23:05:02 crc kubenswrapper[4865]: I0216 23:05:02.633600 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6482e483-94d0-4ad1-9893-2dba5d006def-run-httpd\") pod \"ceilometer-0\" (UID: \"6482e483-94d0-4ad1-9893-2dba5d006def\") " pod="openstack/ceilometer-0" Feb 16 23:05:02 crc kubenswrapper[4865]: I0216 23:05:02.633654 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6482e483-94d0-4ad1-9893-2dba5d006def-config-data\") pod \"ceilometer-0\" (UID: \"6482e483-94d0-4ad1-9893-2dba5d006def\") " pod="openstack/ceilometer-0" Feb 16 23:05:02 crc kubenswrapper[4865]: I0216 23:05:02.633691 4865 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6482e483-94d0-4ad1-9893-2dba5d006def-log-httpd\") pod \"ceilometer-0\" (UID: \"6482e483-94d0-4ad1-9893-2dba5d006def\") " pod="openstack/ceilometer-0" Feb 16 23:05:02 crc kubenswrapper[4865]: I0216 23:05:02.633708 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6482e483-94d0-4ad1-9893-2dba5d006def-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6482e483-94d0-4ad1-9893-2dba5d006def\") " pod="openstack/ceilometer-0" Feb 16 23:05:02 crc kubenswrapper[4865]: I0216 23:05:02.633758 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6482e483-94d0-4ad1-9893-2dba5d006def-scripts\") pod \"ceilometer-0\" (UID: \"6482e483-94d0-4ad1-9893-2dba5d006def\") " pod="openstack/ceilometer-0" Feb 16 23:05:02 crc kubenswrapper[4865]: I0216 23:05:02.633782 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zd45v\" (UniqueName: \"kubernetes.io/projected/6482e483-94d0-4ad1-9893-2dba5d006def-kube-api-access-zd45v\") pod \"ceilometer-0\" (UID: \"6482e483-94d0-4ad1-9893-2dba5d006def\") " pod="openstack/ceilometer-0" Feb 16 23:05:02 crc kubenswrapper[4865]: I0216 23:05:02.633824 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6482e483-94d0-4ad1-9893-2dba5d006def-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6482e483-94d0-4ad1-9893-2dba5d006def\") " pod="openstack/ceilometer-0" Feb 16 23:05:02 crc kubenswrapper[4865]: I0216 23:05:02.688496 4865 scope.go:117] "RemoveContainer" containerID="04aed157248f61a7a8beeaa92b88351bcb0d08da029c6bdb89e6b2cebd23fad9" Feb 16 
23:05:02 crc kubenswrapper[4865]: I0216 23:05:02.746147 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zd45v\" (UniqueName: \"kubernetes.io/projected/6482e483-94d0-4ad1-9893-2dba5d006def-kube-api-access-zd45v\") pod \"ceilometer-0\" (UID: \"6482e483-94d0-4ad1-9893-2dba5d006def\") " pod="openstack/ceilometer-0" Feb 16 23:05:02 crc kubenswrapper[4865]: I0216 23:05:02.746312 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6482e483-94d0-4ad1-9893-2dba5d006def-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6482e483-94d0-4ad1-9893-2dba5d006def\") " pod="openstack/ceilometer-0" Feb 16 23:05:02 crc kubenswrapper[4865]: I0216 23:05:02.748207 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6482e483-94d0-4ad1-9893-2dba5d006def-run-httpd\") pod \"ceilometer-0\" (UID: \"6482e483-94d0-4ad1-9893-2dba5d006def\") " pod="openstack/ceilometer-0" Feb 16 23:05:02 crc kubenswrapper[4865]: I0216 23:05:02.748270 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6482e483-94d0-4ad1-9893-2dba5d006def-config-data\") pod \"ceilometer-0\" (UID: \"6482e483-94d0-4ad1-9893-2dba5d006def\") " pod="openstack/ceilometer-0" Feb 16 23:05:02 crc kubenswrapper[4865]: I0216 23:05:02.748364 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6482e483-94d0-4ad1-9893-2dba5d006def-log-httpd\") pod \"ceilometer-0\" (UID: \"6482e483-94d0-4ad1-9893-2dba5d006def\") " pod="openstack/ceilometer-0" Feb 16 23:05:02 crc kubenswrapper[4865]: I0216 23:05:02.748396 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6482e483-94d0-4ad1-9893-2dba5d006def-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6482e483-94d0-4ad1-9893-2dba5d006def\") " pod="openstack/ceilometer-0" Feb 16 23:05:02 crc kubenswrapper[4865]: I0216 23:05:02.748567 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6482e483-94d0-4ad1-9893-2dba5d006def-scripts\") pod \"ceilometer-0\" (UID: \"6482e483-94d0-4ad1-9893-2dba5d006def\") " pod="openstack/ceilometer-0" Feb 16 23:05:02 crc kubenswrapper[4865]: I0216 23:05:02.749286 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6482e483-94d0-4ad1-9893-2dba5d006def-run-httpd\") pod \"ceilometer-0\" (UID: \"6482e483-94d0-4ad1-9893-2dba5d006def\") " pod="openstack/ceilometer-0" Feb 16 23:05:02 crc kubenswrapper[4865]: I0216 23:05:02.751415 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6482e483-94d0-4ad1-9893-2dba5d006def-log-httpd\") pod \"ceilometer-0\" (UID: \"6482e483-94d0-4ad1-9893-2dba5d006def\") " pod="openstack/ceilometer-0" Feb 16 23:05:02 crc kubenswrapper[4865]: I0216 23:05:02.752833 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6482e483-94d0-4ad1-9893-2dba5d006def-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6482e483-94d0-4ad1-9893-2dba5d006def\") " pod="openstack/ceilometer-0" Feb 16 23:05:02 crc kubenswrapper[4865]: I0216 23:05:02.753608 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6482e483-94d0-4ad1-9893-2dba5d006def-scripts\") pod \"ceilometer-0\" (UID: \"6482e483-94d0-4ad1-9893-2dba5d006def\") " pod="openstack/ceilometer-0" Feb 16 23:05:02 crc kubenswrapper[4865]: I0216 23:05:02.757713 4865 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6482e483-94d0-4ad1-9893-2dba5d006def-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6482e483-94d0-4ad1-9893-2dba5d006def\") " pod="openstack/ceilometer-0" Feb 16 23:05:02 crc kubenswrapper[4865]: I0216 23:05:02.760308 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6482e483-94d0-4ad1-9893-2dba5d006def-config-data\") pod \"ceilometer-0\" (UID: \"6482e483-94d0-4ad1-9893-2dba5d006def\") " pod="openstack/ceilometer-0" Feb 16 23:05:02 crc kubenswrapper[4865]: I0216 23:05:02.769586 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zd45v\" (UniqueName: \"kubernetes.io/projected/6482e483-94d0-4ad1-9893-2dba5d006def-kube-api-access-zd45v\") pod \"ceilometer-0\" (UID: \"6482e483-94d0-4ad1-9893-2dba5d006def\") " pod="openstack/ceilometer-0" Feb 16 23:05:02 crc kubenswrapper[4865]: I0216 23:05:02.918052 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 23:05:03 crc kubenswrapper[4865]: I0216 23:05:03.412077 4865 generic.go:334] "Generic (PLEG): container finished" podID="426c422a-fcd9-4686-98fd-f02bfb76d624" containerID="08154bdddb58c2410db30293715d87f384be29626564eb0a0517cbccc8dcec9b" exitCode=143 Feb 16 23:05:03 crc kubenswrapper[4865]: I0216 23:05:03.412160 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"426c422a-fcd9-4686-98fd-f02bfb76d624","Type":"ContainerDied","Data":"08154bdddb58c2410db30293715d87f384be29626564eb0a0517cbccc8dcec9b"} Feb 16 23:05:03 crc kubenswrapper[4865]: I0216 23:05:03.415305 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b2129625-645f-49f5-8d84-9ebc29f478d9","Type":"ContainerStarted","Data":"30280bea884d2ce49a1b8d6f4d99bd5172b8d99c30cd68be8341260c6551c051"} Feb 16 23:05:03 crc kubenswrapper[4865]: I0216 23:05:03.423641 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-jqvgs" event={"ID":"89374364-2643-4580-9c44-14e3b944111f","Type":"ContainerStarted","Data":"7644709298f502212c13dd7d4f322537fee45453d0039691bab02cc33211e571"} Feb 16 23:05:03 crc kubenswrapper[4865]: I0216 23:05:03.423717 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5784cf869f-jqvgs" Feb 16 23:05:03 crc kubenswrapper[4865]: I0216 23:05:03.442218 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.199746506 podStartE2EDuration="5.442195294s" podCreationTimestamp="2026-02-16 23:04:58 +0000 UTC" firstStartedPulling="2026-02-16 23:04:59.32131445 +0000 UTC m=+1139.645021411" lastFinishedPulling="2026-02-16 23:05:00.563763238 +0000 UTC m=+1140.887470199" observedRunningTime="2026-02-16 23:05:03.433882228 +0000 UTC m=+1143.757589209" watchObservedRunningTime="2026-02-16 23:05:03.442195294 +0000 
UTC m=+1143.765902255" Feb 16 23:05:03 crc kubenswrapper[4865]: I0216 23:05:03.455614 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5784cf869f-jqvgs" podStartSLOduration=4.455595784 podStartE2EDuration="4.455595784s" podCreationTimestamp="2026-02-16 23:04:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 23:05:03.453655099 +0000 UTC m=+1143.777362060" watchObservedRunningTime="2026-02-16 23:05:03.455595784 +0000 UTC m=+1143.779302745" Feb 16 23:05:03 crc kubenswrapper[4865]: I0216 23:05:03.482600 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-84797ccfbd-g57dm" Feb 16 23:05:03 crc kubenswrapper[4865]: I0216 23:05:03.574661 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 16 23:05:03 crc kubenswrapper[4865]: I0216 23:05:03.718510 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-854576c7c7-t47q8"] Feb 16 23:05:03 crc kubenswrapper[4865]: I0216 23:05:03.718808 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-854576c7c7-t47q8" podUID="08a3ce49-d4ac-4627-b5cb-65305d115cef" containerName="neutron-api" containerID="cri-o://dd461fd86f89bf6d7bbdc9c6cc260d2415b6afd1ae5495e82170f62b2618bbae" gracePeriod=30 Feb 16 23:05:03 crc kubenswrapper[4865]: I0216 23:05:03.718884 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-854576c7c7-t47q8" podUID="08a3ce49-d4ac-4627-b5cb-65305d115cef" containerName="neutron-httpd" containerID="cri-o://479f004d56b8a57c569210823ae832b6c6a76320b35cb2f027d19fb019168dd8" gracePeriod=30 Feb 16 23:05:03 crc kubenswrapper[4865]: I0216 23:05:03.731443 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-854576c7c7-t47q8" Feb 
16 23:05:03 crc kubenswrapper[4865]: I0216 23:05:03.751409 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-84f9dbdcc7-p5njv"] Feb 16 23:05:03 crc kubenswrapper[4865]: I0216 23:05:03.763685 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-84f9dbdcc7-p5njv"] Feb 16 23:05:03 crc kubenswrapper[4865]: I0216 23:05:03.764065 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-84f9dbdcc7-p5njv" Feb 16 23:05:03 crc kubenswrapper[4865]: I0216 23:05:03.886738 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1828fcf9-f296-46f5-a15d-7280fe715721-combined-ca-bundle\") pod \"neutron-84f9dbdcc7-p5njv\" (UID: \"1828fcf9-f296-46f5-a15d-7280fe715721\") " pod="openstack/neutron-84f9dbdcc7-p5njv" Feb 16 23:05:03 crc kubenswrapper[4865]: I0216 23:05:03.886793 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6dl9\" (UniqueName: \"kubernetes.io/projected/1828fcf9-f296-46f5-a15d-7280fe715721-kube-api-access-j6dl9\") pod \"neutron-84f9dbdcc7-p5njv\" (UID: \"1828fcf9-f296-46f5-a15d-7280fe715721\") " pod="openstack/neutron-84f9dbdcc7-p5njv" Feb 16 23:05:03 crc kubenswrapper[4865]: I0216 23:05:03.887004 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1828fcf9-f296-46f5-a15d-7280fe715721-ovndb-tls-certs\") pod \"neutron-84f9dbdcc7-p5njv\" (UID: \"1828fcf9-f296-46f5-a15d-7280fe715721\") " pod="openstack/neutron-84f9dbdcc7-p5njv" Feb 16 23:05:03 crc kubenswrapper[4865]: I0216 23:05:03.887097 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1828fcf9-f296-46f5-a15d-7280fe715721-internal-tls-certs\") pod 
\"neutron-84f9dbdcc7-p5njv\" (UID: \"1828fcf9-f296-46f5-a15d-7280fe715721\") " pod="openstack/neutron-84f9dbdcc7-p5njv" Feb 16 23:05:03 crc kubenswrapper[4865]: I0216 23:05:03.887178 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1828fcf9-f296-46f5-a15d-7280fe715721-httpd-config\") pod \"neutron-84f9dbdcc7-p5njv\" (UID: \"1828fcf9-f296-46f5-a15d-7280fe715721\") " pod="openstack/neutron-84f9dbdcc7-p5njv" Feb 16 23:05:03 crc kubenswrapper[4865]: I0216 23:05:03.887372 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1828fcf9-f296-46f5-a15d-7280fe715721-config\") pod \"neutron-84f9dbdcc7-p5njv\" (UID: \"1828fcf9-f296-46f5-a15d-7280fe715721\") " pod="openstack/neutron-84f9dbdcc7-p5njv" Feb 16 23:05:03 crc kubenswrapper[4865]: I0216 23:05:03.887777 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1828fcf9-f296-46f5-a15d-7280fe715721-public-tls-certs\") pod \"neutron-84f9dbdcc7-p5njv\" (UID: \"1828fcf9-f296-46f5-a15d-7280fe715721\") " pod="openstack/neutron-84f9dbdcc7-p5njv" Feb 16 23:05:03 crc kubenswrapper[4865]: I0216 23:05:03.990977 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1828fcf9-f296-46f5-a15d-7280fe715721-ovndb-tls-certs\") pod \"neutron-84f9dbdcc7-p5njv\" (UID: \"1828fcf9-f296-46f5-a15d-7280fe715721\") " pod="openstack/neutron-84f9dbdcc7-p5njv" Feb 16 23:05:03 crc kubenswrapper[4865]: I0216 23:05:03.991057 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1828fcf9-f296-46f5-a15d-7280fe715721-internal-tls-certs\") pod \"neutron-84f9dbdcc7-p5njv\" (UID: 
\"1828fcf9-f296-46f5-a15d-7280fe715721\") " pod="openstack/neutron-84f9dbdcc7-p5njv" Feb 16 23:05:03 crc kubenswrapper[4865]: I0216 23:05:03.991082 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1828fcf9-f296-46f5-a15d-7280fe715721-httpd-config\") pod \"neutron-84f9dbdcc7-p5njv\" (UID: \"1828fcf9-f296-46f5-a15d-7280fe715721\") " pod="openstack/neutron-84f9dbdcc7-p5njv" Feb 16 23:05:03 crc kubenswrapper[4865]: I0216 23:05:03.991143 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1828fcf9-f296-46f5-a15d-7280fe715721-config\") pod \"neutron-84f9dbdcc7-p5njv\" (UID: \"1828fcf9-f296-46f5-a15d-7280fe715721\") " pod="openstack/neutron-84f9dbdcc7-p5njv" Feb 16 23:05:03 crc kubenswrapper[4865]: I0216 23:05:03.991176 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1828fcf9-f296-46f5-a15d-7280fe715721-public-tls-certs\") pod \"neutron-84f9dbdcc7-p5njv\" (UID: \"1828fcf9-f296-46f5-a15d-7280fe715721\") " pod="openstack/neutron-84f9dbdcc7-p5njv" Feb 16 23:05:03 crc kubenswrapper[4865]: I0216 23:05:03.991230 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1828fcf9-f296-46f5-a15d-7280fe715721-combined-ca-bundle\") pod \"neutron-84f9dbdcc7-p5njv\" (UID: \"1828fcf9-f296-46f5-a15d-7280fe715721\") " pod="openstack/neutron-84f9dbdcc7-p5njv" Feb 16 23:05:03 crc kubenswrapper[4865]: I0216 23:05:03.991246 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6dl9\" (UniqueName: \"kubernetes.io/projected/1828fcf9-f296-46f5-a15d-7280fe715721-kube-api-access-j6dl9\") pod \"neutron-84f9dbdcc7-p5njv\" (UID: \"1828fcf9-f296-46f5-a15d-7280fe715721\") " pod="openstack/neutron-84f9dbdcc7-p5njv" Feb 16 23:05:03 
crc kubenswrapper[4865]: I0216 23:05:03.998758 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1828fcf9-f296-46f5-a15d-7280fe715721-ovndb-tls-certs\") pod \"neutron-84f9dbdcc7-p5njv\" (UID: \"1828fcf9-f296-46f5-a15d-7280fe715721\") " pod="openstack/neutron-84f9dbdcc7-p5njv" Feb 16 23:05:03 crc kubenswrapper[4865]: I0216 23:05:03.999639 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1828fcf9-f296-46f5-a15d-7280fe715721-public-tls-certs\") pod \"neutron-84f9dbdcc7-p5njv\" (UID: \"1828fcf9-f296-46f5-a15d-7280fe715721\") " pod="openstack/neutron-84f9dbdcc7-p5njv" Feb 16 23:05:04 crc kubenswrapper[4865]: I0216 23:05:04.001363 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1828fcf9-f296-46f5-a15d-7280fe715721-httpd-config\") pod \"neutron-84f9dbdcc7-p5njv\" (UID: \"1828fcf9-f296-46f5-a15d-7280fe715721\") " pod="openstack/neutron-84f9dbdcc7-p5njv" Feb 16 23:05:04 crc kubenswrapper[4865]: I0216 23:05:04.003084 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1828fcf9-f296-46f5-a15d-7280fe715721-internal-tls-certs\") pod \"neutron-84f9dbdcc7-p5njv\" (UID: \"1828fcf9-f296-46f5-a15d-7280fe715721\") " pod="openstack/neutron-84f9dbdcc7-p5njv" Feb 16 23:05:04 crc kubenswrapper[4865]: I0216 23:05:04.004264 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1828fcf9-f296-46f5-a15d-7280fe715721-combined-ca-bundle\") pod \"neutron-84f9dbdcc7-p5njv\" (UID: \"1828fcf9-f296-46f5-a15d-7280fe715721\") " pod="openstack/neutron-84f9dbdcc7-p5njv" Feb 16 23:05:04 crc kubenswrapper[4865]: I0216 23:05:04.007870 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/secret/1828fcf9-f296-46f5-a15d-7280fe715721-config\") pod \"neutron-84f9dbdcc7-p5njv\" (UID: \"1828fcf9-f296-46f5-a15d-7280fe715721\") " pod="openstack/neutron-84f9dbdcc7-p5njv" Feb 16 23:05:04 crc kubenswrapper[4865]: I0216 23:05:04.010136 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6dl9\" (UniqueName: \"kubernetes.io/projected/1828fcf9-f296-46f5-a15d-7280fe715721-kube-api-access-j6dl9\") pod \"neutron-84f9dbdcc7-p5njv\" (UID: \"1828fcf9-f296-46f5-a15d-7280fe715721\") " pod="openstack/neutron-84f9dbdcc7-p5njv" Feb 16 23:05:04 crc kubenswrapper[4865]: I0216 23:05:04.090559 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-84f9dbdcc7-p5njv" Feb 16 23:05:04 crc kubenswrapper[4865]: I0216 23:05:04.359370 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-5dbb7f8956-m76fk" Feb 16 23:05:04 crc kubenswrapper[4865]: I0216 23:05:04.363444 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-54bd7477c8-zrrzr"] Feb 16 23:05:04 crc kubenswrapper[4865]: I0216 23:05:04.412926 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-54bd7477c8-zrrzr" Feb 16 23:05:04 crc kubenswrapper[4865]: I0216 23:05:04.420205 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 16 23:05:04 crc kubenswrapper[4865]: I0216 23:05:04.420542 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 16 23:05:04 crc kubenswrapper[4865]: I0216 23:05:04.511773 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c454ac8-1c92-42d1-a889-6f42e4d73f86-internal-tls-certs\") pod \"barbican-api-54bd7477c8-zrrzr\" (UID: \"8c454ac8-1c92-42d1-a889-6f42e4d73f86\") " pod="openstack/barbican-api-54bd7477c8-zrrzr" Feb 16 23:05:04 crc kubenswrapper[4865]: I0216 23:05:04.511849 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c454ac8-1c92-42d1-a889-6f42e4d73f86-public-tls-certs\") pod \"barbican-api-54bd7477c8-zrrzr\" (UID: \"8c454ac8-1c92-42d1-a889-6f42e4d73f86\") " pod="openstack/barbican-api-54bd7477c8-zrrzr" Feb 16 23:05:04 crc kubenswrapper[4865]: I0216 23:05:04.511882 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c454ac8-1c92-42d1-a889-6f42e4d73f86-combined-ca-bundle\") pod \"barbican-api-54bd7477c8-zrrzr\" (UID: \"8c454ac8-1c92-42d1-a889-6f42e4d73f86\") " pod="openstack/barbican-api-54bd7477c8-zrrzr" Feb 16 23:05:04 crc kubenswrapper[4865]: I0216 23:05:04.511911 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8c454ac8-1c92-42d1-a889-6f42e4d73f86-config-data-custom\") pod \"barbican-api-54bd7477c8-zrrzr\" (UID: 
\"8c454ac8-1c92-42d1-a889-6f42e4d73f86\") " pod="openstack/barbican-api-54bd7477c8-zrrzr" Feb 16 23:05:04 crc kubenswrapper[4865]: I0216 23:05:04.511933 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c454ac8-1c92-42d1-a889-6f42e4d73f86-logs\") pod \"barbican-api-54bd7477c8-zrrzr\" (UID: \"8c454ac8-1c92-42d1-a889-6f42e4d73f86\") " pod="openstack/barbican-api-54bd7477c8-zrrzr" Feb 16 23:05:04 crc kubenswrapper[4865]: I0216 23:05:04.512051 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c454ac8-1c92-42d1-a889-6f42e4d73f86-config-data\") pod \"barbican-api-54bd7477c8-zrrzr\" (UID: \"8c454ac8-1c92-42d1-a889-6f42e4d73f86\") " pod="openstack/barbican-api-54bd7477c8-zrrzr" Feb 16 23:05:04 crc kubenswrapper[4865]: I0216 23:05:04.512078 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7qps\" (UniqueName: \"kubernetes.io/projected/8c454ac8-1c92-42d1-a889-6f42e4d73f86-kube-api-access-d7qps\") pod \"barbican-api-54bd7477c8-zrrzr\" (UID: \"8c454ac8-1c92-42d1-a889-6f42e4d73f86\") " pod="openstack/barbican-api-54bd7477c8-zrrzr" Feb 16 23:05:04 crc kubenswrapper[4865]: I0216 23:05:04.530376 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bbb6cce-272f-421d-a4f2-af006f112e21" path="/var/lib/kubelet/pods/1bbb6cce-272f-421d-a4f2-af006f112e21/volumes" Feb 16 23:05:04 crc kubenswrapper[4865]: I0216 23:05:04.531109 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-54bd7477c8-zrrzr"] Feb 16 23:05:04 crc kubenswrapper[4865]: I0216 23:05:04.613425 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c454ac8-1c92-42d1-a889-6f42e4d73f86-config-data\") pod 
\"barbican-api-54bd7477c8-zrrzr\" (UID: \"8c454ac8-1c92-42d1-a889-6f42e4d73f86\") " pod="openstack/barbican-api-54bd7477c8-zrrzr" Feb 16 23:05:04 crc kubenswrapper[4865]: I0216 23:05:04.613484 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7qps\" (UniqueName: \"kubernetes.io/projected/8c454ac8-1c92-42d1-a889-6f42e4d73f86-kube-api-access-d7qps\") pod \"barbican-api-54bd7477c8-zrrzr\" (UID: \"8c454ac8-1c92-42d1-a889-6f42e4d73f86\") " pod="openstack/barbican-api-54bd7477c8-zrrzr" Feb 16 23:05:04 crc kubenswrapper[4865]: I0216 23:05:04.613546 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c454ac8-1c92-42d1-a889-6f42e4d73f86-internal-tls-certs\") pod \"barbican-api-54bd7477c8-zrrzr\" (UID: \"8c454ac8-1c92-42d1-a889-6f42e4d73f86\") " pod="openstack/barbican-api-54bd7477c8-zrrzr" Feb 16 23:05:04 crc kubenswrapper[4865]: I0216 23:05:04.613581 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c454ac8-1c92-42d1-a889-6f42e4d73f86-public-tls-certs\") pod \"barbican-api-54bd7477c8-zrrzr\" (UID: \"8c454ac8-1c92-42d1-a889-6f42e4d73f86\") " pod="openstack/barbican-api-54bd7477c8-zrrzr" Feb 16 23:05:04 crc kubenswrapper[4865]: I0216 23:05:04.613611 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c454ac8-1c92-42d1-a889-6f42e4d73f86-combined-ca-bundle\") pod \"barbican-api-54bd7477c8-zrrzr\" (UID: \"8c454ac8-1c92-42d1-a889-6f42e4d73f86\") " pod="openstack/barbican-api-54bd7477c8-zrrzr" Feb 16 23:05:04 crc kubenswrapper[4865]: I0216 23:05:04.613640 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8c454ac8-1c92-42d1-a889-6f42e4d73f86-config-data-custom\") pod 
\"barbican-api-54bd7477c8-zrrzr\" (UID: \"8c454ac8-1c92-42d1-a889-6f42e4d73f86\") " pod="openstack/barbican-api-54bd7477c8-zrrzr" Feb 16 23:05:04 crc kubenswrapper[4865]: I0216 23:05:04.613663 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c454ac8-1c92-42d1-a889-6f42e4d73f86-logs\") pod \"barbican-api-54bd7477c8-zrrzr\" (UID: \"8c454ac8-1c92-42d1-a889-6f42e4d73f86\") " pod="openstack/barbican-api-54bd7477c8-zrrzr" Feb 16 23:05:04 crc kubenswrapper[4865]: I0216 23:05:04.614182 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c454ac8-1c92-42d1-a889-6f42e4d73f86-logs\") pod \"barbican-api-54bd7477c8-zrrzr\" (UID: \"8c454ac8-1c92-42d1-a889-6f42e4d73f86\") " pod="openstack/barbican-api-54bd7477c8-zrrzr" Feb 16 23:05:04 crc kubenswrapper[4865]: I0216 23:05:04.617866 4865 generic.go:334] "Generic (PLEG): container finished" podID="45e13675-3a58-42c2-9236-eab676096763" containerID="aea1fe96d19fce750f800cfc99d4cfb04192853bd664b834ffa8d5ecfc916b91" exitCode=137 Feb 16 23:05:04 crc kubenswrapper[4865]: I0216 23:05:04.617961 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7df6bbb45-chfb9" event={"ID":"45e13675-3a58-42c2-9236-eab676096763","Type":"ContainerDied","Data":"aea1fe96d19fce750f800cfc99d4cfb04192853bd664b834ffa8d5ecfc916b91"} Feb 16 23:05:04 crc kubenswrapper[4865]: I0216 23:05:04.637230 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c454ac8-1c92-42d1-a889-6f42e4d73f86-public-tls-certs\") pod \"barbican-api-54bd7477c8-zrrzr\" (UID: \"8c454ac8-1c92-42d1-a889-6f42e4d73f86\") " pod="openstack/barbican-api-54bd7477c8-zrrzr" Feb 16 23:05:04 crc kubenswrapper[4865]: I0216 23:05:04.639074 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8c454ac8-1c92-42d1-a889-6f42e4d73f86-config-data\") pod \"barbican-api-54bd7477c8-zrrzr\" (UID: \"8c454ac8-1c92-42d1-a889-6f42e4d73f86\") " pod="openstack/barbican-api-54bd7477c8-zrrzr" Feb 16 23:05:04 crc kubenswrapper[4865]: I0216 23:05:04.639210 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c454ac8-1c92-42d1-a889-6f42e4d73f86-internal-tls-certs\") pod \"barbican-api-54bd7477c8-zrrzr\" (UID: \"8c454ac8-1c92-42d1-a889-6f42e4d73f86\") " pod="openstack/barbican-api-54bd7477c8-zrrzr" Feb 16 23:05:04 crc kubenswrapper[4865]: I0216 23:05:04.659421 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c454ac8-1c92-42d1-a889-6f42e4d73f86-combined-ca-bundle\") pod \"barbican-api-54bd7477c8-zrrzr\" (UID: \"8c454ac8-1c92-42d1-a889-6f42e4d73f86\") " pod="openstack/barbican-api-54bd7477c8-zrrzr" Feb 16 23:05:04 crc kubenswrapper[4865]: I0216 23:05:04.659877 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8c454ac8-1c92-42d1-a889-6f42e4d73f86-config-data-custom\") pod \"barbican-api-54bd7477c8-zrrzr\" (UID: \"8c454ac8-1c92-42d1-a889-6f42e4d73f86\") " pod="openstack/barbican-api-54bd7477c8-zrrzr" Feb 16 23:05:04 crc kubenswrapper[4865]: I0216 23:05:04.686510 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 23:05:04 crc kubenswrapper[4865]: I0216 23:05:04.689005 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7qps\" (UniqueName: \"kubernetes.io/projected/8c454ac8-1c92-42d1-a889-6f42e4d73f86-kube-api-access-d7qps\") pod \"barbican-api-54bd7477c8-zrrzr\" (UID: \"8c454ac8-1c92-42d1-a889-6f42e4d73f86\") " pod="openstack/barbican-api-54bd7477c8-zrrzr" Feb 16 23:05:04 crc kubenswrapper[4865]: I0216 23:05:04.716066 4865 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6b6779c894-4z8tf" event={"ID":"14d1a57c-7cda-4753-a6de-fe9a98f4fd02","Type":"ContainerStarted","Data":"1a6fed8a2cffc8acf900e4887d93fa62acb1a71cbae88b8e3ed502b85937655b"} Feb 16 23:05:04 crc kubenswrapper[4865]: I0216 23:05:04.762467 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-58c998ff9-ghm8t" event={"ID":"6633f123-ac1f-4a25-b20d-0c0eda648f92","Type":"ContainerStarted","Data":"09fcf30786a04b0a2d8685f4fc3aba69bb88606003ddcf0ab5866d861c9f1ff3"} Feb 16 23:05:04 crc kubenswrapper[4865]: I0216 23:05:04.791564 4865 generic.go:334] "Generic (PLEG): container finished" podID="08a3ce49-d4ac-4627-b5cb-65305d115cef" containerID="479f004d56b8a57c569210823ae832b6c6a76320b35cb2f027d19fb019168dd8" exitCode=0 Feb 16 23:05:04 crc kubenswrapper[4865]: I0216 23:05:04.791680 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-854576c7c7-t47q8" event={"ID":"08a3ce49-d4ac-4627-b5cb-65305d115cef","Type":"ContainerDied","Data":"479f004d56b8a57c569210823ae832b6c6a76320b35cb2f027d19fb019168dd8"} Feb 16 23:05:04 crc kubenswrapper[4865]: I0216 23:05:04.816150 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-54bd7477c8-zrrzr" Feb 16 23:05:04 crc kubenswrapper[4865]: I0216 23:05:04.994986 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-84f9dbdcc7-p5njv"] Feb 16 23:05:05 crc kubenswrapper[4865]: I0216 23:05:05.217943 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5f446879fc-6xxxh" Feb 16 23:05:05 crc kubenswrapper[4865]: I0216 23:05:05.366663 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/70db55c6-255b-4aab-8d14-2675be446bfb-scripts\") pod \"70db55c6-255b-4aab-8d14-2675be446bfb\" (UID: \"70db55c6-255b-4aab-8d14-2675be446bfb\") " Feb 16 23:05:05 crc kubenswrapper[4865]: I0216 23:05:05.367044 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/70db55c6-255b-4aab-8d14-2675be446bfb-horizon-secret-key\") pod \"70db55c6-255b-4aab-8d14-2675be446bfb\" (UID: \"70db55c6-255b-4aab-8d14-2675be446bfb\") " Feb 16 23:05:05 crc kubenswrapper[4865]: I0216 23:05:05.367077 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/70db55c6-255b-4aab-8d14-2675be446bfb-config-data\") pod \"70db55c6-255b-4aab-8d14-2675be446bfb\" (UID: \"70db55c6-255b-4aab-8d14-2675be446bfb\") " Feb 16 23:05:05 crc kubenswrapper[4865]: I0216 23:05:05.367163 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxb5k\" (UniqueName: \"kubernetes.io/projected/70db55c6-255b-4aab-8d14-2675be446bfb-kube-api-access-jxb5k\") pod \"70db55c6-255b-4aab-8d14-2675be446bfb\" (UID: \"70db55c6-255b-4aab-8d14-2675be446bfb\") " Feb 16 23:05:05 crc kubenswrapper[4865]: I0216 23:05:05.367242 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70db55c6-255b-4aab-8d14-2675be446bfb-logs\") pod \"70db55c6-255b-4aab-8d14-2675be446bfb\" (UID: \"70db55c6-255b-4aab-8d14-2675be446bfb\") " Feb 16 23:05:05 crc kubenswrapper[4865]: I0216 23:05:05.372567 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/70db55c6-255b-4aab-8d14-2675be446bfb-logs" (OuterVolumeSpecName: "logs") pod "70db55c6-255b-4aab-8d14-2675be446bfb" (UID: "70db55c6-255b-4aab-8d14-2675be446bfb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:05:05 crc kubenswrapper[4865]: I0216 23:05:05.373852 4865 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70db55c6-255b-4aab-8d14-2675be446bfb-logs\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:05 crc kubenswrapper[4865]: I0216 23:05:05.378981 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70db55c6-255b-4aab-8d14-2675be446bfb-kube-api-access-jxb5k" (OuterVolumeSpecName: "kube-api-access-jxb5k") pod "70db55c6-255b-4aab-8d14-2675be446bfb" (UID: "70db55c6-255b-4aab-8d14-2675be446bfb"). InnerVolumeSpecName "kube-api-access-jxb5k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:05:05 crc kubenswrapper[4865]: I0216 23:05:05.386405 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70db55c6-255b-4aab-8d14-2675be446bfb-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "70db55c6-255b-4aab-8d14-2675be446bfb" (UID: "70db55c6-255b-4aab-8d14-2675be446bfb"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:05:05 crc kubenswrapper[4865]: I0216 23:05:05.420782 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70db55c6-255b-4aab-8d14-2675be446bfb-config-data" (OuterVolumeSpecName: "config-data") pod "70db55c6-255b-4aab-8d14-2675be446bfb" (UID: "70db55c6-255b-4aab-8d14-2675be446bfb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:05:05 crc kubenswrapper[4865]: I0216 23:05:05.461830 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70db55c6-255b-4aab-8d14-2675be446bfb-scripts" (OuterVolumeSpecName: "scripts") pod "70db55c6-255b-4aab-8d14-2675be446bfb" (UID: "70db55c6-255b-4aab-8d14-2675be446bfb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:05:05 crc kubenswrapper[4865]: I0216 23:05:05.475605 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jxb5k\" (UniqueName: \"kubernetes.io/projected/70db55c6-255b-4aab-8d14-2675be446bfb-kube-api-access-jxb5k\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:05 crc kubenswrapper[4865]: I0216 23:05:05.475647 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/70db55c6-255b-4aab-8d14-2675be446bfb-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:05 crc kubenswrapper[4865]: I0216 23:05:05.475657 4865 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/70db55c6-255b-4aab-8d14-2675be446bfb-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:05 crc kubenswrapper[4865]: I0216 23:05:05.475666 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/70db55c6-255b-4aab-8d14-2675be446bfb-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:05 crc kubenswrapper[4865]: I0216 23:05:05.496033 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-54bd7477c8-zrrzr"] Feb 16 23:05:05 crc kubenswrapper[4865]: W0216 23:05:05.509050 4865 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c454ac8_1c92_42d1_a889_6f42e4d73f86.slice/crio-b87f0d70a3796d729f7b93628420bf08571185577491d5b56891c523b18fa10e WatchSource:0}: Error finding container b87f0d70a3796d729f7b93628420bf08571185577491d5b56891c523b18fa10e: Status 404 returned error can't find the container with id b87f0d70a3796d729f7b93628420bf08571185577491d5b56891c523b18fa10e Feb 16 23:05:05 crc kubenswrapper[4865]: I0216 23:05:05.541521 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7df6bbb45-chfb9" Feb 16 23:05:05 crc kubenswrapper[4865]: I0216 23:05:05.679131 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/45e13675-3a58-42c2-9236-eab676096763-config-data\") pod \"45e13675-3a58-42c2-9236-eab676096763\" (UID: \"45e13675-3a58-42c2-9236-eab676096763\") " Feb 16 23:05:05 crc kubenswrapper[4865]: I0216 23:05:05.679202 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/45e13675-3a58-42c2-9236-eab676096763-scripts\") pod \"45e13675-3a58-42c2-9236-eab676096763\" (UID: \"45e13675-3a58-42c2-9236-eab676096763\") " Feb 16 23:05:05 crc kubenswrapper[4865]: I0216 23:05:05.679370 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ztnlf\" (UniqueName: \"kubernetes.io/projected/45e13675-3a58-42c2-9236-eab676096763-kube-api-access-ztnlf\") pod \"45e13675-3a58-42c2-9236-eab676096763\" (UID: \"45e13675-3a58-42c2-9236-eab676096763\") " Feb 16 23:05:05 crc kubenswrapper[4865]: I0216 23:05:05.679407 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45e13675-3a58-42c2-9236-eab676096763-logs\") pod \"45e13675-3a58-42c2-9236-eab676096763\" (UID: \"45e13675-3a58-42c2-9236-eab676096763\") " Feb 16 
23:05:05 crc kubenswrapper[4865]: I0216 23:05:05.679484 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/45e13675-3a58-42c2-9236-eab676096763-horizon-secret-key\") pod \"45e13675-3a58-42c2-9236-eab676096763\" (UID: \"45e13675-3a58-42c2-9236-eab676096763\") " Feb 16 23:05:05 crc kubenswrapper[4865]: I0216 23:05:05.682074 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45e13675-3a58-42c2-9236-eab676096763-logs" (OuterVolumeSpecName: "logs") pod "45e13675-3a58-42c2-9236-eab676096763" (UID: "45e13675-3a58-42c2-9236-eab676096763"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:05:05 crc kubenswrapper[4865]: I0216 23:05:05.685462 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45e13675-3a58-42c2-9236-eab676096763-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "45e13675-3a58-42c2-9236-eab676096763" (UID: "45e13675-3a58-42c2-9236-eab676096763"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:05:05 crc kubenswrapper[4865]: I0216 23:05:05.685841 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45e13675-3a58-42c2-9236-eab676096763-kube-api-access-ztnlf" (OuterVolumeSpecName: "kube-api-access-ztnlf") pod "45e13675-3a58-42c2-9236-eab676096763" (UID: "45e13675-3a58-42c2-9236-eab676096763"). InnerVolumeSpecName "kube-api-access-ztnlf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:05:05 crc kubenswrapper[4865]: I0216 23:05:05.705972 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45e13675-3a58-42c2-9236-eab676096763-config-data" (OuterVolumeSpecName: "config-data") pod "45e13675-3a58-42c2-9236-eab676096763" (UID: "45e13675-3a58-42c2-9236-eab676096763"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:05:05 crc kubenswrapper[4865]: I0216 23:05:05.721246 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45e13675-3a58-42c2-9236-eab676096763-scripts" (OuterVolumeSpecName: "scripts") pod "45e13675-3a58-42c2-9236-eab676096763" (UID: "45e13675-3a58-42c2-9236-eab676096763"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:05:05 crc kubenswrapper[4865]: I0216 23:05:05.783726 4865 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/45e13675-3a58-42c2-9236-eab676096763-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:05 crc kubenswrapper[4865]: I0216 23:05:05.783997 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/45e13675-3a58-42c2-9236-eab676096763-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:05 crc kubenswrapper[4865]: I0216 23:05:05.784054 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/45e13675-3a58-42c2-9236-eab676096763-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:05 crc kubenswrapper[4865]: I0216 23:05:05.784107 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ztnlf\" (UniqueName: \"kubernetes.io/projected/45e13675-3a58-42c2-9236-eab676096763-kube-api-access-ztnlf\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:05 
crc kubenswrapper[4865]: I0216 23:05:05.784164 4865 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45e13675-3a58-42c2-9236-eab676096763-logs\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:05 crc kubenswrapper[4865]: I0216 23:05:05.809480 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6b6779c894-4z8tf" event={"ID":"14d1a57c-7cda-4753-a6de-fe9a98f4fd02","Type":"ContainerStarted","Data":"0e67cb0df9342c32d1c4f18c37176867aba07fdf37e9afbe76ab68a9a479f436"} Feb 16 23:05:05 crc kubenswrapper[4865]: I0216 23:05:05.813658 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-58c998ff9-ghm8t" event={"ID":"6633f123-ac1f-4a25-b20d-0c0eda648f92","Type":"ContainerStarted","Data":"b05514afef80fdd872999e7227ad2058dab725ce9bca0c44ed372d185bb41df4"} Feb 16 23:05:05 crc kubenswrapper[4865]: I0216 23:05:05.823749 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6482e483-94d0-4ad1-9893-2dba5d006def","Type":"ContainerStarted","Data":"b3f140d521598886fc77d9042b317f3d7b0c90b578fe789b1a3567c8a774e1e7"} Feb 16 23:05:05 crc kubenswrapper[4865]: I0216 23:05:05.823794 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6482e483-94d0-4ad1-9893-2dba5d006def","Type":"ContainerStarted","Data":"0bc6ec39a68b03ab090891c1e8767244b7785c6a8ba1f611baa7b673deb72d85"} Feb 16 23:05:05 crc kubenswrapper[4865]: I0216 23:05:05.832180 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-54bd7477c8-zrrzr" event={"ID":"8c454ac8-1c92-42d1-a889-6f42e4d73f86","Type":"ContainerStarted","Data":"89276e464dce7d1a35353929e9f0209780daa385b02f38b21c7b439164fe206e"} Feb 16 23:05:05 crc kubenswrapper[4865]: I0216 23:05:05.832224 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-54bd7477c8-zrrzr" 
event={"ID":"8c454ac8-1c92-42d1-a889-6f42e4d73f86","Type":"ContainerStarted","Data":"b87f0d70a3796d729f7b93628420bf08571185577491d5b56891c523b18fa10e"} Feb 16 23:05:05 crc kubenswrapper[4865]: I0216 23:05:05.832337 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-6b6779c894-4z8tf" podStartSLOduration=3.722316308 podStartE2EDuration="6.83231984s" podCreationTimestamp="2026-02-16 23:04:59 +0000 UTC" firstStartedPulling="2026-02-16 23:05:00.922454106 +0000 UTC m=+1141.246161067" lastFinishedPulling="2026-02-16 23:05:04.032457638 +0000 UTC m=+1144.356164599" observedRunningTime="2026-02-16 23:05:05.831742803 +0000 UTC m=+1146.155449764" watchObservedRunningTime="2026-02-16 23:05:05.83231984 +0000 UTC m=+1146.156026801" Feb 16 23:05:05 crc kubenswrapper[4865]: I0216 23:05:05.846560 4865 generic.go:334] "Generic (PLEG): container finished" podID="45e13675-3a58-42c2-9236-eab676096763" containerID="b82ae186bb0fcb67df24ee52963c8d18b75df3868c5106186bf9a30030b020c7" exitCode=137 Feb 16 23:05:05 crc kubenswrapper[4865]: I0216 23:05:05.846633 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7df6bbb45-chfb9" event={"ID":"45e13675-3a58-42c2-9236-eab676096763","Type":"ContainerDied","Data":"b82ae186bb0fcb67df24ee52963c8d18b75df3868c5106186bf9a30030b020c7"} Feb 16 23:05:05 crc kubenswrapper[4865]: I0216 23:05:05.846659 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7df6bbb45-chfb9" event={"ID":"45e13675-3a58-42c2-9236-eab676096763","Type":"ContainerDied","Data":"12a6e0b7f8b4b7e080ab29deb77180837ee2f2d809531019a6498ceab4c333c3"} Feb 16 23:05:05 crc kubenswrapper[4865]: I0216 23:05:05.846679 4865 scope.go:117] "RemoveContainer" containerID="b82ae186bb0fcb67df24ee52963c8d18b75df3868c5106186bf9a30030b020c7" Feb 16 23:05:05 crc kubenswrapper[4865]: I0216 23:05:05.846823 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7df6bbb45-chfb9" Feb 16 23:05:05 crc kubenswrapper[4865]: I0216 23:05:05.862331 4865 generic.go:334] "Generic (PLEG): container finished" podID="70db55c6-255b-4aab-8d14-2675be446bfb" containerID="7033510b24be1f83bd260184e30f8b290cbbfd73eeba3451affa576310230dd3" exitCode=137 Feb 16 23:05:05 crc kubenswrapper[4865]: I0216 23:05:05.862545 4865 generic.go:334] "Generic (PLEG): container finished" podID="70db55c6-255b-4aab-8d14-2675be446bfb" containerID="d719194b98599c5fa31c1043d6758ccc8f7dc3441f9c9325939f83511cbc8451" exitCode=137 Feb 16 23:05:05 crc kubenswrapper[4865]: I0216 23:05:05.862630 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5f446879fc-6xxxh" event={"ID":"70db55c6-255b-4aab-8d14-2675be446bfb","Type":"ContainerDied","Data":"7033510b24be1f83bd260184e30f8b290cbbfd73eeba3451affa576310230dd3"} Feb 16 23:05:05 crc kubenswrapper[4865]: I0216 23:05:05.862656 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5f446879fc-6xxxh" event={"ID":"70db55c6-255b-4aab-8d14-2675be446bfb","Type":"ContainerDied","Data":"d719194b98599c5fa31c1043d6758ccc8f7dc3441f9c9325939f83511cbc8451"} Feb 16 23:05:05 crc kubenswrapper[4865]: I0216 23:05:05.862666 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5f446879fc-6xxxh" event={"ID":"70db55c6-255b-4aab-8d14-2675be446bfb","Type":"ContainerDied","Data":"7fec2ae4ece50499758b15aa28d390aa7f97ebfaeaf98120d3872d2c60771be0"} Feb 16 23:05:05 crc kubenswrapper[4865]: I0216 23:05:05.862378 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5f446879fc-6xxxh" Feb 16 23:05:05 crc kubenswrapper[4865]: I0216 23:05:05.868156 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-58c998ff9-ghm8t" podStartSLOduration=3.657636821 podStartE2EDuration="6.868136187s" podCreationTimestamp="2026-02-16 23:04:59 +0000 UTC" firstStartedPulling="2026-02-16 23:05:00.818413131 +0000 UTC m=+1141.142120092" lastFinishedPulling="2026-02-16 23:05:04.028912497 +0000 UTC m=+1144.352619458" observedRunningTime="2026-02-16 23:05:05.853271345 +0000 UTC m=+1146.176978306" watchObservedRunningTime="2026-02-16 23:05:05.868136187 +0000 UTC m=+1146.191843148" Feb 16 23:05:05 crc kubenswrapper[4865]: I0216 23:05:05.885032 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-84f9dbdcc7-p5njv" event={"ID":"1828fcf9-f296-46f5-a15d-7280fe715721","Type":"ContainerStarted","Data":"b539bc7ad82e0e7807cb369f0d1377f6fd90e481d1ba6c09a4a16877a823cc8d"} Feb 16 23:05:05 crc kubenswrapper[4865]: I0216 23:05:05.885081 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-84f9dbdcc7-p5njv" Feb 16 23:05:05 crc kubenswrapper[4865]: I0216 23:05:05.885095 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-84f9dbdcc7-p5njv" event={"ID":"1828fcf9-f296-46f5-a15d-7280fe715721","Type":"ContainerStarted","Data":"9ed62bb89de5a04c62eb658bbb18625535d42ac299d02dd16493140e9797c098"} Feb 16 23:05:05 crc kubenswrapper[4865]: I0216 23:05:05.885105 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-84f9dbdcc7-p5njv" event={"ID":"1828fcf9-f296-46f5-a15d-7280fe715721","Type":"ContainerStarted","Data":"e9d15d69bd5e2ce1f41ee9b786ef2f08840641587f73d901ab2f340dd1de6be7"} Feb 16 23:05:05 crc kubenswrapper[4865]: I0216 23:05:05.907337 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7df6bbb45-chfb9"] Feb 16 23:05:05 crc 
kubenswrapper[4865]: I0216 23:05:05.947340 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7df6bbb45-chfb9"] Feb 16 23:05:05 crc kubenswrapper[4865]: I0216 23:05:05.957113 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-84f9dbdcc7-p5njv" podStartSLOduration=2.957089553 podStartE2EDuration="2.957089553s" podCreationTimestamp="2026-02-16 23:05:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 23:05:05.907796093 +0000 UTC m=+1146.231503054" watchObservedRunningTime="2026-02-16 23:05:05.957089553 +0000 UTC m=+1146.280796514" Feb 16 23:05:05 crc kubenswrapper[4865]: I0216 23:05:05.966333 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5f446879fc-6xxxh"] Feb 16 23:05:05 crc kubenswrapper[4865]: I0216 23:05:05.975269 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5f446879fc-6xxxh"] Feb 16 23:05:06 crc kubenswrapper[4865]: I0216 23:05:06.086061 4865 scope.go:117] "RemoveContainer" containerID="aea1fe96d19fce750f800cfc99d4cfb04192853bd664b834ffa8d5ecfc916b91" Feb 16 23:05:06 crc kubenswrapper[4865]: I0216 23:05:06.105068 4865 scope.go:117] "RemoveContainer" containerID="b82ae186bb0fcb67df24ee52963c8d18b75df3868c5106186bf9a30030b020c7" Feb 16 23:05:06 crc kubenswrapper[4865]: E0216 23:05:06.105491 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b82ae186bb0fcb67df24ee52963c8d18b75df3868c5106186bf9a30030b020c7\": container with ID starting with b82ae186bb0fcb67df24ee52963c8d18b75df3868c5106186bf9a30030b020c7 not found: ID does not exist" containerID="b82ae186bb0fcb67df24ee52963c8d18b75df3868c5106186bf9a30030b020c7" Feb 16 23:05:06 crc kubenswrapper[4865]: I0216 23:05:06.105536 4865 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b82ae186bb0fcb67df24ee52963c8d18b75df3868c5106186bf9a30030b020c7"} err="failed to get container status \"b82ae186bb0fcb67df24ee52963c8d18b75df3868c5106186bf9a30030b020c7\": rpc error: code = NotFound desc = could not find container \"b82ae186bb0fcb67df24ee52963c8d18b75df3868c5106186bf9a30030b020c7\": container with ID starting with b82ae186bb0fcb67df24ee52963c8d18b75df3868c5106186bf9a30030b020c7 not found: ID does not exist" Feb 16 23:05:06 crc kubenswrapper[4865]: I0216 23:05:06.105562 4865 scope.go:117] "RemoveContainer" containerID="aea1fe96d19fce750f800cfc99d4cfb04192853bd664b834ffa8d5ecfc916b91" Feb 16 23:05:06 crc kubenswrapper[4865]: E0216 23:05:06.105852 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aea1fe96d19fce750f800cfc99d4cfb04192853bd664b834ffa8d5ecfc916b91\": container with ID starting with aea1fe96d19fce750f800cfc99d4cfb04192853bd664b834ffa8d5ecfc916b91 not found: ID does not exist" containerID="aea1fe96d19fce750f800cfc99d4cfb04192853bd664b834ffa8d5ecfc916b91" Feb 16 23:05:06 crc kubenswrapper[4865]: I0216 23:05:06.105884 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aea1fe96d19fce750f800cfc99d4cfb04192853bd664b834ffa8d5ecfc916b91"} err="failed to get container status \"aea1fe96d19fce750f800cfc99d4cfb04192853bd664b834ffa8d5ecfc916b91\": rpc error: code = NotFound desc = could not find container \"aea1fe96d19fce750f800cfc99d4cfb04192853bd664b834ffa8d5ecfc916b91\": container with ID starting with aea1fe96d19fce750f800cfc99d4cfb04192853bd664b834ffa8d5ecfc916b91 not found: ID does not exist" Feb 16 23:05:06 crc kubenswrapper[4865]: I0216 23:05:06.105907 4865 scope.go:117] "RemoveContainer" containerID="7033510b24be1f83bd260184e30f8b290cbbfd73eeba3451affa576310230dd3" Feb 16 23:05:06 crc kubenswrapper[4865]: I0216 23:05:06.154510 4865 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/neutron-854576c7c7-t47q8" podUID="08a3ce49-d4ac-4627-b5cb-65305d115cef" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.155:9696/\": dial tcp 10.217.0.155:9696: connect: connection refused" Feb 16 23:05:06 crc kubenswrapper[4865]: I0216 23:05:06.289391 4865 scope.go:117] "RemoveContainer" containerID="d719194b98599c5fa31c1043d6758ccc8f7dc3441f9c9325939f83511cbc8451" Feb 16 23:05:06 crc kubenswrapper[4865]: I0216 23:05:06.336470 4865 scope.go:117] "RemoveContainer" containerID="7033510b24be1f83bd260184e30f8b290cbbfd73eeba3451affa576310230dd3" Feb 16 23:05:06 crc kubenswrapper[4865]: E0216 23:05:06.336969 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7033510b24be1f83bd260184e30f8b290cbbfd73eeba3451affa576310230dd3\": container with ID starting with 7033510b24be1f83bd260184e30f8b290cbbfd73eeba3451affa576310230dd3 not found: ID does not exist" containerID="7033510b24be1f83bd260184e30f8b290cbbfd73eeba3451affa576310230dd3" Feb 16 23:05:06 crc kubenswrapper[4865]: I0216 23:05:06.337011 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7033510b24be1f83bd260184e30f8b290cbbfd73eeba3451affa576310230dd3"} err="failed to get container status \"7033510b24be1f83bd260184e30f8b290cbbfd73eeba3451affa576310230dd3\": rpc error: code = NotFound desc = could not find container \"7033510b24be1f83bd260184e30f8b290cbbfd73eeba3451affa576310230dd3\": container with ID starting with 7033510b24be1f83bd260184e30f8b290cbbfd73eeba3451affa576310230dd3 not found: ID does not exist" Feb 16 23:05:06 crc kubenswrapper[4865]: I0216 23:05:06.337038 4865 scope.go:117] "RemoveContainer" containerID="d719194b98599c5fa31c1043d6758ccc8f7dc3441f9c9325939f83511cbc8451" Feb 16 23:05:06 crc kubenswrapper[4865]: E0216 23:05:06.337321 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = 
could not find container \"d719194b98599c5fa31c1043d6758ccc8f7dc3441f9c9325939f83511cbc8451\": container with ID starting with d719194b98599c5fa31c1043d6758ccc8f7dc3441f9c9325939f83511cbc8451 not found: ID does not exist" containerID="d719194b98599c5fa31c1043d6758ccc8f7dc3441f9c9325939f83511cbc8451" Feb 16 23:05:06 crc kubenswrapper[4865]: I0216 23:05:06.337358 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d719194b98599c5fa31c1043d6758ccc8f7dc3441f9c9325939f83511cbc8451"} err="failed to get container status \"d719194b98599c5fa31c1043d6758ccc8f7dc3441f9c9325939f83511cbc8451\": rpc error: code = NotFound desc = could not find container \"d719194b98599c5fa31c1043d6758ccc8f7dc3441f9c9325939f83511cbc8451\": container with ID starting with d719194b98599c5fa31c1043d6758ccc8f7dc3441f9c9325939f83511cbc8451 not found: ID does not exist" Feb 16 23:05:06 crc kubenswrapper[4865]: I0216 23:05:06.337383 4865 scope.go:117] "RemoveContainer" containerID="7033510b24be1f83bd260184e30f8b290cbbfd73eeba3451affa576310230dd3" Feb 16 23:05:06 crc kubenswrapper[4865]: I0216 23:05:06.337614 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7033510b24be1f83bd260184e30f8b290cbbfd73eeba3451affa576310230dd3"} err="failed to get container status \"7033510b24be1f83bd260184e30f8b290cbbfd73eeba3451affa576310230dd3\": rpc error: code = NotFound desc = could not find container \"7033510b24be1f83bd260184e30f8b290cbbfd73eeba3451affa576310230dd3\": container with ID starting with 7033510b24be1f83bd260184e30f8b290cbbfd73eeba3451affa576310230dd3 not found: ID does not exist" Feb 16 23:05:06 crc kubenswrapper[4865]: I0216 23:05:06.337636 4865 scope.go:117] "RemoveContainer" containerID="d719194b98599c5fa31c1043d6758ccc8f7dc3441f9c9325939f83511cbc8451" Feb 16 23:05:06 crc kubenswrapper[4865]: I0216 23:05:06.337865 4865 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"d719194b98599c5fa31c1043d6758ccc8f7dc3441f9c9325939f83511cbc8451"} err="failed to get container status \"d719194b98599c5fa31c1043d6758ccc8f7dc3441f9c9325939f83511cbc8451\": rpc error: code = NotFound desc = could not find container \"d719194b98599c5fa31c1043d6758ccc8f7dc3441f9c9325939f83511cbc8451\": container with ID starting with d719194b98599c5fa31c1043d6758ccc8f7dc3441f9c9325939f83511cbc8451 not found: ID does not exist" Feb 16 23:05:06 crc kubenswrapper[4865]: I0216 23:05:06.431916 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45e13675-3a58-42c2-9236-eab676096763" path="/var/lib/kubelet/pods/45e13675-3a58-42c2-9236-eab676096763/volumes" Feb 16 23:05:06 crc kubenswrapper[4865]: I0216 23:05:06.432659 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70db55c6-255b-4aab-8d14-2675be446bfb" path="/var/lib/kubelet/pods/70db55c6-255b-4aab-8d14-2675be446bfb/volumes" Feb 16 23:05:06 crc kubenswrapper[4865]: I0216 23:05:06.894185 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6482e483-94d0-4ad1-9893-2dba5d006def","Type":"ContainerStarted","Data":"73f69acc01c940e785cc2e29b00b83dbfbe2f79c5a7893b5582c36c56e57b007"} Feb 16 23:05:06 crc kubenswrapper[4865]: I0216 23:05:06.897031 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-54bd7477c8-zrrzr" event={"ID":"8c454ac8-1c92-42d1-a889-6f42e4d73f86","Type":"ContainerStarted","Data":"44eb2490c92e472280b59da7d9f10da909e2a95a0f78d3423304ecb5ede39e3d"} Feb 16 23:05:06 crc kubenswrapper[4865]: I0216 23:05:06.897090 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-54bd7477c8-zrrzr" Feb 16 23:05:06 crc kubenswrapper[4865]: I0216 23:05:06.927033 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-5dbb7f8956-m76fk" Feb 16 23:05:06 crc kubenswrapper[4865]: I0216 23:05:06.948993 
4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-54bd7477c8-zrrzr" podStartSLOduration=2.948952145 podStartE2EDuration="2.948952145s" podCreationTimestamp="2026-02-16 23:05:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 23:05:06.918604383 +0000 UTC m=+1147.242311354" watchObservedRunningTime="2026-02-16 23:05:06.948952145 +0000 UTC m=+1147.272659106" Feb 16 23:05:07 crc kubenswrapper[4865]: I0216 23:05:07.914376 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6482e483-94d0-4ad1-9893-2dba5d006def","Type":"ContainerStarted","Data":"12b956c3e0b770f84fd66d1523afae9683cb23fdf2697a095dae25f90d47b79d"} Feb 16 23:05:07 crc kubenswrapper[4865]: I0216 23:05:07.914687 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-54bd7477c8-zrrzr" Feb 16 23:05:08 crc kubenswrapper[4865]: I0216 23:05:08.811234 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 16 23:05:08 crc kubenswrapper[4865]: I0216 23:05:08.874771 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 23:05:08 crc kubenswrapper[4865]: I0216 23:05:08.924388 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6482e483-94d0-4ad1-9893-2dba5d006def","Type":"ContainerStarted","Data":"3292a7a1840a5a37f62b68eb283043567581fc03c57c17aeaf3480e5efb1fca8"} Feb 16 23:05:08 crc kubenswrapper[4865]: I0216 23:05:08.924525 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="b2129625-645f-49f5-8d84-9ebc29f478d9" containerName="cinder-scheduler" containerID="cri-o://9a680096f680185ff0e639baac9b7bcf00410dc9b561dcf55f6fbe32ed6468fa" gracePeriod=30 Feb 16 23:05:08 crc 
kubenswrapper[4865]: I0216 23:05:08.924633 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="b2129625-645f-49f5-8d84-9ebc29f478d9" containerName="probe" containerID="cri-o://30280bea884d2ce49a1b8d6f4d99bd5172b8d99c30cd68be8341260c6551c051" gracePeriod=30 Feb 16 23:05:08 crc kubenswrapper[4865]: I0216 23:05:08.962865 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.4550279440000002 podStartE2EDuration="6.962836805s" podCreationTimestamp="2026-02-16 23:05:02 +0000 UTC" firstStartedPulling="2026-02-16 23:05:04.884398856 +0000 UTC m=+1145.208105817" lastFinishedPulling="2026-02-16 23:05:08.392207697 +0000 UTC m=+1148.715914678" observedRunningTime="2026-02-16 23:05:08.955356622 +0000 UTC m=+1149.279063593" watchObservedRunningTime="2026-02-16 23:05:08.962836805 +0000 UTC m=+1149.286543796" Feb 16 23:05:09 crc kubenswrapper[4865]: I0216 23:05:09.943081 4865 generic.go:334] "Generic (PLEG): container finished" podID="b2129625-645f-49f5-8d84-9ebc29f478d9" containerID="30280bea884d2ce49a1b8d6f4d99bd5172b8d99c30cd68be8341260c6551c051" exitCode=0 Feb 16 23:05:09 crc kubenswrapper[4865]: I0216 23:05:09.943178 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b2129625-645f-49f5-8d84-9ebc29f478d9","Type":"ContainerDied","Data":"30280bea884d2ce49a1b8d6f4d99bd5172b8d99c30cd68be8341260c6551c051"} Feb 16 23:05:09 crc kubenswrapper[4865]: I0216 23:05:09.951551 4865 generic.go:334] "Generic (PLEG): container finished" podID="08a3ce49-d4ac-4627-b5cb-65305d115cef" containerID="dd461fd86f89bf6d7bbdc9c6cc260d2415b6afd1ae5495e82170f62b2618bbae" exitCode=0 Feb 16 23:05:09 crc kubenswrapper[4865]: I0216 23:05:09.952085 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-854576c7c7-t47q8" 
event={"ID":"08a3ce49-d4ac-4627-b5cb-65305d115cef","Type":"ContainerDied","Data":"dd461fd86f89bf6d7bbdc9c6cc260d2415b6afd1ae5495e82170f62b2618bbae"} Feb 16 23:05:09 crc kubenswrapper[4865]: I0216 23:05:09.952225 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 23:05:10 crc kubenswrapper[4865]: I0216 23:05:10.041384 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5784cf869f-jqvgs" Feb 16 23:05:10 crc kubenswrapper[4865]: I0216 23:05:10.148988 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-ljkjc"] Feb 16 23:05:10 crc kubenswrapper[4865]: I0216 23:05:10.149255 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-84b966f6c9-ljkjc" podUID="fe2a586d-4955-4b3d-8299-d5ea37cfe736" containerName="dnsmasq-dns" containerID="cri-o://dcfde899eb7c86d6c47684aa3e3841f201dc92ccc23dbc31377bd673066f16a0" gracePeriod=10 Feb 16 23:05:10 crc kubenswrapper[4865]: I0216 23:05:10.419581 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-854576c7c7-t47q8" Feb 16 23:05:10 crc kubenswrapper[4865]: I0216 23:05:10.444860 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjt2j\" (UniqueName: \"kubernetes.io/projected/08a3ce49-d4ac-4627-b5cb-65305d115cef-kube-api-access-kjt2j\") pod \"08a3ce49-d4ac-4627-b5cb-65305d115cef\" (UID: \"08a3ce49-d4ac-4627-b5cb-65305d115cef\") " Feb 16 23:05:10 crc kubenswrapper[4865]: I0216 23:05:10.444941 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/08a3ce49-d4ac-4627-b5cb-65305d115cef-internal-tls-certs\") pod \"08a3ce49-d4ac-4627-b5cb-65305d115cef\" (UID: \"08a3ce49-d4ac-4627-b5cb-65305d115cef\") " Feb 16 23:05:10 crc kubenswrapper[4865]: I0216 23:05:10.444973 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/08a3ce49-d4ac-4627-b5cb-65305d115cef-httpd-config\") pod \"08a3ce49-d4ac-4627-b5cb-65305d115cef\" (UID: \"08a3ce49-d4ac-4627-b5cb-65305d115cef\") " Feb 16 23:05:10 crc kubenswrapper[4865]: I0216 23:05:10.445043 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08a3ce49-d4ac-4627-b5cb-65305d115cef-combined-ca-bundle\") pod \"08a3ce49-d4ac-4627-b5cb-65305d115cef\" (UID: \"08a3ce49-d4ac-4627-b5cb-65305d115cef\") " Feb 16 23:05:10 crc kubenswrapper[4865]: I0216 23:05:10.445072 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/08a3ce49-d4ac-4627-b5cb-65305d115cef-public-tls-certs\") pod \"08a3ce49-d4ac-4627-b5cb-65305d115cef\" (UID: \"08a3ce49-d4ac-4627-b5cb-65305d115cef\") " Feb 16 23:05:10 crc kubenswrapper[4865]: I0216 23:05:10.445098 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/secret/08a3ce49-d4ac-4627-b5cb-65305d115cef-config\") pod \"08a3ce49-d4ac-4627-b5cb-65305d115cef\" (UID: \"08a3ce49-d4ac-4627-b5cb-65305d115cef\") " Feb 16 23:05:10 crc kubenswrapper[4865]: I0216 23:05:10.445116 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/08a3ce49-d4ac-4627-b5cb-65305d115cef-ovndb-tls-certs\") pod \"08a3ce49-d4ac-4627-b5cb-65305d115cef\" (UID: \"08a3ce49-d4ac-4627-b5cb-65305d115cef\") " Feb 16 23:05:10 crc kubenswrapper[4865]: I0216 23:05:10.452536 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08a3ce49-d4ac-4627-b5cb-65305d115cef-kube-api-access-kjt2j" (OuterVolumeSpecName: "kube-api-access-kjt2j") pod "08a3ce49-d4ac-4627-b5cb-65305d115cef" (UID: "08a3ce49-d4ac-4627-b5cb-65305d115cef"). InnerVolumeSpecName "kube-api-access-kjt2j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:05:10 crc kubenswrapper[4865]: I0216 23:05:10.479586 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 16 23:05:10 crc kubenswrapper[4865]: I0216 23:05:10.495408 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08a3ce49-d4ac-4627-b5cb-65305d115cef-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "08a3ce49-d4ac-4627-b5cb-65305d115cef" (UID: "08a3ce49-d4ac-4627-b5cb-65305d115cef"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:05:10 crc kubenswrapper[4865]: I0216 23:05:10.546672 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08a3ce49-d4ac-4627-b5cb-65305d115cef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "08a3ce49-d4ac-4627-b5cb-65305d115cef" (UID: "08a3ce49-d4ac-4627-b5cb-65305d115cef"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:05:10 crc kubenswrapper[4865]: I0216 23:05:10.572741 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08a3ce49-d4ac-4627-b5cb-65305d115cef-combined-ca-bundle\") pod \"08a3ce49-d4ac-4627-b5cb-65305d115cef\" (UID: \"08a3ce49-d4ac-4627-b5cb-65305d115cef\") " Feb 16 23:05:10 crc kubenswrapper[4865]: I0216 23:05:10.574168 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjt2j\" (UniqueName: \"kubernetes.io/projected/08a3ce49-d4ac-4627-b5cb-65305d115cef-kube-api-access-kjt2j\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:10 crc kubenswrapper[4865]: I0216 23:05:10.574189 4865 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/08a3ce49-d4ac-4627-b5cb-65305d115cef-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:10 crc kubenswrapper[4865]: W0216 23:05:10.574463 4865 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/08a3ce49-d4ac-4627-b5cb-65305d115cef/volumes/kubernetes.io~secret/combined-ca-bundle Feb 16 23:05:10 crc kubenswrapper[4865]: I0216 23:05:10.574477 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08a3ce49-d4ac-4627-b5cb-65305d115cef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "08a3ce49-d4ac-4627-b5cb-65305d115cef" (UID: "08a3ce49-d4ac-4627-b5cb-65305d115cef"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:05:10 crc kubenswrapper[4865]: I0216 23:05:10.579318 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08a3ce49-d4ac-4627-b5cb-65305d115cef-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "08a3ce49-d4ac-4627-b5cb-65305d115cef" (UID: "08a3ce49-d4ac-4627-b5cb-65305d115cef"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:05:10 crc kubenswrapper[4865]: I0216 23:05:10.589484 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08a3ce49-d4ac-4627-b5cb-65305d115cef-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "08a3ce49-d4ac-4627-b5cb-65305d115cef" (UID: "08a3ce49-d4ac-4627-b5cb-65305d115cef"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:05:10 crc kubenswrapper[4865]: I0216 23:05:10.590936 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08a3ce49-d4ac-4627-b5cb-65305d115cef-config" (OuterVolumeSpecName: "config") pod "08a3ce49-d4ac-4627-b5cb-65305d115cef" (UID: "08a3ce49-d4ac-4627-b5cb-65305d115cef"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:05:10 crc kubenswrapper[4865]: I0216 23:05:10.640259 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08a3ce49-d4ac-4627-b5cb-65305d115cef-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "08a3ce49-d4ac-4627-b5cb-65305d115cef" (UID: "08a3ce49-d4ac-4627-b5cb-65305d115cef"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:05:10 crc kubenswrapper[4865]: I0216 23:05:10.679108 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b2129625-645f-49f5-8d84-9ebc29f478d9-etc-machine-id\") pod \"b2129625-645f-49f5-8d84-9ebc29f478d9\" (UID: \"b2129625-645f-49f5-8d84-9ebc29f478d9\") " Feb 16 23:05:10 crc kubenswrapper[4865]: I0216 23:05:10.679240 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbbsx\" (UniqueName: \"kubernetes.io/projected/b2129625-645f-49f5-8d84-9ebc29f478d9-kube-api-access-rbbsx\") pod \"b2129625-645f-49f5-8d84-9ebc29f478d9\" (UID: \"b2129625-645f-49f5-8d84-9ebc29f478d9\") " Feb 16 23:05:10 crc kubenswrapper[4865]: I0216 23:05:10.679351 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b2129625-645f-49f5-8d84-9ebc29f478d9-config-data-custom\") pod \"b2129625-645f-49f5-8d84-9ebc29f478d9\" (UID: \"b2129625-645f-49f5-8d84-9ebc29f478d9\") " Feb 16 23:05:10 crc kubenswrapper[4865]: I0216 23:05:10.679404 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2129625-645f-49f5-8d84-9ebc29f478d9-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "b2129625-645f-49f5-8d84-9ebc29f478d9" (UID: "b2129625-645f-49f5-8d84-9ebc29f478d9"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 23:05:10 crc kubenswrapper[4865]: I0216 23:05:10.679430 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2129625-645f-49f5-8d84-9ebc29f478d9-scripts\") pod \"b2129625-645f-49f5-8d84-9ebc29f478d9\" (UID: \"b2129625-645f-49f5-8d84-9ebc29f478d9\") " Feb 16 23:05:10 crc kubenswrapper[4865]: I0216 23:05:10.679621 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2129625-645f-49f5-8d84-9ebc29f478d9-config-data\") pod \"b2129625-645f-49f5-8d84-9ebc29f478d9\" (UID: \"b2129625-645f-49f5-8d84-9ebc29f478d9\") " Feb 16 23:05:10 crc kubenswrapper[4865]: I0216 23:05:10.679716 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2129625-645f-49f5-8d84-9ebc29f478d9-combined-ca-bundle\") pod \"b2129625-645f-49f5-8d84-9ebc29f478d9\" (UID: \"b2129625-645f-49f5-8d84-9ebc29f478d9\") " Feb 16 23:05:10 crc kubenswrapper[4865]: I0216 23:05:10.680596 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08a3ce49-d4ac-4627-b5cb-65305d115cef-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:10 crc kubenswrapper[4865]: I0216 23:05:10.680625 4865 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/08a3ce49-d4ac-4627-b5cb-65305d115cef-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:10 crc kubenswrapper[4865]: I0216 23:05:10.680639 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/08a3ce49-d4ac-4627-b5cb-65305d115cef-config\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:10 crc kubenswrapper[4865]: I0216 23:05:10.680654 4865 reconciler_common.go:293] "Volume detached 
for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/08a3ce49-d4ac-4627-b5cb-65305d115cef-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:10 crc kubenswrapper[4865]: I0216 23:05:10.680665 4865 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b2129625-645f-49f5-8d84-9ebc29f478d9-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:10 crc kubenswrapper[4865]: I0216 23:05:10.680680 4865 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/08a3ce49-d4ac-4627-b5cb-65305d115cef-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:10 crc kubenswrapper[4865]: I0216 23:05:10.705500 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2129625-645f-49f5-8d84-9ebc29f478d9-kube-api-access-rbbsx" (OuterVolumeSpecName: "kube-api-access-rbbsx") pod "b2129625-645f-49f5-8d84-9ebc29f478d9" (UID: "b2129625-645f-49f5-8d84-9ebc29f478d9"). InnerVolumeSpecName "kube-api-access-rbbsx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:05:10 crc kubenswrapper[4865]: I0216 23:05:10.705597 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2129625-645f-49f5-8d84-9ebc29f478d9-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b2129625-645f-49f5-8d84-9ebc29f478d9" (UID: "b2129625-645f-49f5-8d84-9ebc29f478d9"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:05:10 crc kubenswrapper[4865]: I0216 23:05:10.720418 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2129625-645f-49f5-8d84-9ebc29f478d9-scripts" (OuterVolumeSpecName: "scripts") pod "b2129625-645f-49f5-8d84-9ebc29f478d9" (UID: "b2129625-645f-49f5-8d84-9ebc29f478d9"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:05:10 crc kubenswrapper[4865]: I0216 23:05:10.784343 4865 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b2129625-645f-49f5-8d84-9ebc29f478d9-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:10 crc kubenswrapper[4865]: I0216 23:05:10.784637 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2129625-645f-49f5-8d84-9ebc29f478d9-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:10 crc kubenswrapper[4865]: I0216 23:05:10.784648 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rbbsx\" (UniqueName: \"kubernetes.io/projected/b2129625-645f-49f5-8d84-9ebc29f478d9-kube-api-access-rbbsx\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:10 crc kubenswrapper[4865]: I0216 23:05:10.888206 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2129625-645f-49f5-8d84-9ebc29f478d9-config-data" (OuterVolumeSpecName: "config-data") pod "b2129625-645f-49f5-8d84-9ebc29f478d9" (UID: "b2129625-645f-49f5-8d84-9ebc29f478d9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:05:10 crc kubenswrapper[4865]: I0216 23:05:10.948846 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2129625-645f-49f5-8d84-9ebc29f478d9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b2129625-645f-49f5-8d84-9ebc29f478d9" (UID: "b2129625-645f-49f5-8d84-9ebc29f478d9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:05:10 crc kubenswrapper[4865]: I0216 23:05:10.987266 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2129625-645f-49f5-8d84-9ebc29f478d9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:10 crc kubenswrapper[4865]: I0216 23:05:10.987321 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2129625-645f-49f5-8d84-9ebc29f478d9-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:11 crc kubenswrapper[4865]: I0216 23:05:11.006632 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-854576c7c7-t47q8" event={"ID":"08a3ce49-d4ac-4627-b5cb-65305d115cef","Type":"ContainerDied","Data":"1c3bb27ff482055607f8ee9914c6c17d930f8ef1fef10dc4a4d2f34a778b830e"} Feb 16 23:05:11 crc kubenswrapper[4865]: I0216 23:05:11.006682 4865 scope.go:117] "RemoveContainer" containerID="479f004d56b8a57c569210823ae832b6c6a76320b35cb2f027d19fb019168dd8" Feb 16 23:05:11 crc kubenswrapper[4865]: I0216 23:05:11.006802 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-854576c7c7-t47q8" Feb 16 23:05:11 crc kubenswrapper[4865]: I0216 23:05:11.020175 4865 generic.go:334] "Generic (PLEG): container finished" podID="fe2a586d-4955-4b3d-8299-d5ea37cfe736" containerID="dcfde899eb7c86d6c47684aa3e3841f201dc92ccc23dbc31377bd673066f16a0" exitCode=0 Feb 16 23:05:11 crc kubenswrapper[4865]: I0216 23:05:11.020338 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-ljkjc" event={"ID":"fe2a586d-4955-4b3d-8299-d5ea37cfe736","Type":"ContainerDied","Data":"dcfde899eb7c86d6c47684aa3e3841f201dc92ccc23dbc31377bd673066f16a0"} Feb 16 23:05:11 crc kubenswrapper[4865]: I0216 23:05:11.020425 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-ljkjc" event={"ID":"fe2a586d-4955-4b3d-8299-d5ea37cfe736","Type":"ContainerDied","Data":"eaf9c894b346fb5f635c2795318a295dab4d7de9b040494812af1f0b64651804"} Feb 16 23:05:11 crc kubenswrapper[4865]: I0216 23:05:11.020489 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eaf9c894b346fb5f635c2795318a295dab4d7de9b040494812af1f0b64651804" Feb 16 23:05:11 crc kubenswrapper[4865]: I0216 23:05:11.028365 4865 generic.go:334] "Generic (PLEG): container finished" podID="b2129625-645f-49f5-8d84-9ebc29f478d9" containerID="9a680096f680185ff0e639baac9b7bcf00410dc9b561dcf55f6fbe32ed6468fa" exitCode=0 Feb 16 23:05:11 crc kubenswrapper[4865]: I0216 23:05:11.028843 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 16 23:05:11 crc kubenswrapper[4865]: I0216 23:05:11.029390 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b2129625-645f-49f5-8d84-9ebc29f478d9","Type":"ContainerDied","Data":"9a680096f680185ff0e639baac9b7bcf00410dc9b561dcf55f6fbe32ed6468fa"} Feb 16 23:05:11 crc kubenswrapper[4865]: I0216 23:05:11.029420 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b2129625-645f-49f5-8d84-9ebc29f478d9","Type":"ContainerDied","Data":"73dba4f98ca24b4c7f388eddce728f159701e7042fad8cfd59c411f01141c2ff"} Feb 16 23:05:11 crc kubenswrapper[4865]: I0216 23:05:11.180367 4865 scope.go:117] "RemoveContainer" containerID="dd461fd86f89bf6d7bbdc9c6cc260d2415b6afd1ae5495e82170f62b2618bbae" Feb 16 23:05:11 crc kubenswrapper[4865]: I0216 23:05:11.262178 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-ljkjc" Feb 16 23:05:11 crc kubenswrapper[4865]: I0216 23:05:11.273673 4865 scope.go:117] "RemoveContainer" containerID="30280bea884d2ce49a1b8d6f4d99bd5172b8d99c30cd68be8341260c6551c051" Feb 16 23:05:11 crc kubenswrapper[4865]: I0216 23:05:11.295155 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-854576c7c7-t47q8"] Feb 16 23:05:11 crc kubenswrapper[4865]: I0216 23:05:11.297364 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fe2a586d-4955-4b3d-8299-d5ea37cfe736-ovsdbserver-nb\") pod \"fe2a586d-4955-4b3d-8299-d5ea37cfe736\" (UID: \"fe2a586d-4955-4b3d-8299-d5ea37cfe736\") " Feb 16 23:05:11 crc kubenswrapper[4865]: I0216 23:05:11.297440 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fe2a586d-4955-4b3d-8299-d5ea37cfe736-ovsdbserver-sb\") pod 
\"fe2a586d-4955-4b3d-8299-d5ea37cfe736\" (UID: \"fe2a586d-4955-4b3d-8299-d5ea37cfe736\") " Feb 16 23:05:11 crc kubenswrapper[4865]: I0216 23:05:11.297493 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m26q5\" (UniqueName: \"kubernetes.io/projected/fe2a586d-4955-4b3d-8299-d5ea37cfe736-kube-api-access-m26q5\") pod \"fe2a586d-4955-4b3d-8299-d5ea37cfe736\" (UID: \"fe2a586d-4955-4b3d-8299-d5ea37cfe736\") " Feb 16 23:05:11 crc kubenswrapper[4865]: I0216 23:05:11.297541 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fe2a586d-4955-4b3d-8299-d5ea37cfe736-dns-swift-storage-0\") pod \"fe2a586d-4955-4b3d-8299-d5ea37cfe736\" (UID: \"fe2a586d-4955-4b3d-8299-d5ea37cfe736\") " Feb 16 23:05:11 crc kubenswrapper[4865]: I0216 23:05:11.297576 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fe2a586d-4955-4b3d-8299-d5ea37cfe736-dns-svc\") pod \"fe2a586d-4955-4b3d-8299-d5ea37cfe736\" (UID: \"fe2a586d-4955-4b3d-8299-d5ea37cfe736\") " Feb 16 23:05:11 crc kubenswrapper[4865]: I0216 23:05:11.297610 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe2a586d-4955-4b3d-8299-d5ea37cfe736-config\") pod \"fe2a586d-4955-4b3d-8299-d5ea37cfe736\" (UID: \"fe2a586d-4955-4b3d-8299-d5ea37cfe736\") " Feb 16 23:05:11 crc kubenswrapper[4865]: I0216 23:05:11.308357 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe2a586d-4955-4b3d-8299-d5ea37cfe736-kube-api-access-m26q5" (OuterVolumeSpecName: "kube-api-access-m26q5") pod "fe2a586d-4955-4b3d-8299-d5ea37cfe736" (UID: "fe2a586d-4955-4b3d-8299-d5ea37cfe736"). InnerVolumeSpecName "kube-api-access-m26q5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:05:11 crc kubenswrapper[4865]: I0216 23:05:11.314229 4865 scope.go:117] "RemoveContainer" containerID="9a680096f680185ff0e639baac9b7bcf00410dc9b561dcf55f6fbe32ed6468fa" Feb 16 23:05:11 crc kubenswrapper[4865]: I0216 23:05:11.318418 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-854576c7c7-t47q8"] Feb 16 23:05:11 crc kubenswrapper[4865]: I0216 23:05:11.369653 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe2a586d-4955-4b3d-8299-d5ea37cfe736-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fe2a586d-4955-4b3d-8299-d5ea37cfe736" (UID: "fe2a586d-4955-4b3d-8299-d5ea37cfe736"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:05:11 crc kubenswrapper[4865]: I0216 23:05:11.369720 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 23:05:11 crc kubenswrapper[4865]: I0216 23:05:11.380852 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 23:05:11 crc kubenswrapper[4865]: I0216 23:05:11.398904 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m26q5\" (UniqueName: \"kubernetes.io/projected/fe2a586d-4955-4b3d-8299-d5ea37cfe736-kube-api-access-m26q5\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:11 crc kubenswrapper[4865]: I0216 23:05:11.398929 4865 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fe2a586d-4955-4b3d-8299-d5ea37cfe736-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:11 crc kubenswrapper[4865]: I0216 23:05:11.404874 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 23:05:11 crc kubenswrapper[4865]: E0216 23:05:11.405525 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45e13675-3a58-42c2-9236-eab676096763" 
containerName="horizon" Feb 16 23:05:11 crc kubenswrapper[4865]: I0216 23:05:11.405541 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="45e13675-3a58-42c2-9236-eab676096763" containerName="horizon" Feb 16 23:05:11 crc kubenswrapper[4865]: E0216 23:05:11.405554 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45e13675-3a58-42c2-9236-eab676096763" containerName="horizon-log" Feb 16 23:05:11 crc kubenswrapper[4865]: I0216 23:05:11.405575 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="45e13675-3a58-42c2-9236-eab676096763" containerName="horizon-log" Feb 16 23:05:11 crc kubenswrapper[4865]: E0216 23:05:11.405586 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08a3ce49-d4ac-4627-b5cb-65305d115cef" containerName="neutron-api" Feb 16 23:05:11 crc kubenswrapper[4865]: I0216 23:05:11.405592 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="08a3ce49-d4ac-4627-b5cb-65305d115cef" containerName="neutron-api" Feb 16 23:05:11 crc kubenswrapper[4865]: E0216 23:05:11.405607 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe2a586d-4955-4b3d-8299-d5ea37cfe736" containerName="init" Feb 16 23:05:11 crc kubenswrapper[4865]: I0216 23:05:11.405615 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe2a586d-4955-4b3d-8299-d5ea37cfe736" containerName="init" Feb 16 23:05:11 crc kubenswrapper[4865]: E0216 23:05:11.405624 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2129625-645f-49f5-8d84-9ebc29f478d9" containerName="cinder-scheduler" Feb 16 23:05:11 crc kubenswrapper[4865]: I0216 23:05:11.405631 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2129625-645f-49f5-8d84-9ebc29f478d9" containerName="cinder-scheduler" Feb 16 23:05:11 crc kubenswrapper[4865]: E0216 23:05:11.405662 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08a3ce49-d4ac-4627-b5cb-65305d115cef" containerName="neutron-httpd" Feb 16 23:05:11 crc 
kubenswrapper[4865]: I0216 23:05:11.405668 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="08a3ce49-d4ac-4627-b5cb-65305d115cef" containerName="neutron-httpd" Feb 16 23:05:11 crc kubenswrapper[4865]: E0216 23:05:11.405682 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70db55c6-255b-4aab-8d14-2675be446bfb" containerName="horizon" Feb 16 23:05:11 crc kubenswrapper[4865]: I0216 23:05:11.405687 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="70db55c6-255b-4aab-8d14-2675be446bfb" containerName="horizon" Feb 16 23:05:11 crc kubenswrapper[4865]: E0216 23:05:11.405703 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe2a586d-4955-4b3d-8299-d5ea37cfe736" containerName="dnsmasq-dns" Feb 16 23:05:11 crc kubenswrapper[4865]: I0216 23:05:11.405709 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe2a586d-4955-4b3d-8299-d5ea37cfe736" containerName="dnsmasq-dns" Feb 16 23:05:11 crc kubenswrapper[4865]: E0216 23:05:11.405905 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70db55c6-255b-4aab-8d14-2675be446bfb" containerName="horizon-log" Feb 16 23:05:11 crc kubenswrapper[4865]: I0216 23:05:11.405912 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="70db55c6-255b-4aab-8d14-2675be446bfb" containerName="horizon-log" Feb 16 23:05:11 crc kubenswrapper[4865]: E0216 23:05:11.405924 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2129625-645f-49f5-8d84-9ebc29f478d9" containerName="probe" Feb 16 23:05:11 crc kubenswrapper[4865]: I0216 23:05:11.405930 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2129625-645f-49f5-8d84-9ebc29f478d9" containerName="probe" Feb 16 23:05:11 crc kubenswrapper[4865]: I0216 23:05:11.406439 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2129625-645f-49f5-8d84-9ebc29f478d9" containerName="cinder-scheduler" Feb 16 23:05:11 crc kubenswrapper[4865]: I0216 23:05:11.406472 4865 
memory_manager.go:354] "RemoveStaleState removing state" podUID="70db55c6-255b-4aab-8d14-2675be446bfb" containerName="horizon-log" Feb 16 23:05:11 crc kubenswrapper[4865]: I0216 23:05:11.406483 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="45e13675-3a58-42c2-9236-eab676096763" containerName="horizon" Feb 16 23:05:11 crc kubenswrapper[4865]: I0216 23:05:11.406493 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="45e13675-3a58-42c2-9236-eab676096763" containerName="horizon-log" Feb 16 23:05:11 crc kubenswrapper[4865]: I0216 23:05:11.406501 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="08a3ce49-d4ac-4627-b5cb-65305d115cef" containerName="neutron-httpd" Feb 16 23:05:11 crc kubenswrapper[4865]: I0216 23:05:11.406511 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="70db55c6-255b-4aab-8d14-2675be446bfb" containerName="horizon" Feb 16 23:05:11 crc kubenswrapper[4865]: I0216 23:05:11.406521 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="08a3ce49-d4ac-4627-b5cb-65305d115cef" containerName="neutron-api" Feb 16 23:05:11 crc kubenswrapper[4865]: I0216 23:05:11.406545 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2129625-645f-49f5-8d84-9ebc29f478d9" containerName="probe" Feb 16 23:05:11 crc kubenswrapper[4865]: I0216 23:05:11.406554 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe2a586d-4955-4b3d-8299-d5ea37cfe736" containerName="dnsmasq-dns" Feb 16 23:05:11 crc kubenswrapper[4865]: I0216 23:05:11.409067 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 16 23:05:11 crc kubenswrapper[4865]: I0216 23:05:11.415378 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 23:05:11 crc kubenswrapper[4865]: I0216 23:05:11.416108 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 16 23:05:11 crc kubenswrapper[4865]: I0216 23:05:11.432153 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe2a586d-4955-4b3d-8299-d5ea37cfe736-config" (OuterVolumeSpecName: "config") pod "fe2a586d-4955-4b3d-8299-d5ea37cfe736" (UID: "fe2a586d-4955-4b3d-8299-d5ea37cfe736"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:05:11 crc kubenswrapper[4865]: I0216 23:05:11.470301 4865 scope.go:117] "RemoveContainer" containerID="30280bea884d2ce49a1b8d6f4d99bd5172b8d99c30cd68be8341260c6551c051" Feb 16 23:05:11 crc kubenswrapper[4865]: E0216 23:05:11.477018 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30280bea884d2ce49a1b8d6f4d99bd5172b8d99c30cd68be8341260c6551c051\": container with ID starting with 30280bea884d2ce49a1b8d6f4d99bd5172b8d99c30cd68be8341260c6551c051 not found: ID does not exist" containerID="30280bea884d2ce49a1b8d6f4d99bd5172b8d99c30cd68be8341260c6551c051" Feb 16 23:05:11 crc kubenswrapper[4865]: I0216 23:05:11.477071 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30280bea884d2ce49a1b8d6f4d99bd5172b8d99c30cd68be8341260c6551c051"} err="failed to get container status \"30280bea884d2ce49a1b8d6f4d99bd5172b8d99c30cd68be8341260c6551c051\": rpc error: code = NotFound desc = could not find container \"30280bea884d2ce49a1b8d6f4d99bd5172b8d99c30cd68be8341260c6551c051\": container with ID starting with 
30280bea884d2ce49a1b8d6f4d99bd5172b8d99c30cd68be8341260c6551c051 not found: ID does not exist" Feb 16 23:05:11 crc kubenswrapper[4865]: I0216 23:05:11.477097 4865 scope.go:117] "RemoveContainer" containerID="9a680096f680185ff0e639baac9b7bcf00410dc9b561dcf55f6fbe32ed6468fa" Feb 16 23:05:11 crc kubenswrapper[4865]: E0216 23:05:11.483518 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a680096f680185ff0e639baac9b7bcf00410dc9b561dcf55f6fbe32ed6468fa\": container with ID starting with 9a680096f680185ff0e639baac9b7bcf00410dc9b561dcf55f6fbe32ed6468fa not found: ID does not exist" containerID="9a680096f680185ff0e639baac9b7bcf00410dc9b561dcf55f6fbe32ed6468fa" Feb 16 23:05:11 crc kubenswrapper[4865]: I0216 23:05:11.483576 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a680096f680185ff0e639baac9b7bcf00410dc9b561dcf55f6fbe32ed6468fa"} err="failed to get container status \"9a680096f680185ff0e639baac9b7bcf00410dc9b561dcf55f6fbe32ed6468fa\": rpc error: code = NotFound desc = could not find container \"9a680096f680185ff0e639baac9b7bcf00410dc9b561dcf55f6fbe32ed6468fa\": container with ID starting with 9a680096f680185ff0e639baac9b7bcf00410dc9b561dcf55f6fbe32ed6468fa not found: ID does not exist" Feb 16 23:05:11 crc kubenswrapper[4865]: I0216 23:05:11.487732 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe2a586d-4955-4b3d-8299-d5ea37cfe736-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "fe2a586d-4955-4b3d-8299-d5ea37cfe736" (UID: "fe2a586d-4955-4b3d-8299-d5ea37cfe736"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:05:11 crc kubenswrapper[4865]: I0216 23:05:11.488686 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe2a586d-4955-4b3d-8299-d5ea37cfe736-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fe2a586d-4955-4b3d-8299-d5ea37cfe736" (UID: "fe2a586d-4955-4b3d-8299-d5ea37cfe736"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:05:11 crc kubenswrapper[4865]: I0216 23:05:11.500730 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04210f96-20a4-48af-b1cb-f7ea73adc9a3-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"04210f96-20a4-48af-b1cb-f7ea73adc9a3\") " pod="openstack/cinder-scheduler-0" Feb 16 23:05:11 crc kubenswrapper[4865]: I0216 23:05:11.500811 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2dtk\" (UniqueName: \"kubernetes.io/projected/04210f96-20a4-48af-b1cb-f7ea73adc9a3-kube-api-access-j2dtk\") pod \"cinder-scheduler-0\" (UID: \"04210f96-20a4-48af-b1cb-f7ea73adc9a3\") " pod="openstack/cinder-scheduler-0" Feb 16 23:05:11 crc kubenswrapper[4865]: I0216 23:05:11.500847 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/04210f96-20a4-48af-b1cb-f7ea73adc9a3-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"04210f96-20a4-48af-b1cb-f7ea73adc9a3\") " pod="openstack/cinder-scheduler-0" Feb 16 23:05:11 crc kubenswrapper[4865]: I0216 23:05:11.500885 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04210f96-20a4-48af-b1cb-f7ea73adc9a3-config-data-custom\") pod \"cinder-scheduler-0\" (UID: 
\"04210f96-20a4-48af-b1cb-f7ea73adc9a3\") " pod="openstack/cinder-scheduler-0" Feb 16 23:05:11 crc kubenswrapper[4865]: I0216 23:05:11.500925 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04210f96-20a4-48af-b1cb-f7ea73adc9a3-config-data\") pod \"cinder-scheduler-0\" (UID: \"04210f96-20a4-48af-b1cb-f7ea73adc9a3\") " pod="openstack/cinder-scheduler-0" Feb 16 23:05:11 crc kubenswrapper[4865]: I0216 23:05:11.500956 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04210f96-20a4-48af-b1cb-f7ea73adc9a3-scripts\") pod \"cinder-scheduler-0\" (UID: \"04210f96-20a4-48af-b1cb-f7ea73adc9a3\") " pod="openstack/cinder-scheduler-0" Feb 16 23:05:11 crc kubenswrapper[4865]: I0216 23:05:11.501023 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe2a586d-4955-4b3d-8299-d5ea37cfe736-config\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:11 crc kubenswrapper[4865]: I0216 23:05:11.501033 4865 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fe2a586d-4955-4b3d-8299-d5ea37cfe736-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:11 crc kubenswrapper[4865]: I0216 23:05:11.501060 4865 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fe2a586d-4955-4b3d-8299-d5ea37cfe736-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:11 crc kubenswrapper[4865]: I0216 23:05:11.558985 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe2a586d-4955-4b3d-8299-d5ea37cfe736-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fe2a586d-4955-4b3d-8299-d5ea37cfe736" (UID: "fe2a586d-4955-4b3d-8299-d5ea37cfe736"). 
InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:05:11 crc kubenswrapper[4865]: I0216 23:05:11.602171 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04210f96-20a4-48af-b1cb-f7ea73adc9a3-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"04210f96-20a4-48af-b1cb-f7ea73adc9a3\") " pod="openstack/cinder-scheduler-0" Feb 16 23:05:11 crc kubenswrapper[4865]: I0216 23:05:11.602481 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2dtk\" (UniqueName: \"kubernetes.io/projected/04210f96-20a4-48af-b1cb-f7ea73adc9a3-kube-api-access-j2dtk\") pod \"cinder-scheduler-0\" (UID: \"04210f96-20a4-48af-b1cb-f7ea73adc9a3\") " pod="openstack/cinder-scheduler-0" Feb 16 23:05:11 crc kubenswrapper[4865]: I0216 23:05:11.602555 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/04210f96-20a4-48af-b1cb-f7ea73adc9a3-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"04210f96-20a4-48af-b1cb-f7ea73adc9a3\") " pod="openstack/cinder-scheduler-0" Feb 16 23:05:11 crc kubenswrapper[4865]: I0216 23:05:11.602685 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04210f96-20a4-48af-b1cb-f7ea73adc9a3-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"04210f96-20a4-48af-b1cb-f7ea73adc9a3\") " pod="openstack/cinder-scheduler-0" Feb 16 23:05:11 crc kubenswrapper[4865]: I0216 23:05:11.602789 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04210f96-20a4-48af-b1cb-f7ea73adc9a3-config-data\") pod \"cinder-scheduler-0\" (UID: \"04210f96-20a4-48af-b1cb-f7ea73adc9a3\") " pod="openstack/cinder-scheduler-0" Feb 16 23:05:11 crc kubenswrapper[4865]: 
I0216 23:05:11.602900 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04210f96-20a4-48af-b1cb-f7ea73adc9a3-scripts\") pod \"cinder-scheduler-0\" (UID: \"04210f96-20a4-48af-b1cb-f7ea73adc9a3\") " pod="openstack/cinder-scheduler-0" Feb 16 23:05:11 crc kubenswrapper[4865]: I0216 23:05:11.603038 4865 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fe2a586d-4955-4b3d-8299-d5ea37cfe736-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:11 crc kubenswrapper[4865]: I0216 23:05:11.604708 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/04210f96-20a4-48af-b1cb-f7ea73adc9a3-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"04210f96-20a4-48af-b1cb-f7ea73adc9a3\") " pod="openstack/cinder-scheduler-0" Feb 16 23:05:11 crc kubenswrapper[4865]: I0216 23:05:11.613109 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04210f96-20a4-48af-b1cb-f7ea73adc9a3-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"04210f96-20a4-48af-b1cb-f7ea73adc9a3\") " pod="openstack/cinder-scheduler-0" Feb 16 23:05:11 crc kubenswrapper[4865]: I0216 23:05:11.614210 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04210f96-20a4-48af-b1cb-f7ea73adc9a3-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"04210f96-20a4-48af-b1cb-f7ea73adc9a3\") " pod="openstack/cinder-scheduler-0" Feb 16 23:05:11 crc kubenswrapper[4865]: I0216 23:05:11.616634 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04210f96-20a4-48af-b1cb-f7ea73adc9a3-scripts\") pod \"cinder-scheduler-0\" (UID: \"04210f96-20a4-48af-b1cb-f7ea73adc9a3\") " 
pod="openstack/cinder-scheduler-0" Feb 16 23:05:11 crc kubenswrapper[4865]: I0216 23:05:11.624952 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2dtk\" (UniqueName: \"kubernetes.io/projected/04210f96-20a4-48af-b1cb-f7ea73adc9a3-kube-api-access-j2dtk\") pod \"cinder-scheduler-0\" (UID: \"04210f96-20a4-48af-b1cb-f7ea73adc9a3\") " pod="openstack/cinder-scheduler-0" Feb 16 23:05:11 crc kubenswrapper[4865]: I0216 23:05:11.633182 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04210f96-20a4-48af-b1cb-f7ea73adc9a3-config-data\") pod \"cinder-scheduler-0\" (UID: \"04210f96-20a4-48af-b1cb-f7ea73adc9a3\") " pod="openstack/cinder-scheduler-0" Feb 16 23:05:11 crc kubenswrapper[4865]: I0216 23:05:11.749770 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 16 23:05:12 crc kubenswrapper[4865]: I0216 23:05:12.073893 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-ljkjc" Feb 16 23:05:12 crc kubenswrapper[4865]: I0216 23:05:12.126364 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-ljkjc"] Feb 16 23:05:12 crc kubenswrapper[4865]: I0216 23:05:12.157937 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-ljkjc"] Feb 16 23:05:12 crc kubenswrapper[4865]: I0216 23:05:12.382577 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 23:05:12 crc kubenswrapper[4865]: I0216 23:05:12.392574 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 16 23:05:12 crc kubenswrapper[4865]: I0216 23:05:12.442593 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08a3ce49-d4ac-4627-b5cb-65305d115cef" path="/var/lib/kubelet/pods/08a3ce49-d4ac-4627-b5cb-65305d115cef/volumes" Feb 16 23:05:12 crc kubenswrapper[4865]: I0216 23:05:12.443436 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2129625-645f-49f5-8d84-9ebc29f478d9" path="/var/lib/kubelet/pods/b2129625-645f-49f5-8d84-9ebc29f478d9/volumes" Feb 16 23:05:12 crc kubenswrapper[4865]: I0216 23:05:12.447983 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe2a586d-4955-4b3d-8299-d5ea37cfe736" path="/var/lib/kubelet/pods/fe2a586d-4955-4b3d-8299-d5ea37cfe736/volumes" Feb 16 23:05:12 crc kubenswrapper[4865]: I0216 23:05:12.810515 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7f77f74cd-fjwcd" Feb 16 23:05:13 crc kubenswrapper[4865]: I0216 23:05:13.108330 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"04210f96-20a4-48af-b1cb-f7ea73adc9a3","Type":"ContainerStarted","Data":"2ef2c5c0a63450565bcfad6f5ddeaf6c36f6da8dd0562ee15bc46a1a95af3598"} Feb 16 23:05:13 crc kubenswrapper[4865]: 
I0216 23:05:13.141674 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7f77f74cd-fjwcd" Feb 16 23:05:14 crc kubenswrapper[4865]: I0216 23:05:14.055580 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 16 23:05:14 crc kubenswrapper[4865]: I0216 23:05:14.118246 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"04210f96-20a4-48af-b1cb-f7ea73adc9a3","Type":"ContainerStarted","Data":"e9189e7a8e8db89cf8dc6b598e25e875193c9495faab990fdb59520301162df7"} Feb 16 23:05:14 crc kubenswrapper[4865]: I0216 23:05:14.321099 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 16 23:05:14 crc kubenswrapper[4865]: I0216 23:05:14.815241 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-7ff854866d-9gv97" Feb 16 23:05:15 crc kubenswrapper[4865]: I0216 23:05:15.128822 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"04210f96-20a4-48af-b1cb-f7ea73adc9a3","Type":"ContainerStarted","Data":"4bb1b89ad7775518824c5d19a84dc5c863a0ab52cf826adccf8cdb4cc6b7ae78"} Feb 16 23:05:15 crc kubenswrapper[4865]: I0216 23:05:15.154331 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.154312869 podStartE2EDuration="4.154312869s" podCreationTimestamp="2026-02-16 23:05:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 23:05:15.146530478 +0000 UTC m=+1155.470237439" watchObservedRunningTime="2026-02-16 23:05:15.154312869 +0000 UTC m=+1155.478019830" Feb 16 23:05:15 crc kubenswrapper[4865]: I0216 23:05:15.664423 4865 patch_prober.go:28] interesting pod/machine-config-daemon-7sl6f 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 23:05:15 crc kubenswrapper[4865]: I0216 23:05:15.664500 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 23:05:15 crc kubenswrapper[4865]: I0216 23:05:15.664561 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" Feb 16 23:05:15 crc kubenswrapper[4865]: I0216 23:05:15.665458 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"235d0a0989c84c71f23d2f482cbde8cbac1989d3cd7dfef51dabc7d92db7c3f0"} pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 23:05:15 crc kubenswrapper[4865]: I0216 23:05:15.665530 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" containerName="machine-config-daemon" containerID="cri-o://235d0a0989c84c71f23d2f482cbde8cbac1989d3cd7dfef51dabc7d92db7c3f0" gracePeriod=600 Feb 16 23:05:16 crc kubenswrapper[4865]: I0216 23:05:16.141032 4865 generic.go:334] "Generic (PLEG): container finished" podID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" containerID="235d0a0989c84c71f23d2f482cbde8cbac1989d3cd7dfef51dabc7d92db7c3f0" exitCode=0 Feb 16 23:05:16 crc kubenswrapper[4865]: I0216 23:05:16.141119 4865 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" event={"ID":"af5ee041-5763-4a28-9d12-7ba21bbb9dbc","Type":"ContainerDied","Data":"235d0a0989c84c71f23d2f482cbde8cbac1989d3cd7dfef51dabc7d92db7c3f0"} Feb 16 23:05:16 crc kubenswrapper[4865]: I0216 23:05:16.141163 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" event={"ID":"af5ee041-5763-4a28-9d12-7ba21bbb9dbc","Type":"ContainerStarted","Data":"32daf57e4fb7661dfc4ca72f088e0b8d88b3c260d4b2b6cc44cc118921a811c2"} Feb 16 23:05:16 crc kubenswrapper[4865]: I0216 23:05:16.141180 4865 scope.go:117] "RemoveContainer" containerID="01785d10a7bb373f66f6092d65fa6901ba6fc8e22f69baf647bf50d5be8dbeb3" Feb 16 23:05:16 crc kubenswrapper[4865]: I0216 23:05:16.726441 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-7ff854866d-9gv97" Feb 16 23:05:16 crc kubenswrapper[4865]: I0216 23:05:16.750252 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 16 23:05:16 crc kubenswrapper[4865]: I0216 23:05:16.750929 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-54bd7477c8-zrrzr" Feb 16 23:05:16 crc kubenswrapper[4865]: I0216 23:05:16.781944 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-54bd7477c8-zrrzr" Feb 16 23:05:16 crc kubenswrapper[4865]: I0216 23:05:16.813610 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5dbb7f8956-m76fk"] Feb 16 23:05:16 crc kubenswrapper[4865]: I0216 23:05:16.813882 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5dbb7f8956-m76fk" podUID="cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3" containerName="horizon-log" containerID="cri-o://5fc273bbb3c9c3f7431d58f42273042ec637806a09beb179e3b7c7fb8231c767" gracePeriod=30 Feb 16 
23:05:16 crc kubenswrapper[4865]: I0216 23:05:16.814320 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5dbb7f8956-m76fk" podUID="cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3" containerName="horizon" containerID="cri-o://fe4078ad96c214d8c6f52173dba97bd712dac7fd5f98c905b528ed5de0c2e126" gracePeriod=30 Feb 16 23:05:16 crc kubenswrapper[4865]: I0216 23:05:16.864865 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7f77f74cd-fjwcd"] Feb 16 23:05:16 crc kubenswrapper[4865]: I0216 23:05:16.865098 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7f77f74cd-fjwcd" podUID="bafaf37c-b943-45c1-9a6f-3b3642a9471c" containerName="barbican-api-log" containerID="cri-o://f2e0105bc78671b710b1b4c37f6e2c88c05fbf346b3b13433c002a5f41f3e344" gracePeriod=30 Feb 16 23:05:16 crc kubenswrapper[4865]: I0216 23:05:16.865523 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7f77f74cd-fjwcd" podUID="bafaf37c-b943-45c1-9a6f-3b3642a9471c" containerName="barbican-api" containerID="cri-o://4003425ce35786dec2edf740410beaa648ccc656f20188ed7f9f93c96392bb92" gracePeriod=30 Feb 16 23:05:17 crc kubenswrapper[4865]: I0216 23:05:17.171227 4865 generic.go:334] "Generic (PLEG): container finished" podID="bafaf37c-b943-45c1-9a6f-3b3642a9471c" containerID="f2e0105bc78671b710b1b4c37f6e2c88c05fbf346b3b13433c002a5f41f3e344" exitCode=143 Feb 16 23:05:17 crc kubenswrapper[4865]: I0216 23:05:17.173293 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7f77f74cd-fjwcd" event={"ID":"bafaf37c-b943-45c1-9a6f-3b3642a9471c","Type":"ContainerDied","Data":"f2e0105bc78671b710b1b4c37f6e2c88c05fbf346b3b13433c002a5f41f3e344"} Feb 16 23:05:19 crc kubenswrapper[4865]: I0216 23:05:19.436316 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5695f8dc4-jj7h5" Feb 16 23:05:19 
crc kubenswrapper[4865]: I0216 23:05:19.446637 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5695f8dc4-jj7h5" Feb 16 23:05:19 crc kubenswrapper[4865]: I0216 23:05:19.665051 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-66c88cfbc7-mhfsh" Feb 16 23:05:20 crc kubenswrapper[4865]: I0216 23:05:20.203817 4865 generic.go:334] "Generic (PLEG): container finished" podID="bafaf37c-b943-45c1-9a6f-3b3642a9471c" containerID="4003425ce35786dec2edf740410beaa648ccc656f20188ed7f9f93c96392bb92" exitCode=0 Feb 16 23:05:20 crc kubenswrapper[4865]: I0216 23:05:20.203888 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7f77f74cd-fjwcd" event={"ID":"bafaf37c-b943-45c1-9a6f-3b3642a9471c","Type":"ContainerDied","Data":"4003425ce35786dec2edf740410beaa648ccc656f20188ed7f9f93c96392bb92"} Feb 16 23:05:20 crc kubenswrapper[4865]: I0216 23:05:20.206590 4865 generic.go:334] "Generic (PLEG): container finished" podID="cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3" containerID="fe4078ad96c214d8c6f52173dba97bd712dac7fd5f98c905b528ed5de0c2e126" exitCode=0 Feb 16 23:05:20 crc kubenswrapper[4865]: I0216 23:05:20.207382 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5dbb7f8956-m76fk" event={"ID":"cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3","Type":"ContainerDied","Data":"fe4078ad96c214d8c6f52173dba97bd712dac7fd5f98c905b528ed5de0c2e126"} Feb 16 23:05:20 crc kubenswrapper[4865]: I0216 23:05:20.508919 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7f77f74cd-fjwcd" Feb 16 23:05:20 crc kubenswrapper[4865]: I0216 23:05:20.570980 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 16 23:05:20 crc kubenswrapper[4865]: E0216 23:05:20.574042 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bafaf37c-b943-45c1-9a6f-3b3642a9471c" containerName="barbican-api-log" Feb 16 23:05:20 crc kubenswrapper[4865]: I0216 23:05:20.574131 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="bafaf37c-b943-45c1-9a6f-3b3642a9471c" containerName="barbican-api-log" Feb 16 23:05:20 crc kubenswrapper[4865]: E0216 23:05:20.574210 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bafaf37c-b943-45c1-9a6f-3b3642a9471c" containerName="barbican-api" Feb 16 23:05:20 crc kubenswrapper[4865]: I0216 23:05:20.574267 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="bafaf37c-b943-45c1-9a6f-3b3642a9471c" containerName="barbican-api" Feb 16 23:05:20 crc kubenswrapper[4865]: I0216 23:05:20.574545 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="bafaf37c-b943-45c1-9a6f-3b3642a9471c" containerName="barbican-api" Feb 16 23:05:20 crc kubenswrapper[4865]: I0216 23:05:20.574764 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="bafaf37c-b943-45c1-9a6f-3b3642a9471c" containerName="barbican-api-log" Feb 16 23:05:20 crc kubenswrapper[4865]: I0216 23:05:20.575855 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 16 23:05:20 crc kubenswrapper[4865]: I0216 23:05:20.582153 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 16 23:05:20 crc kubenswrapper[4865]: I0216 23:05:20.582228 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 16 23:05:20 crc kubenswrapper[4865]: I0216 23:05:20.588488 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-hv6fs" Feb 16 23:05:20 crc kubenswrapper[4865]: I0216 23:05:20.604640 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 16 23:05:20 crc kubenswrapper[4865]: I0216 23:05:20.639339 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bafaf37c-b943-45c1-9a6f-3b3642a9471c-combined-ca-bundle\") pod \"bafaf37c-b943-45c1-9a6f-3b3642a9471c\" (UID: \"bafaf37c-b943-45c1-9a6f-3b3642a9471c\") " Feb 16 23:05:20 crc kubenswrapper[4865]: I0216 23:05:20.639406 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bafaf37c-b943-45c1-9a6f-3b3642a9471c-config-data\") pod \"bafaf37c-b943-45c1-9a6f-3b3642a9471c\" (UID: \"bafaf37c-b943-45c1-9a6f-3b3642a9471c\") " Feb 16 23:05:20 crc kubenswrapper[4865]: I0216 23:05:20.639449 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bafaf37c-b943-45c1-9a6f-3b3642a9471c-logs\") pod \"bafaf37c-b943-45c1-9a6f-3b3642a9471c\" (UID: \"bafaf37c-b943-45c1-9a6f-3b3642a9471c\") " Feb 16 23:05:20 crc kubenswrapper[4865]: I0216 23:05:20.639476 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/bafaf37c-b943-45c1-9a6f-3b3642a9471c-config-data-custom\") pod \"bafaf37c-b943-45c1-9a6f-3b3642a9471c\" (UID: \"bafaf37c-b943-45c1-9a6f-3b3642a9471c\") " Feb 16 23:05:20 crc kubenswrapper[4865]: I0216 23:05:20.639522 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zncp\" (UniqueName: \"kubernetes.io/projected/bafaf37c-b943-45c1-9a6f-3b3642a9471c-kube-api-access-8zncp\") pod \"bafaf37c-b943-45c1-9a6f-3b3642a9471c\" (UID: \"bafaf37c-b943-45c1-9a6f-3b3642a9471c\") " Feb 16 23:05:20 crc kubenswrapper[4865]: I0216 23:05:20.639681 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4hfp\" (UniqueName: \"kubernetes.io/projected/328cf5b9-9c5d-4cfa-ae62-1ab76d210788-kube-api-access-m4hfp\") pod \"openstackclient\" (UID: \"328cf5b9-9c5d-4cfa-ae62-1ab76d210788\") " pod="openstack/openstackclient" Feb 16 23:05:20 crc kubenswrapper[4865]: I0216 23:05:20.639766 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/328cf5b9-9c5d-4cfa-ae62-1ab76d210788-openstack-config\") pod \"openstackclient\" (UID: \"328cf5b9-9c5d-4cfa-ae62-1ab76d210788\") " pod="openstack/openstackclient" Feb 16 23:05:20 crc kubenswrapper[4865]: I0216 23:05:20.639789 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/328cf5b9-9c5d-4cfa-ae62-1ab76d210788-combined-ca-bundle\") pod \"openstackclient\" (UID: \"328cf5b9-9c5d-4cfa-ae62-1ab76d210788\") " pod="openstack/openstackclient" Feb 16 23:05:20 crc kubenswrapper[4865]: I0216 23:05:20.639841 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: 
\"kubernetes.io/secret/328cf5b9-9c5d-4cfa-ae62-1ab76d210788-openstack-config-secret\") pod \"openstackclient\" (UID: \"328cf5b9-9c5d-4cfa-ae62-1ab76d210788\") " pod="openstack/openstackclient" Feb 16 23:05:20 crc kubenswrapper[4865]: I0216 23:05:20.650969 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bafaf37c-b943-45c1-9a6f-3b3642a9471c-logs" (OuterVolumeSpecName: "logs") pod "bafaf37c-b943-45c1-9a6f-3b3642a9471c" (UID: "bafaf37c-b943-45c1-9a6f-3b3642a9471c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:05:20 crc kubenswrapper[4865]: I0216 23:05:20.657360 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bafaf37c-b943-45c1-9a6f-3b3642a9471c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "bafaf37c-b943-45c1-9a6f-3b3642a9471c" (UID: "bafaf37c-b943-45c1-9a6f-3b3642a9471c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:05:20 crc kubenswrapper[4865]: I0216 23:05:20.658577 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bafaf37c-b943-45c1-9a6f-3b3642a9471c-kube-api-access-8zncp" (OuterVolumeSpecName: "kube-api-access-8zncp") pod "bafaf37c-b943-45c1-9a6f-3b3642a9471c" (UID: "bafaf37c-b943-45c1-9a6f-3b3642a9471c"). InnerVolumeSpecName "kube-api-access-8zncp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:05:20 crc kubenswrapper[4865]: I0216 23:05:20.722487 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bafaf37c-b943-45c1-9a6f-3b3642a9471c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bafaf37c-b943-45c1-9a6f-3b3642a9471c" (UID: "bafaf37c-b943-45c1-9a6f-3b3642a9471c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:05:20 crc kubenswrapper[4865]: I0216 23:05:20.736925 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bafaf37c-b943-45c1-9a6f-3b3642a9471c-config-data" (OuterVolumeSpecName: "config-data") pod "bafaf37c-b943-45c1-9a6f-3b3642a9471c" (UID: "bafaf37c-b943-45c1-9a6f-3b3642a9471c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:05:20 crc kubenswrapper[4865]: I0216 23:05:20.741339 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/328cf5b9-9c5d-4cfa-ae62-1ab76d210788-openstack-config\") pod \"openstackclient\" (UID: \"328cf5b9-9c5d-4cfa-ae62-1ab76d210788\") " pod="openstack/openstackclient" Feb 16 23:05:20 crc kubenswrapper[4865]: I0216 23:05:20.741392 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/328cf5b9-9c5d-4cfa-ae62-1ab76d210788-combined-ca-bundle\") pod \"openstackclient\" (UID: \"328cf5b9-9c5d-4cfa-ae62-1ab76d210788\") " pod="openstack/openstackclient" Feb 16 23:05:20 crc kubenswrapper[4865]: I0216 23:05:20.741471 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/328cf5b9-9c5d-4cfa-ae62-1ab76d210788-openstack-config-secret\") pod \"openstackclient\" (UID: \"328cf5b9-9c5d-4cfa-ae62-1ab76d210788\") " pod="openstack/openstackclient" Feb 16 23:05:20 crc kubenswrapper[4865]: I0216 23:05:20.741538 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4hfp\" (UniqueName: \"kubernetes.io/projected/328cf5b9-9c5d-4cfa-ae62-1ab76d210788-kube-api-access-m4hfp\") pod \"openstackclient\" (UID: \"328cf5b9-9c5d-4cfa-ae62-1ab76d210788\") " pod="openstack/openstackclient" Feb 16 23:05:20 crc 
kubenswrapper[4865]: I0216 23:05:20.741654 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bafaf37c-b943-45c1-9a6f-3b3642a9471c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:20 crc kubenswrapper[4865]: I0216 23:05:20.741676 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bafaf37c-b943-45c1-9a6f-3b3642a9471c-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:20 crc kubenswrapper[4865]: I0216 23:05:20.741688 4865 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bafaf37c-b943-45c1-9a6f-3b3642a9471c-logs\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:20 crc kubenswrapper[4865]: I0216 23:05:20.741700 4865 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bafaf37c-b943-45c1-9a6f-3b3642a9471c-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:20 crc kubenswrapper[4865]: I0216 23:05:20.741712 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zncp\" (UniqueName: \"kubernetes.io/projected/bafaf37c-b943-45c1-9a6f-3b3642a9471c-kube-api-access-8zncp\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:20 crc kubenswrapper[4865]: I0216 23:05:20.742397 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/328cf5b9-9c5d-4cfa-ae62-1ab76d210788-openstack-config\") pod \"openstackclient\" (UID: \"328cf5b9-9c5d-4cfa-ae62-1ab76d210788\") " pod="openstack/openstackclient" Feb 16 23:05:20 crc kubenswrapper[4865]: I0216 23:05:20.745710 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/328cf5b9-9c5d-4cfa-ae62-1ab76d210788-openstack-config-secret\") pod \"openstackclient\" (UID: 
\"328cf5b9-9c5d-4cfa-ae62-1ab76d210788\") " pod="openstack/openstackclient" Feb 16 23:05:20 crc kubenswrapper[4865]: I0216 23:05:20.745980 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/328cf5b9-9c5d-4cfa-ae62-1ab76d210788-combined-ca-bundle\") pod \"openstackclient\" (UID: \"328cf5b9-9c5d-4cfa-ae62-1ab76d210788\") " pod="openstack/openstackclient" Feb 16 23:05:20 crc kubenswrapper[4865]: I0216 23:05:20.764599 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4hfp\" (UniqueName: \"kubernetes.io/projected/328cf5b9-9c5d-4cfa-ae62-1ab76d210788-kube-api-access-m4hfp\") pod \"openstackclient\" (UID: \"328cf5b9-9c5d-4cfa-ae62-1ab76d210788\") " pod="openstack/openstackclient" Feb 16 23:05:20 crc kubenswrapper[4865]: I0216 23:05:20.907748 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 16 23:05:21 crc kubenswrapper[4865]: I0216 23:05:21.216128 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7f77f74cd-fjwcd" event={"ID":"bafaf37c-b943-45c1-9a6f-3b3642a9471c","Type":"ContainerDied","Data":"28064323e15183dae5381ac19440fe6e30eed5e4cd50a61d11c3053afeb1634c"} Feb 16 23:05:21 crc kubenswrapper[4865]: I0216 23:05:21.216568 4865 scope.go:117] "RemoveContainer" containerID="4003425ce35786dec2edf740410beaa648ccc656f20188ed7f9f93c96392bb92" Feb 16 23:05:21 crc kubenswrapper[4865]: I0216 23:05:21.216432 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7f77f74cd-fjwcd" Feb 16 23:05:21 crc kubenswrapper[4865]: I0216 23:05:21.239566 4865 scope.go:117] "RemoveContainer" containerID="f2e0105bc78671b710b1b4c37f6e2c88c05fbf346b3b13433c002a5f41f3e344" Feb 16 23:05:21 crc kubenswrapper[4865]: I0216 23:05:21.252725 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7f77f74cd-fjwcd"] Feb 16 23:05:21 crc kubenswrapper[4865]: I0216 23:05:21.261822 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-7f77f74cd-fjwcd"] Feb 16 23:05:21 crc kubenswrapper[4865]: I0216 23:05:21.376578 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 16 23:05:21 crc kubenswrapper[4865]: W0216 23:05:21.377441 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod328cf5b9_9c5d_4cfa_ae62_1ab76d210788.slice/crio-51843d14cecd824d7bd0ef960721c4ddc5ef66cd42b94efde3327a47bd7841e1 WatchSource:0}: Error finding container 51843d14cecd824d7bd0ef960721c4ddc5ef66cd42b94efde3327a47bd7841e1: Status 404 returned error can't find the container with id 51843d14cecd824d7bd0ef960721c4ddc5ef66cd42b94efde3327a47bd7841e1 Feb 16 23:05:21 crc kubenswrapper[4865]: I0216 23:05:21.482445 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5dbb7f8956-m76fk" podUID="cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Feb 16 23:05:22 crc kubenswrapper[4865]: I0216 23:05:22.081131 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 16 23:05:22 crc kubenswrapper[4865]: I0216 23:05:22.242734 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" 
event={"ID":"328cf5b9-9c5d-4cfa-ae62-1ab76d210788","Type":"ContainerStarted","Data":"51843d14cecd824d7bd0ef960721c4ddc5ef66cd42b94efde3327a47bd7841e1"} Feb 16 23:05:22 crc kubenswrapper[4865]: I0216 23:05:22.433416 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bafaf37c-b943-45c1-9a6f-3b3642a9471c" path="/var/lib/kubelet/pods/bafaf37c-b943-45c1-9a6f-3b3642a9471c/volumes" Feb 16 23:05:24 crc kubenswrapper[4865]: I0216 23:05:24.349368 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-6b5cb7cc4c-8d58d"] Feb 16 23:05:24 crc kubenswrapper[4865]: I0216 23:05:24.351626 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6b5cb7cc4c-8d58d" Feb 16 23:05:24 crc kubenswrapper[4865]: I0216 23:05:24.355798 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 16 23:05:24 crc kubenswrapper[4865]: I0216 23:05:24.355842 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 16 23:05:24 crc kubenswrapper[4865]: I0216 23:05:24.356241 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 16 23:05:24 crc kubenswrapper[4865]: I0216 23:05:24.371845 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6b5cb7cc4c-8d58d"] Feb 16 23:05:24 crc kubenswrapper[4865]: I0216 23:05:24.537721 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a7143f0f-06af-4d75-960a-2488e9b131bc-log-httpd\") pod \"swift-proxy-6b5cb7cc4c-8d58d\" (UID: \"a7143f0f-06af-4d75-960a-2488e9b131bc\") " pod="openstack/swift-proxy-6b5cb7cc4c-8d58d" Feb 16 23:05:24 crc kubenswrapper[4865]: I0216 23:05:24.538012 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/a7143f0f-06af-4d75-960a-2488e9b131bc-run-httpd\") pod \"swift-proxy-6b5cb7cc4c-8d58d\" (UID: \"a7143f0f-06af-4d75-960a-2488e9b131bc\") " pod="openstack/swift-proxy-6b5cb7cc4c-8d58d" Feb 16 23:05:24 crc kubenswrapper[4865]: I0216 23:05:24.538048 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7143f0f-06af-4d75-960a-2488e9b131bc-internal-tls-certs\") pod \"swift-proxy-6b5cb7cc4c-8d58d\" (UID: \"a7143f0f-06af-4d75-960a-2488e9b131bc\") " pod="openstack/swift-proxy-6b5cb7cc4c-8d58d" Feb 16 23:05:24 crc kubenswrapper[4865]: I0216 23:05:24.538069 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7143f0f-06af-4d75-960a-2488e9b131bc-combined-ca-bundle\") pod \"swift-proxy-6b5cb7cc4c-8d58d\" (UID: \"a7143f0f-06af-4d75-960a-2488e9b131bc\") " pod="openstack/swift-proxy-6b5cb7cc4c-8d58d" Feb 16 23:05:24 crc kubenswrapper[4865]: I0216 23:05:24.538092 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7143f0f-06af-4d75-960a-2488e9b131bc-config-data\") pod \"swift-proxy-6b5cb7cc4c-8d58d\" (UID: \"a7143f0f-06af-4d75-960a-2488e9b131bc\") " pod="openstack/swift-proxy-6b5cb7cc4c-8d58d" Feb 16 23:05:24 crc kubenswrapper[4865]: I0216 23:05:24.538113 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7143f0f-06af-4d75-960a-2488e9b131bc-public-tls-certs\") pod \"swift-proxy-6b5cb7cc4c-8d58d\" (UID: \"a7143f0f-06af-4d75-960a-2488e9b131bc\") " pod="openstack/swift-proxy-6b5cb7cc4c-8d58d" Feb 16 23:05:24 crc kubenswrapper[4865]: I0216 23:05:24.538133 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a7143f0f-06af-4d75-960a-2488e9b131bc-etc-swift\") pod \"swift-proxy-6b5cb7cc4c-8d58d\" (UID: \"a7143f0f-06af-4d75-960a-2488e9b131bc\") " pod="openstack/swift-proxy-6b5cb7cc4c-8d58d" Feb 16 23:05:24 crc kubenswrapper[4865]: I0216 23:05:24.538175 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzq96\" (UniqueName: \"kubernetes.io/projected/a7143f0f-06af-4d75-960a-2488e9b131bc-kube-api-access-nzq96\") pod \"swift-proxy-6b5cb7cc4c-8d58d\" (UID: \"a7143f0f-06af-4d75-960a-2488e9b131bc\") " pod="openstack/swift-proxy-6b5cb7cc4c-8d58d" Feb 16 23:05:24 crc kubenswrapper[4865]: I0216 23:05:24.640212 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a7143f0f-06af-4d75-960a-2488e9b131bc-log-httpd\") pod \"swift-proxy-6b5cb7cc4c-8d58d\" (UID: \"a7143f0f-06af-4d75-960a-2488e9b131bc\") " pod="openstack/swift-proxy-6b5cb7cc4c-8d58d" Feb 16 23:05:24 crc kubenswrapper[4865]: I0216 23:05:24.640311 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a7143f0f-06af-4d75-960a-2488e9b131bc-run-httpd\") pod \"swift-proxy-6b5cb7cc4c-8d58d\" (UID: \"a7143f0f-06af-4d75-960a-2488e9b131bc\") " pod="openstack/swift-proxy-6b5cb7cc4c-8d58d" Feb 16 23:05:24 crc kubenswrapper[4865]: I0216 23:05:24.640344 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7143f0f-06af-4d75-960a-2488e9b131bc-internal-tls-certs\") pod \"swift-proxy-6b5cb7cc4c-8d58d\" (UID: \"a7143f0f-06af-4d75-960a-2488e9b131bc\") " pod="openstack/swift-proxy-6b5cb7cc4c-8d58d" Feb 16 23:05:24 crc kubenswrapper[4865]: I0216 23:05:24.640368 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/a7143f0f-06af-4d75-960a-2488e9b131bc-combined-ca-bundle\") pod \"swift-proxy-6b5cb7cc4c-8d58d\" (UID: \"a7143f0f-06af-4d75-960a-2488e9b131bc\") " pod="openstack/swift-proxy-6b5cb7cc4c-8d58d" Feb 16 23:05:24 crc kubenswrapper[4865]: I0216 23:05:24.640398 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7143f0f-06af-4d75-960a-2488e9b131bc-config-data\") pod \"swift-proxy-6b5cb7cc4c-8d58d\" (UID: \"a7143f0f-06af-4d75-960a-2488e9b131bc\") " pod="openstack/swift-proxy-6b5cb7cc4c-8d58d" Feb 16 23:05:24 crc kubenswrapper[4865]: I0216 23:05:24.640426 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7143f0f-06af-4d75-960a-2488e9b131bc-public-tls-certs\") pod \"swift-proxy-6b5cb7cc4c-8d58d\" (UID: \"a7143f0f-06af-4d75-960a-2488e9b131bc\") " pod="openstack/swift-proxy-6b5cb7cc4c-8d58d" Feb 16 23:05:24 crc kubenswrapper[4865]: I0216 23:05:24.640451 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a7143f0f-06af-4d75-960a-2488e9b131bc-etc-swift\") pod \"swift-proxy-6b5cb7cc4c-8d58d\" (UID: \"a7143f0f-06af-4d75-960a-2488e9b131bc\") " pod="openstack/swift-proxy-6b5cb7cc4c-8d58d" Feb 16 23:05:24 crc kubenswrapper[4865]: I0216 23:05:24.640498 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzq96\" (UniqueName: \"kubernetes.io/projected/a7143f0f-06af-4d75-960a-2488e9b131bc-kube-api-access-nzq96\") pod \"swift-proxy-6b5cb7cc4c-8d58d\" (UID: \"a7143f0f-06af-4d75-960a-2488e9b131bc\") " pod="openstack/swift-proxy-6b5cb7cc4c-8d58d" Feb 16 23:05:24 crc kubenswrapper[4865]: I0216 23:05:24.641439 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/a7143f0f-06af-4d75-960a-2488e9b131bc-log-httpd\") pod \"swift-proxy-6b5cb7cc4c-8d58d\" (UID: \"a7143f0f-06af-4d75-960a-2488e9b131bc\") " pod="openstack/swift-proxy-6b5cb7cc4c-8d58d" Feb 16 23:05:24 crc kubenswrapper[4865]: I0216 23:05:24.641732 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a7143f0f-06af-4d75-960a-2488e9b131bc-run-httpd\") pod \"swift-proxy-6b5cb7cc4c-8d58d\" (UID: \"a7143f0f-06af-4d75-960a-2488e9b131bc\") " pod="openstack/swift-proxy-6b5cb7cc4c-8d58d" Feb 16 23:05:24 crc kubenswrapper[4865]: I0216 23:05:24.648318 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7143f0f-06af-4d75-960a-2488e9b131bc-public-tls-certs\") pod \"swift-proxy-6b5cb7cc4c-8d58d\" (UID: \"a7143f0f-06af-4d75-960a-2488e9b131bc\") " pod="openstack/swift-proxy-6b5cb7cc4c-8d58d" Feb 16 23:05:24 crc kubenswrapper[4865]: I0216 23:05:24.649590 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a7143f0f-06af-4d75-960a-2488e9b131bc-etc-swift\") pod \"swift-proxy-6b5cb7cc4c-8d58d\" (UID: \"a7143f0f-06af-4d75-960a-2488e9b131bc\") " pod="openstack/swift-proxy-6b5cb7cc4c-8d58d" Feb 16 23:05:24 crc kubenswrapper[4865]: I0216 23:05:24.650060 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7143f0f-06af-4d75-960a-2488e9b131bc-internal-tls-certs\") pod \"swift-proxy-6b5cb7cc4c-8d58d\" (UID: \"a7143f0f-06af-4d75-960a-2488e9b131bc\") " pod="openstack/swift-proxy-6b5cb7cc4c-8d58d" Feb 16 23:05:24 crc kubenswrapper[4865]: I0216 23:05:24.651818 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7143f0f-06af-4d75-960a-2488e9b131bc-config-data\") pod \"swift-proxy-6b5cb7cc4c-8d58d\" 
(UID: \"a7143f0f-06af-4d75-960a-2488e9b131bc\") " pod="openstack/swift-proxy-6b5cb7cc4c-8d58d" Feb 16 23:05:24 crc kubenswrapper[4865]: I0216 23:05:24.660474 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7143f0f-06af-4d75-960a-2488e9b131bc-combined-ca-bundle\") pod \"swift-proxy-6b5cb7cc4c-8d58d\" (UID: \"a7143f0f-06af-4d75-960a-2488e9b131bc\") " pod="openstack/swift-proxy-6b5cb7cc4c-8d58d" Feb 16 23:05:24 crc kubenswrapper[4865]: I0216 23:05:24.663223 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzq96\" (UniqueName: \"kubernetes.io/projected/a7143f0f-06af-4d75-960a-2488e9b131bc-kube-api-access-nzq96\") pod \"swift-proxy-6b5cb7cc4c-8d58d\" (UID: \"a7143f0f-06af-4d75-960a-2488e9b131bc\") " pod="openstack/swift-proxy-6b5cb7cc4c-8d58d" Feb 16 23:05:24 crc kubenswrapper[4865]: I0216 23:05:24.733250 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6b5cb7cc4c-8d58d" Feb 16 23:05:25 crc kubenswrapper[4865]: I0216 23:05:25.210107 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7f77f74cd-fjwcd" podUID="bafaf37c-b943-45c1-9a6f-3b3642a9471c" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.165:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 23:05:25 crc kubenswrapper[4865]: I0216 23:05:25.210176 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7f77f74cd-fjwcd" podUID="bafaf37c-b943-45c1-9a6f-3b3642a9471c" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.165:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 23:05:25 crc kubenswrapper[4865]: I0216 23:05:25.348354 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 
16 23:05:25 crc kubenswrapper[4865]: I0216 23:05:25.348899 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6482e483-94d0-4ad1-9893-2dba5d006def" containerName="ceilometer-central-agent" containerID="cri-o://b3f140d521598886fc77d9042b317f3d7b0c90b578fe789b1a3567c8a774e1e7" gracePeriod=30 Feb 16 23:05:25 crc kubenswrapper[4865]: I0216 23:05:25.349025 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6482e483-94d0-4ad1-9893-2dba5d006def" containerName="proxy-httpd" containerID="cri-o://3292a7a1840a5a37f62b68eb283043567581fc03c57c17aeaf3480e5efb1fca8" gracePeriod=30 Feb 16 23:05:25 crc kubenswrapper[4865]: I0216 23:05:25.349066 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6482e483-94d0-4ad1-9893-2dba5d006def" containerName="sg-core" containerID="cri-o://12b956c3e0b770f84fd66d1523afae9683cb23fdf2697a095dae25f90d47b79d" gracePeriod=30 Feb 16 23:05:25 crc kubenswrapper[4865]: I0216 23:05:25.349098 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6482e483-94d0-4ad1-9893-2dba5d006def" containerName="ceilometer-notification-agent" containerID="cri-o://73f69acc01c940e785cc2e29b00b83dbfbe2f79c5a7893b5582c36c56e57b007" gracePeriod=30 Feb 16 23:05:25 crc kubenswrapper[4865]: I0216 23:05:25.369129 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 16 23:05:25 crc kubenswrapper[4865]: I0216 23:05:25.426176 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6b5cb7cc4c-8d58d"] Feb 16 23:05:26 crc kubenswrapper[4865]: I0216 23:05:26.298024 4865 generic.go:334] "Generic (PLEG): container finished" podID="6482e483-94d0-4ad1-9893-2dba5d006def" containerID="3292a7a1840a5a37f62b68eb283043567581fc03c57c17aeaf3480e5efb1fca8" exitCode=0 Feb 16 23:05:26 
crc kubenswrapper[4865]: I0216 23:05:26.298439 4865 generic.go:334] "Generic (PLEG): container finished" podID="6482e483-94d0-4ad1-9893-2dba5d006def" containerID="12b956c3e0b770f84fd66d1523afae9683cb23fdf2697a095dae25f90d47b79d" exitCode=2 Feb 16 23:05:26 crc kubenswrapper[4865]: I0216 23:05:26.298447 4865 generic.go:334] "Generic (PLEG): container finished" podID="6482e483-94d0-4ad1-9893-2dba5d006def" containerID="b3f140d521598886fc77d9042b317f3d7b0c90b578fe789b1a3567c8a774e1e7" exitCode=0 Feb 16 23:05:26 crc kubenswrapper[4865]: I0216 23:05:26.298464 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6482e483-94d0-4ad1-9893-2dba5d006def","Type":"ContainerDied","Data":"3292a7a1840a5a37f62b68eb283043567581fc03c57c17aeaf3480e5efb1fca8"} Feb 16 23:05:26 crc kubenswrapper[4865]: I0216 23:05:26.298516 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6482e483-94d0-4ad1-9893-2dba5d006def","Type":"ContainerDied","Data":"12b956c3e0b770f84fd66d1523afae9683cb23fdf2697a095dae25f90d47b79d"} Feb 16 23:05:26 crc kubenswrapper[4865]: I0216 23:05:26.298527 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6482e483-94d0-4ad1-9893-2dba5d006def","Type":"ContainerDied","Data":"b3f140d521598886fc77d9042b317f3d7b0c90b578fe789b1a3567c8a774e1e7"} Feb 16 23:05:26 crc kubenswrapper[4865]: I0216 23:05:26.300384 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-hzhxn"] Feb 16 23:05:26 crc kubenswrapper[4865]: I0216 23:05:26.301855 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-hzhxn" Feb 16 23:05:26 crc kubenswrapper[4865]: I0216 23:05:26.305247 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6b5cb7cc4c-8d58d" event={"ID":"a7143f0f-06af-4d75-960a-2488e9b131bc","Type":"ContainerStarted","Data":"998685c73b9841a631ecbbd960afeac1ed05064c9aea600c300b9864da2ebfe1"} Feb 16 23:05:26 crc kubenswrapper[4865]: I0216 23:05:26.305304 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6b5cb7cc4c-8d58d" event={"ID":"a7143f0f-06af-4d75-960a-2488e9b131bc","Type":"ContainerStarted","Data":"f48e8127912f4528c9d20e5401d9008279c8f22b5c3ba68e6faa8e8e6a852aef"} Feb 16 23:05:26 crc kubenswrapper[4865]: I0216 23:05:26.305315 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6b5cb7cc4c-8d58d" event={"ID":"a7143f0f-06af-4d75-960a-2488e9b131bc","Type":"ContainerStarted","Data":"88b90bb30b5c84b85da877fd9189d95a2dea22ade7901e0d78bef1c9948f7ace"} Feb 16 23:05:26 crc kubenswrapper[4865]: I0216 23:05:26.305597 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6b5cb7cc4c-8d58d" Feb 16 23:05:26 crc kubenswrapper[4865]: I0216 23:05:26.334535 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-hzhxn"] Feb 16 23:05:26 crc kubenswrapper[4865]: I0216 23:05:26.348526 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-6b5cb7cc4c-8d58d" podStartSLOduration=2.348509925 podStartE2EDuration="2.348509925s" podCreationTimestamp="2026-02-16 23:05:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 23:05:26.342404021 +0000 UTC m=+1166.666111002" watchObservedRunningTime="2026-02-16 23:05:26.348509925 +0000 UTC m=+1166.672216886" Feb 16 23:05:26 crc kubenswrapper[4865]: I0216 23:05:26.380441 4865 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5g6px\" (UniqueName: \"kubernetes.io/projected/f3649fa7-8af6-4f1a-9bc1-b5c10d874d4a-kube-api-access-5g6px\") pod \"nova-api-db-create-hzhxn\" (UID: \"f3649fa7-8af6-4f1a-9bc1-b5c10d874d4a\") " pod="openstack/nova-api-db-create-hzhxn" Feb 16 23:05:26 crc kubenswrapper[4865]: I0216 23:05:26.380570 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3649fa7-8af6-4f1a-9bc1-b5c10d874d4a-operator-scripts\") pod \"nova-api-db-create-hzhxn\" (UID: \"f3649fa7-8af6-4f1a-9bc1-b5c10d874d4a\") " pod="openstack/nova-api-db-create-hzhxn" Feb 16 23:05:26 crc kubenswrapper[4865]: I0216 23:05:26.406483 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-szrzm"] Feb 16 23:05:26 crc kubenswrapper[4865]: I0216 23:05:26.407651 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-szrzm" Feb 16 23:05:26 crc kubenswrapper[4865]: I0216 23:05:26.439510 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-szrzm"] Feb 16 23:05:26 crc kubenswrapper[4865]: I0216 23:05:26.446864 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-993e-account-create-update-dfp4n"] Feb 16 23:05:26 crc kubenswrapper[4865]: I0216 23:05:26.447986 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-993e-account-create-update-dfp4n" Feb 16 23:05:26 crc kubenswrapper[4865]: I0216 23:05:26.450318 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 16 23:05:26 crc kubenswrapper[4865]: I0216 23:05:26.457468 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-993e-account-create-update-dfp4n"] Feb 16 23:05:26 crc kubenswrapper[4865]: I0216 23:05:26.482023 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3649fa7-8af6-4f1a-9bc1-b5c10d874d4a-operator-scripts\") pod \"nova-api-db-create-hzhxn\" (UID: \"f3649fa7-8af6-4f1a-9bc1-b5c10d874d4a\") " pod="openstack/nova-api-db-create-hzhxn" Feb 16 23:05:26 crc kubenswrapper[4865]: I0216 23:05:26.483118 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5g6px\" (UniqueName: \"kubernetes.io/projected/f3649fa7-8af6-4f1a-9bc1-b5c10d874d4a-kube-api-access-5g6px\") pod \"nova-api-db-create-hzhxn\" (UID: \"f3649fa7-8af6-4f1a-9bc1-b5c10d874d4a\") " pod="openstack/nova-api-db-create-hzhxn" Feb 16 23:05:26 crc kubenswrapper[4865]: I0216 23:05:26.488485 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3649fa7-8af6-4f1a-9bc1-b5c10d874d4a-operator-scripts\") pod \"nova-api-db-create-hzhxn\" (UID: \"f3649fa7-8af6-4f1a-9bc1-b5c10d874d4a\") " pod="openstack/nova-api-db-create-hzhxn" Feb 16 23:05:26 crc kubenswrapper[4865]: I0216 23:05:26.515334 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-ss76d"] Feb 16 23:05:26 crc kubenswrapper[4865]: I0216 23:05:26.516499 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5g6px\" (UniqueName: 
\"kubernetes.io/projected/f3649fa7-8af6-4f1a-9bc1-b5c10d874d4a-kube-api-access-5g6px\") pod \"nova-api-db-create-hzhxn\" (UID: \"f3649fa7-8af6-4f1a-9bc1-b5c10d874d4a\") " pod="openstack/nova-api-db-create-hzhxn" Feb 16 23:05:26 crc kubenswrapper[4865]: I0216 23:05:26.517094 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-ss76d" Feb 16 23:05:26 crc kubenswrapper[4865]: I0216 23:05:26.524365 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-ss76d"] Feb 16 23:05:26 crc kubenswrapper[4865]: I0216 23:05:26.591321 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/adff2e0d-89a3-423c-8e83-a16b64c67a82-operator-scripts\") pod \"nova-api-993e-account-create-update-dfp4n\" (UID: \"adff2e0d-89a3-423c-8e83-a16b64c67a82\") " pod="openstack/nova-api-993e-account-create-update-dfp4n" Feb 16 23:05:26 crc kubenswrapper[4865]: I0216 23:05:26.591483 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e310dc7-ddd2-4a29-97a2-b071095d9966-operator-scripts\") pod \"nova-cell0-db-create-szrzm\" (UID: \"0e310dc7-ddd2-4a29-97a2-b071095d9966\") " pod="openstack/nova-cell0-db-create-szrzm" Feb 16 23:05:26 crc kubenswrapper[4865]: I0216 23:05:26.591511 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpspq\" (UniqueName: \"kubernetes.io/projected/0e310dc7-ddd2-4a29-97a2-b071095d9966-kube-api-access-cpspq\") pod \"nova-cell0-db-create-szrzm\" (UID: \"0e310dc7-ddd2-4a29-97a2-b071095d9966\") " pod="openstack/nova-cell0-db-create-szrzm" Feb 16 23:05:26 crc kubenswrapper[4865]: I0216 23:05:26.591548 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-r2m98\" (UniqueName: \"kubernetes.io/projected/adff2e0d-89a3-423c-8e83-a16b64c67a82-kube-api-access-r2m98\") pod \"nova-api-993e-account-create-update-dfp4n\" (UID: \"adff2e0d-89a3-423c-8e83-a16b64c67a82\") " pod="openstack/nova-api-993e-account-create-update-dfp4n" Feb 16 23:05:26 crc kubenswrapper[4865]: I0216 23:05:26.597997 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-5de9-account-create-update-q7nmp"] Feb 16 23:05:26 crc kubenswrapper[4865]: I0216 23:05:26.599220 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-5de9-account-create-update-q7nmp" Feb 16 23:05:26 crc kubenswrapper[4865]: I0216 23:05:26.633204 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 16 23:05:26 crc kubenswrapper[4865]: I0216 23:05:26.644384 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-5de9-account-create-update-q7nmp"] Feb 16 23:05:26 crc kubenswrapper[4865]: I0216 23:05:26.649132 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-hzhxn" Feb 16 23:05:26 crc kubenswrapper[4865]: I0216 23:05:26.692997 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/adff2e0d-89a3-423c-8e83-a16b64c67a82-operator-scripts\") pod \"nova-api-993e-account-create-update-dfp4n\" (UID: \"adff2e0d-89a3-423c-8e83-a16b64c67a82\") " pod="openstack/nova-api-993e-account-create-update-dfp4n" Feb 16 23:05:26 crc kubenswrapper[4865]: I0216 23:05:26.693202 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5jzc\" (UniqueName: \"kubernetes.io/projected/5ca4e99c-e5c8-49f6-bf1b-dd6b73576b54-kube-api-access-h5jzc\") pod \"nova-cell1-db-create-ss76d\" (UID: \"5ca4e99c-e5c8-49f6-bf1b-dd6b73576b54\") " pod="openstack/nova-cell1-db-create-ss76d" Feb 16 23:05:26 crc kubenswrapper[4865]: I0216 23:05:26.693249 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e310dc7-ddd2-4a29-97a2-b071095d9966-operator-scripts\") pod \"nova-cell0-db-create-szrzm\" (UID: \"0e310dc7-ddd2-4a29-97a2-b071095d9966\") " pod="openstack/nova-cell0-db-create-szrzm" Feb 16 23:05:26 crc kubenswrapper[4865]: I0216 23:05:26.693296 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpspq\" (UniqueName: \"kubernetes.io/projected/0e310dc7-ddd2-4a29-97a2-b071095d9966-kube-api-access-cpspq\") pod \"nova-cell0-db-create-szrzm\" (UID: \"0e310dc7-ddd2-4a29-97a2-b071095d9966\") " pod="openstack/nova-cell0-db-create-szrzm" Feb 16 23:05:26 crc kubenswrapper[4865]: I0216 23:05:26.693336 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66grr\" (UniqueName: \"kubernetes.io/projected/35227f0d-ff5a-4c4c-a160-35a7743d4ca2-kube-api-access-66grr\") pod 
\"nova-cell0-5de9-account-create-update-q7nmp\" (UID: \"35227f0d-ff5a-4c4c-a160-35a7743d4ca2\") " pod="openstack/nova-cell0-5de9-account-create-update-q7nmp" Feb 16 23:05:26 crc kubenswrapper[4865]: I0216 23:05:26.693368 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35227f0d-ff5a-4c4c-a160-35a7743d4ca2-operator-scripts\") pod \"nova-cell0-5de9-account-create-update-q7nmp\" (UID: \"35227f0d-ff5a-4c4c-a160-35a7743d4ca2\") " pod="openstack/nova-cell0-5de9-account-create-update-q7nmp" Feb 16 23:05:26 crc kubenswrapper[4865]: I0216 23:05:26.693395 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ca4e99c-e5c8-49f6-bf1b-dd6b73576b54-operator-scripts\") pod \"nova-cell1-db-create-ss76d\" (UID: \"5ca4e99c-e5c8-49f6-bf1b-dd6b73576b54\") " pod="openstack/nova-cell1-db-create-ss76d" Feb 16 23:05:26 crc kubenswrapper[4865]: I0216 23:05:26.693425 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2m98\" (UniqueName: \"kubernetes.io/projected/adff2e0d-89a3-423c-8e83-a16b64c67a82-kube-api-access-r2m98\") pod \"nova-api-993e-account-create-update-dfp4n\" (UID: \"adff2e0d-89a3-423c-8e83-a16b64c67a82\") " pod="openstack/nova-api-993e-account-create-update-dfp4n" Feb 16 23:05:26 crc kubenswrapper[4865]: I0216 23:05:26.694113 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/adff2e0d-89a3-423c-8e83-a16b64c67a82-operator-scripts\") pod \"nova-api-993e-account-create-update-dfp4n\" (UID: \"adff2e0d-89a3-423c-8e83-a16b64c67a82\") " pod="openstack/nova-api-993e-account-create-update-dfp4n" Feb 16 23:05:26 crc kubenswrapper[4865]: I0216 23:05:26.694554 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e310dc7-ddd2-4a29-97a2-b071095d9966-operator-scripts\") pod \"nova-cell0-db-create-szrzm\" (UID: \"0e310dc7-ddd2-4a29-97a2-b071095d9966\") " pod="openstack/nova-cell0-db-create-szrzm" Feb 16 23:05:26 crc kubenswrapper[4865]: I0216 23:05:26.711768 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpspq\" (UniqueName: \"kubernetes.io/projected/0e310dc7-ddd2-4a29-97a2-b071095d9966-kube-api-access-cpspq\") pod \"nova-cell0-db-create-szrzm\" (UID: \"0e310dc7-ddd2-4a29-97a2-b071095d9966\") " pod="openstack/nova-cell0-db-create-szrzm" Feb 16 23:05:26 crc kubenswrapper[4865]: I0216 23:05:26.712866 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2m98\" (UniqueName: \"kubernetes.io/projected/adff2e0d-89a3-423c-8e83-a16b64c67a82-kube-api-access-r2m98\") pod \"nova-api-993e-account-create-update-dfp4n\" (UID: \"adff2e0d-89a3-423c-8e83-a16b64c67a82\") " pod="openstack/nova-api-993e-account-create-update-dfp4n" Feb 16 23:05:26 crc kubenswrapper[4865]: I0216 23:05:26.733537 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-szrzm" Feb 16 23:05:26 crc kubenswrapper[4865]: I0216 23:05:26.795547 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5jzc\" (UniqueName: \"kubernetes.io/projected/5ca4e99c-e5c8-49f6-bf1b-dd6b73576b54-kube-api-access-h5jzc\") pod \"nova-cell1-db-create-ss76d\" (UID: \"5ca4e99c-e5c8-49f6-bf1b-dd6b73576b54\") " pod="openstack/nova-cell1-db-create-ss76d" Feb 16 23:05:26 crc kubenswrapper[4865]: I0216 23:05:26.795601 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66grr\" (UniqueName: \"kubernetes.io/projected/35227f0d-ff5a-4c4c-a160-35a7743d4ca2-kube-api-access-66grr\") pod \"nova-cell0-5de9-account-create-update-q7nmp\" (UID: \"35227f0d-ff5a-4c4c-a160-35a7743d4ca2\") " pod="openstack/nova-cell0-5de9-account-create-update-q7nmp" Feb 16 23:05:26 crc kubenswrapper[4865]: I0216 23:05:26.795622 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ca4e99c-e5c8-49f6-bf1b-dd6b73576b54-operator-scripts\") pod \"nova-cell1-db-create-ss76d\" (UID: \"5ca4e99c-e5c8-49f6-bf1b-dd6b73576b54\") " pod="openstack/nova-cell1-db-create-ss76d" Feb 16 23:05:26 crc kubenswrapper[4865]: I0216 23:05:26.795640 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35227f0d-ff5a-4c4c-a160-35a7743d4ca2-operator-scripts\") pod \"nova-cell0-5de9-account-create-update-q7nmp\" (UID: \"35227f0d-ff5a-4c4c-a160-35a7743d4ca2\") " pod="openstack/nova-cell0-5de9-account-create-update-q7nmp" Feb 16 23:05:26 crc kubenswrapper[4865]: I0216 23:05:26.796875 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35227f0d-ff5a-4c4c-a160-35a7743d4ca2-operator-scripts\") pod 
\"nova-cell0-5de9-account-create-update-q7nmp\" (UID: \"35227f0d-ff5a-4c4c-a160-35a7743d4ca2\") " pod="openstack/nova-cell0-5de9-account-create-update-q7nmp" Feb 16 23:05:26 crc kubenswrapper[4865]: I0216 23:05:26.797451 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ca4e99c-e5c8-49f6-bf1b-dd6b73576b54-operator-scripts\") pod \"nova-cell1-db-create-ss76d\" (UID: \"5ca4e99c-e5c8-49f6-bf1b-dd6b73576b54\") " pod="openstack/nova-cell1-db-create-ss76d" Feb 16 23:05:26 crc kubenswrapper[4865]: I0216 23:05:26.798679 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-993e-account-create-update-dfp4n" Feb 16 23:05:26 crc kubenswrapper[4865]: I0216 23:05:26.802072 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-989f-account-create-update-l8qhk"] Feb 16 23:05:26 crc kubenswrapper[4865]: I0216 23:05:26.803216 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-989f-account-create-update-l8qhk" Feb 16 23:05:26 crc kubenswrapper[4865]: I0216 23:05:26.805170 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 16 23:05:26 crc kubenswrapper[4865]: I0216 23:05:26.811780 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-989f-account-create-update-l8qhk"] Feb 16 23:05:26 crc kubenswrapper[4865]: I0216 23:05:26.820663 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66grr\" (UniqueName: \"kubernetes.io/projected/35227f0d-ff5a-4c4c-a160-35a7743d4ca2-kube-api-access-66grr\") pod \"nova-cell0-5de9-account-create-update-q7nmp\" (UID: \"35227f0d-ff5a-4c4c-a160-35a7743d4ca2\") " pod="openstack/nova-cell0-5de9-account-create-update-q7nmp" Feb 16 23:05:26 crc kubenswrapper[4865]: I0216 23:05:26.829244 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5jzc\" (UniqueName: \"kubernetes.io/projected/5ca4e99c-e5c8-49f6-bf1b-dd6b73576b54-kube-api-access-h5jzc\") pod \"nova-cell1-db-create-ss76d\" (UID: \"5ca4e99c-e5c8-49f6-bf1b-dd6b73576b54\") " pod="openstack/nova-cell1-db-create-ss76d" Feb 16 23:05:26 crc kubenswrapper[4865]: I0216 23:05:26.873589 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-ss76d" Feb 16 23:05:26 crc kubenswrapper[4865]: I0216 23:05:26.897427 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kvtz\" (UniqueName: \"kubernetes.io/projected/d03aa28b-05a9-4123-a616-c1713e81c63c-kube-api-access-4kvtz\") pod \"nova-cell1-989f-account-create-update-l8qhk\" (UID: \"d03aa28b-05a9-4123-a616-c1713e81c63c\") " pod="openstack/nova-cell1-989f-account-create-update-l8qhk" Feb 16 23:05:26 crc kubenswrapper[4865]: I0216 23:05:26.897548 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d03aa28b-05a9-4123-a616-c1713e81c63c-operator-scripts\") pod \"nova-cell1-989f-account-create-update-l8qhk\" (UID: \"d03aa28b-05a9-4123-a616-c1713e81c63c\") " pod="openstack/nova-cell1-989f-account-create-update-l8qhk" Feb 16 23:05:26 crc kubenswrapper[4865]: I0216 23:05:26.922539 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-5de9-account-create-update-q7nmp" Feb 16 23:05:26 crc kubenswrapper[4865]: I0216 23:05:26.999449 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d03aa28b-05a9-4123-a616-c1713e81c63c-operator-scripts\") pod \"nova-cell1-989f-account-create-update-l8qhk\" (UID: \"d03aa28b-05a9-4123-a616-c1713e81c63c\") " pod="openstack/nova-cell1-989f-account-create-update-l8qhk" Feb 16 23:05:26 crc kubenswrapper[4865]: I0216 23:05:26.999568 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kvtz\" (UniqueName: \"kubernetes.io/projected/d03aa28b-05a9-4123-a616-c1713e81c63c-kube-api-access-4kvtz\") pod \"nova-cell1-989f-account-create-update-l8qhk\" (UID: \"d03aa28b-05a9-4123-a616-c1713e81c63c\") " pod="openstack/nova-cell1-989f-account-create-update-l8qhk" Feb 16 23:05:27 crc kubenswrapper[4865]: I0216 23:05:27.000196 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d03aa28b-05a9-4123-a616-c1713e81c63c-operator-scripts\") pod \"nova-cell1-989f-account-create-update-l8qhk\" (UID: \"d03aa28b-05a9-4123-a616-c1713e81c63c\") " pod="openstack/nova-cell1-989f-account-create-update-l8qhk" Feb 16 23:05:27 crc kubenswrapper[4865]: I0216 23:05:27.016450 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kvtz\" (UniqueName: \"kubernetes.io/projected/d03aa28b-05a9-4123-a616-c1713e81c63c-kube-api-access-4kvtz\") pod \"nova-cell1-989f-account-create-update-l8qhk\" (UID: \"d03aa28b-05a9-4123-a616-c1713e81c63c\") " pod="openstack/nova-cell1-989f-account-create-update-l8qhk" Feb 16 23:05:27 crc kubenswrapper[4865]: I0216 23:05:27.215385 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-989f-account-create-update-l8qhk" Feb 16 23:05:27 crc kubenswrapper[4865]: I0216 23:05:27.313172 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6b5cb7cc4c-8d58d" Feb 16 23:05:30 crc kubenswrapper[4865]: I0216 23:05:30.350245 4865 generic.go:334] "Generic (PLEG): container finished" podID="6482e483-94d0-4ad1-9893-2dba5d006def" containerID="73f69acc01c940e785cc2e29b00b83dbfbe2f79c5a7893b5582c36c56e57b007" exitCode=0 Feb 16 23:05:30 crc kubenswrapper[4865]: I0216 23:05:30.350380 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6482e483-94d0-4ad1-9893-2dba5d006def","Type":"ContainerDied","Data":"73f69acc01c940e785cc2e29b00b83dbfbe2f79c5a7893b5582c36c56e57b007"} Feb 16 23:05:31 crc kubenswrapper[4865]: I0216 23:05:31.482370 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5dbb7f8956-m76fk" podUID="cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Feb 16 23:05:32 crc kubenswrapper[4865]: I0216 23:05:32.182984 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 23:05:32 crc kubenswrapper[4865]: I0216 23:05:32.334029 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6482e483-94d0-4ad1-9893-2dba5d006def-log-httpd\") pod \"6482e483-94d0-4ad1-9893-2dba5d006def\" (UID: \"6482e483-94d0-4ad1-9893-2dba5d006def\") " Feb 16 23:05:32 crc kubenswrapper[4865]: I0216 23:05:32.334306 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zd45v\" (UniqueName: \"kubernetes.io/projected/6482e483-94d0-4ad1-9893-2dba5d006def-kube-api-access-zd45v\") pod \"6482e483-94d0-4ad1-9893-2dba5d006def\" (UID: \"6482e483-94d0-4ad1-9893-2dba5d006def\") " Feb 16 23:05:32 crc kubenswrapper[4865]: I0216 23:05:32.334375 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6482e483-94d0-4ad1-9893-2dba5d006def-scripts\") pod \"6482e483-94d0-4ad1-9893-2dba5d006def\" (UID: \"6482e483-94d0-4ad1-9893-2dba5d006def\") " Feb 16 23:05:32 crc kubenswrapper[4865]: I0216 23:05:32.334416 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6482e483-94d0-4ad1-9893-2dba5d006def-run-httpd\") pod \"6482e483-94d0-4ad1-9893-2dba5d006def\" (UID: \"6482e483-94d0-4ad1-9893-2dba5d006def\") " Feb 16 23:05:32 crc kubenswrapper[4865]: I0216 23:05:32.334444 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6482e483-94d0-4ad1-9893-2dba5d006def-sg-core-conf-yaml\") pod \"6482e483-94d0-4ad1-9893-2dba5d006def\" (UID: \"6482e483-94d0-4ad1-9893-2dba5d006def\") " Feb 16 23:05:32 crc kubenswrapper[4865]: I0216 23:05:32.334693 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/6482e483-94d0-4ad1-9893-2dba5d006def-config-data\") pod \"6482e483-94d0-4ad1-9893-2dba5d006def\" (UID: \"6482e483-94d0-4ad1-9893-2dba5d006def\") " Feb 16 23:05:32 crc kubenswrapper[4865]: I0216 23:05:32.334831 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6482e483-94d0-4ad1-9893-2dba5d006def-combined-ca-bundle\") pod \"6482e483-94d0-4ad1-9893-2dba5d006def\" (UID: \"6482e483-94d0-4ad1-9893-2dba5d006def\") " Feb 16 23:05:32 crc kubenswrapper[4865]: I0216 23:05:32.334825 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6482e483-94d0-4ad1-9893-2dba5d006def-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6482e483-94d0-4ad1-9893-2dba5d006def" (UID: "6482e483-94d0-4ad1-9893-2dba5d006def"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:05:32 crc kubenswrapper[4865]: I0216 23:05:32.338079 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6482e483-94d0-4ad1-9893-2dba5d006def-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6482e483-94d0-4ad1-9893-2dba5d006def" (UID: "6482e483-94d0-4ad1-9893-2dba5d006def"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:05:32 crc kubenswrapper[4865]: I0216 23:05:32.341072 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6482e483-94d0-4ad1-9893-2dba5d006def-scripts" (OuterVolumeSpecName: "scripts") pod "6482e483-94d0-4ad1-9893-2dba5d006def" (UID: "6482e483-94d0-4ad1-9893-2dba5d006def"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:05:32 crc kubenswrapper[4865]: I0216 23:05:32.341485 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6482e483-94d0-4ad1-9893-2dba5d006def-kube-api-access-zd45v" (OuterVolumeSpecName: "kube-api-access-zd45v") pod "6482e483-94d0-4ad1-9893-2dba5d006def" (UID: "6482e483-94d0-4ad1-9893-2dba5d006def"). InnerVolumeSpecName "kube-api-access-zd45v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:05:32 crc kubenswrapper[4865]: I0216 23:05:32.385399 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6482e483-94d0-4ad1-9893-2dba5d006def-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6482e483-94d0-4ad1-9893-2dba5d006def" (UID: "6482e483-94d0-4ad1-9893-2dba5d006def"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:05:32 crc kubenswrapper[4865]: I0216 23:05:32.387831 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6482e483-94d0-4ad1-9893-2dba5d006def","Type":"ContainerDied","Data":"0bc6ec39a68b03ab090891c1e8767244b7785c6a8ba1f611baa7b673deb72d85"} Feb 16 23:05:32 crc kubenswrapper[4865]: I0216 23:05:32.388077 4865 scope.go:117] "RemoveContainer" containerID="3292a7a1840a5a37f62b68eb283043567581fc03c57c17aeaf3480e5efb1fca8" Feb 16 23:05:32 crc kubenswrapper[4865]: I0216 23:05:32.388220 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 23:05:32 crc kubenswrapper[4865]: I0216 23:05:32.438697 4865 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6482e483-94d0-4ad1-9893-2dba5d006def-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:32 crc kubenswrapper[4865]: I0216 23:05:32.438728 4865 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6482e483-94d0-4ad1-9893-2dba5d006def-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:32 crc kubenswrapper[4865]: I0216 23:05:32.438738 4865 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6482e483-94d0-4ad1-9893-2dba5d006def-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:32 crc kubenswrapper[4865]: I0216 23:05:32.438747 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zd45v\" (UniqueName: \"kubernetes.io/projected/6482e483-94d0-4ad1-9893-2dba5d006def-kube-api-access-zd45v\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:32 crc kubenswrapper[4865]: I0216 23:05:32.438756 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6482e483-94d0-4ad1-9893-2dba5d006def-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:32 crc kubenswrapper[4865]: I0216 23:05:32.462469 4865 scope.go:117] "RemoveContainer" containerID="12b956c3e0b770f84fd66d1523afae9683cb23fdf2697a095dae25f90d47b79d" Feb 16 23:05:32 crc kubenswrapper[4865]: I0216 23:05:32.471958 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6482e483-94d0-4ad1-9893-2dba5d006def-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6482e483-94d0-4ad1-9893-2dba5d006def" (UID: "6482e483-94d0-4ad1-9893-2dba5d006def"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:05:32 crc kubenswrapper[4865]: I0216 23:05:32.487405 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6482e483-94d0-4ad1-9893-2dba5d006def-config-data" (OuterVolumeSpecName: "config-data") pod "6482e483-94d0-4ad1-9893-2dba5d006def" (UID: "6482e483-94d0-4ad1-9893-2dba5d006def"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:05:32 crc kubenswrapper[4865]: I0216 23:05:32.488561 4865 scope.go:117] "RemoveContainer" containerID="73f69acc01c940e785cc2e29b00b83dbfbe2f79c5a7893b5582c36c56e57b007" Feb 16 23:05:32 crc kubenswrapper[4865]: I0216 23:05:32.541006 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6482e483-94d0-4ad1-9893-2dba5d006def-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:32 crc kubenswrapper[4865]: I0216 23:05:32.541033 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6482e483-94d0-4ad1-9893-2dba5d006def-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:32 crc kubenswrapper[4865]: I0216 23:05:32.543364 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-hzhxn"] Feb 16 23:05:32 crc kubenswrapper[4865]: I0216 23:05:32.555597 4865 scope.go:117] "RemoveContainer" containerID="b3f140d521598886fc77d9042b317f3d7b0c90b578fe789b1a3567c8a774e1e7" Feb 16 23:05:32 crc kubenswrapper[4865]: W0216 23:05:32.563673 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf3649fa7_8af6_4f1a_9bc1_b5c10d874d4a.slice/crio-3853a619e362779dc38a34aa8bd514dd0003d9e95f664178ed3421bb673fcc29 WatchSource:0}: Error finding container 3853a619e362779dc38a34aa8bd514dd0003d9e95f664178ed3421bb673fcc29: Status 404 returned error can't find the container 
with id 3853a619e362779dc38a34aa8bd514dd0003d9e95f664178ed3421bb673fcc29 Feb 16 23:05:32 crc kubenswrapper[4865]: I0216 23:05:32.692345 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-5de9-account-create-update-q7nmp"] Feb 16 23:05:32 crc kubenswrapper[4865]: W0216 23:05:32.710585 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod35227f0d_ff5a_4c4c_a160_35a7743d4ca2.slice/crio-190e89f8504787178465e72186965069ddb7399e2b77c2958527f20ce1d98b77 WatchSource:0}: Error finding container 190e89f8504787178465e72186965069ddb7399e2b77c2958527f20ce1d98b77: Status 404 returned error can't find the container with id 190e89f8504787178465e72186965069ddb7399e2b77c2958527f20ce1d98b77 Feb 16 23:05:32 crc kubenswrapper[4865]: I0216 23:05:32.907345 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 23:05:32 crc kubenswrapper[4865]: I0216 23:05:32.922320 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 23:05:32 crc kubenswrapper[4865]: I0216 23:05:32.949392 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-993e-account-create-update-dfp4n"] Feb 16 23:05:32 crc kubenswrapper[4865]: I0216 23:05:32.959156 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-989f-account-create-update-l8qhk"] Feb 16 23:05:32 crc kubenswrapper[4865]: W0216 23:05:32.966905 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5ca4e99c_e5c8_49f6_bf1b_dd6b73576b54.slice/crio-b9813489ded744ff13731c96a2488c415a430fb43a494c249a560c1499bfd611 WatchSource:0}: Error finding container b9813489ded744ff13731c96a2488c415a430fb43a494c249a560c1499bfd611: Status 404 returned error can't find the container with id b9813489ded744ff13731c96a2488c415a430fb43a494c249a560c1499bfd611 Feb 16 
23:05:32 crc kubenswrapper[4865]: I0216 23:05:32.968228 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-szrzm"] Feb 16 23:05:32 crc kubenswrapper[4865]: I0216 23:05:32.978441 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 23:05:32 crc kubenswrapper[4865]: E0216 23:05:32.978867 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6482e483-94d0-4ad1-9893-2dba5d006def" containerName="sg-core" Feb 16 23:05:32 crc kubenswrapper[4865]: I0216 23:05:32.978918 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="6482e483-94d0-4ad1-9893-2dba5d006def" containerName="sg-core" Feb 16 23:05:32 crc kubenswrapper[4865]: E0216 23:05:32.978938 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6482e483-94d0-4ad1-9893-2dba5d006def" containerName="ceilometer-notification-agent" Feb 16 23:05:32 crc kubenswrapper[4865]: I0216 23:05:32.978944 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="6482e483-94d0-4ad1-9893-2dba5d006def" containerName="ceilometer-notification-agent" Feb 16 23:05:32 crc kubenswrapper[4865]: E0216 23:05:32.978972 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6482e483-94d0-4ad1-9893-2dba5d006def" containerName="proxy-httpd" Feb 16 23:05:32 crc kubenswrapper[4865]: I0216 23:05:32.978979 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="6482e483-94d0-4ad1-9893-2dba5d006def" containerName="proxy-httpd" Feb 16 23:05:32 crc kubenswrapper[4865]: E0216 23:05:32.978990 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6482e483-94d0-4ad1-9893-2dba5d006def" containerName="ceilometer-central-agent" Feb 16 23:05:32 crc kubenswrapper[4865]: I0216 23:05:32.978996 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="6482e483-94d0-4ad1-9893-2dba5d006def" containerName="ceilometer-central-agent" Feb 16 23:05:32 crc kubenswrapper[4865]: I0216 23:05:32.979182 4865 
memory_manager.go:354] "RemoveStaleState removing state" podUID="6482e483-94d0-4ad1-9893-2dba5d006def" containerName="sg-core" Feb 16 23:05:32 crc kubenswrapper[4865]: I0216 23:05:32.979195 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="6482e483-94d0-4ad1-9893-2dba5d006def" containerName="ceilometer-central-agent" Feb 16 23:05:32 crc kubenswrapper[4865]: I0216 23:05:32.979208 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="6482e483-94d0-4ad1-9893-2dba5d006def" containerName="proxy-httpd" Feb 16 23:05:32 crc kubenswrapper[4865]: I0216 23:05:32.979229 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="6482e483-94d0-4ad1-9893-2dba5d006def" containerName="ceilometer-notification-agent" Feb 16 23:05:32 crc kubenswrapper[4865]: I0216 23:05:32.982094 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 23:05:32 crc kubenswrapper[4865]: I0216 23:05:32.985048 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 23:05:32 crc kubenswrapper[4865]: I0216 23:05:32.986101 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-ss76d"] Feb 16 23:05:32 crc kubenswrapper[4865]: I0216 23:05:32.986379 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 23:05:33 crc kubenswrapper[4865]: I0216 23:05:32.993397 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 23:05:33 crc kubenswrapper[4865]: I0216 23:05:33.052343 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63a51531-7a38-4696-b1c9-806551f65cc8-config-data\") pod \"ceilometer-0\" (UID: \"63a51531-7a38-4696-b1c9-806551f65cc8\") " pod="openstack/ceilometer-0" Feb 16 23:05:33 crc kubenswrapper[4865]: I0216 23:05:33.052576 4865 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmn2h\" (UniqueName: \"kubernetes.io/projected/63a51531-7a38-4696-b1c9-806551f65cc8-kube-api-access-fmn2h\") pod \"ceilometer-0\" (UID: \"63a51531-7a38-4696-b1c9-806551f65cc8\") " pod="openstack/ceilometer-0" Feb 16 23:05:33 crc kubenswrapper[4865]: I0216 23:05:33.052618 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63a51531-7a38-4696-b1c9-806551f65cc8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"63a51531-7a38-4696-b1c9-806551f65cc8\") " pod="openstack/ceilometer-0" Feb 16 23:05:33 crc kubenswrapper[4865]: I0216 23:05:33.052639 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/63a51531-7a38-4696-b1c9-806551f65cc8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"63a51531-7a38-4696-b1c9-806551f65cc8\") " pod="openstack/ceilometer-0" Feb 16 23:05:33 crc kubenswrapper[4865]: I0216 23:05:33.052718 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63a51531-7a38-4696-b1c9-806551f65cc8-run-httpd\") pod \"ceilometer-0\" (UID: \"63a51531-7a38-4696-b1c9-806551f65cc8\") " pod="openstack/ceilometer-0" Feb 16 23:05:33 crc kubenswrapper[4865]: I0216 23:05:33.052736 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63a51531-7a38-4696-b1c9-806551f65cc8-scripts\") pod \"ceilometer-0\" (UID: \"63a51531-7a38-4696-b1c9-806551f65cc8\") " pod="openstack/ceilometer-0" Feb 16 23:05:33 crc kubenswrapper[4865]: I0216 23:05:33.052760 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/63a51531-7a38-4696-b1c9-806551f65cc8-log-httpd\") pod \"ceilometer-0\" (UID: \"63a51531-7a38-4696-b1c9-806551f65cc8\") " pod="openstack/ceilometer-0" Feb 16 23:05:33 crc kubenswrapper[4865]: I0216 23:05:33.154123 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63a51531-7a38-4696-b1c9-806551f65cc8-config-data\") pod \"ceilometer-0\" (UID: \"63a51531-7a38-4696-b1c9-806551f65cc8\") " pod="openstack/ceilometer-0" Feb 16 23:05:33 crc kubenswrapper[4865]: I0216 23:05:33.154184 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmn2h\" (UniqueName: \"kubernetes.io/projected/63a51531-7a38-4696-b1c9-806551f65cc8-kube-api-access-fmn2h\") pod \"ceilometer-0\" (UID: \"63a51531-7a38-4696-b1c9-806551f65cc8\") " pod="openstack/ceilometer-0" Feb 16 23:05:33 crc kubenswrapper[4865]: I0216 23:05:33.154236 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63a51531-7a38-4696-b1c9-806551f65cc8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"63a51531-7a38-4696-b1c9-806551f65cc8\") " pod="openstack/ceilometer-0" Feb 16 23:05:33 crc kubenswrapper[4865]: I0216 23:05:33.154258 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/63a51531-7a38-4696-b1c9-806551f65cc8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"63a51531-7a38-4696-b1c9-806551f65cc8\") " pod="openstack/ceilometer-0" Feb 16 23:05:33 crc kubenswrapper[4865]: I0216 23:05:33.154318 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63a51531-7a38-4696-b1c9-806551f65cc8-run-httpd\") pod \"ceilometer-0\" (UID: \"63a51531-7a38-4696-b1c9-806551f65cc8\") " pod="openstack/ceilometer-0" Feb 16 23:05:33 crc 
kubenswrapper[4865]: I0216 23:05:33.154343 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63a51531-7a38-4696-b1c9-806551f65cc8-scripts\") pod \"ceilometer-0\" (UID: \"63a51531-7a38-4696-b1c9-806551f65cc8\") " pod="openstack/ceilometer-0" Feb 16 23:05:33 crc kubenswrapper[4865]: I0216 23:05:33.154367 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63a51531-7a38-4696-b1c9-806551f65cc8-log-httpd\") pod \"ceilometer-0\" (UID: \"63a51531-7a38-4696-b1c9-806551f65cc8\") " pod="openstack/ceilometer-0" Feb 16 23:05:33 crc kubenswrapper[4865]: I0216 23:05:33.155546 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63a51531-7a38-4696-b1c9-806551f65cc8-log-httpd\") pod \"ceilometer-0\" (UID: \"63a51531-7a38-4696-b1c9-806551f65cc8\") " pod="openstack/ceilometer-0" Feb 16 23:05:33 crc kubenswrapper[4865]: I0216 23:05:33.155644 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63a51531-7a38-4696-b1c9-806551f65cc8-run-httpd\") pod \"ceilometer-0\" (UID: \"63a51531-7a38-4696-b1c9-806551f65cc8\") " pod="openstack/ceilometer-0" Feb 16 23:05:33 crc kubenswrapper[4865]: I0216 23:05:33.160698 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63a51531-7a38-4696-b1c9-806551f65cc8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"63a51531-7a38-4696-b1c9-806551f65cc8\") " pod="openstack/ceilometer-0" Feb 16 23:05:33 crc kubenswrapper[4865]: I0216 23:05:33.161635 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/63a51531-7a38-4696-b1c9-806551f65cc8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"63a51531-7a38-4696-b1c9-806551f65cc8\") " pod="openstack/ceilometer-0" Feb 16 23:05:33 crc kubenswrapper[4865]: I0216 23:05:33.161738 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 16 23:05:33 crc kubenswrapper[4865]: I0216 23:05:33.164118 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63a51531-7a38-4696-b1c9-806551f65cc8-scripts\") pod \"ceilometer-0\" (UID: \"63a51531-7a38-4696-b1c9-806551f65cc8\") " pod="openstack/ceilometer-0" Feb 16 23:05:33 crc kubenswrapper[4865]: I0216 23:05:33.168481 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63a51531-7a38-4696-b1c9-806551f65cc8-config-data\") pod \"ceilometer-0\" (UID: \"63a51531-7a38-4696-b1c9-806551f65cc8\") " pod="openstack/ceilometer-0" Feb 16 23:05:33 crc kubenswrapper[4865]: I0216 23:05:33.172488 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmn2h\" (UniqueName: \"kubernetes.io/projected/63a51531-7a38-4696-b1c9-806551f65cc8-kube-api-access-fmn2h\") pod \"ceilometer-0\" (UID: \"63a51531-7a38-4696-b1c9-806551f65cc8\") " pod="openstack/ceilometer-0" Feb 16 23:05:33 crc kubenswrapper[4865]: I0216 23:05:33.255902 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/426c422a-fcd9-4686-98fd-f02bfb76d624-combined-ca-bundle\") pod \"426c422a-fcd9-4686-98fd-f02bfb76d624\" (UID: \"426c422a-fcd9-4686-98fd-f02bfb76d624\") " Feb 16 23:05:33 crc kubenswrapper[4865]: I0216 23:05:33.255999 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/426c422a-fcd9-4686-98fd-f02bfb76d624-config-data-custom\") pod \"426c422a-fcd9-4686-98fd-f02bfb76d624\" (UID: \"426c422a-fcd9-4686-98fd-f02bfb76d624\") " 
Feb 16 23:05:33 crc kubenswrapper[4865]: I0216 23:05:33.256134 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/426c422a-fcd9-4686-98fd-f02bfb76d624-etc-machine-id\") pod \"426c422a-fcd9-4686-98fd-f02bfb76d624\" (UID: \"426c422a-fcd9-4686-98fd-f02bfb76d624\") " Feb 16 23:05:33 crc kubenswrapper[4865]: I0216 23:05:33.256156 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/426c422a-fcd9-4686-98fd-f02bfb76d624-scripts\") pod \"426c422a-fcd9-4686-98fd-f02bfb76d624\" (UID: \"426c422a-fcd9-4686-98fd-f02bfb76d624\") " Feb 16 23:05:33 crc kubenswrapper[4865]: I0216 23:05:33.256204 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/426c422a-fcd9-4686-98fd-f02bfb76d624-logs\") pod \"426c422a-fcd9-4686-98fd-f02bfb76d624\" (UID: \"426c422a-fcd9-4686-98fd-f02bfb76d624\") " Feb 16 23:05:33 crc kubenswrapper[4865]: I0216 23:05:33.256251 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/426c422a-fcd9-4686-98fd-f02bfb76d624-config-data\") pod \"426c422a-fcd9-4686-98fd-f02bfb76d624\" (UID: \"426c422a-fcd9-4686-98fd-f02bfb76d624\") " Feb 16 23:05:33 crc kubenswrapper[4865]: I0216 23:05:33.256293 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92mmq\" (UniqueName: \"kubernetes.io/projected/426c422a-fcd9-4686-98fd-f02bfb76d624-kube-api-access-92mmq\") pod \"426c422a-fcd9-4686-98fd-f02bfb76d624\" (UID: \"426c422a-fcd9-4686-98fd-f02bfb76d624\") " Feb 16 23:05:33 crc kubenswrapper[4865]: I0216 23:05:33.256314 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/426c422a-fcd9-4686-98fd-f02bfb76d624-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod 
"426c422a-fcd9-4686-98fd-f02bfb76d624" (UID: "426c422a-fcd9-4686-98fd-f02bfb76d624"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 23:05:33 crc kubenswrapper[4865]: I0216 23:05:33.256638 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/426c422a-fcd9-4686-98fd-f02bfb76d624-logs" (OuterVolumeSpecName: "logs") pod "426c422a-fcd9-4686-98fd-f02bfb76d624" (UID: "426c422a-fcd9-4686-98fd-f02bfb76d624"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:05:33 crc kubenswrapper[4865]: I0216 23:05:33.257012 4865 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/426c422a-fcd9-4686-98fd-f02bfb76d624-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:33 crc kubenswrapper[4865]: I0216 23:05:33.257037 4865 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/426c422a-fcd9-4686-98fd-f02bfb76d624-logs\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:33 crc kubenswrapper[4865]: I0216 23:05:33.266968 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/426c422a-fcd9-4686-98fd-f02bfb76d624-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "426c422a-fcd9-4686-98fd-f02bfb76d624" (UID: "426c422a-fcd9-4686-98fd-f02bfb76d624"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:05:33 crc kubenswrapper[4865]: I0216 23:05:33.268643 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/426c422a-fcd9-4686-98fd-f02bfb76d624-kube-api-access-92mmq" (OuterVolumeSpecName: "kube-api-access-92mmq") pod "426c422a-fcd9-4686-98fd-f02bfb76d624" (UID: "426c422a-fcd9-4686-98fd-f02bfb76d624"). InnerVolumeSpecName "kube-api-access-92mmq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:05:33 crc kubenswrapper[4865]: I0216 23:05:33.270988 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/426c422a-fcd9-4686-98fd-f02bfb76d624-scripts" (OuterVolumeSpecName: "scripts") pod "426c422a-fcd9-4686-98fd-f02bfb76d624" (UID: "426c422a-fcd9-4686-98fd-f02bfb76d624"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:05:33 crc kubenswrapper[4865]: I0216 23:05:33.358732 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-92mmq\" (UniqueName: \"kubernetes.io/projected/426c422a-fcd9-4686-98fd-f02bfb76d624-kube-api-access-92mmq\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:33 crc kubenswrapper[4865]: I0216 23:05:33.358758 4865 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/426c422a-fcd9-4686-98fd-f02bfb76d624-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:33 crc kubenswrapper[4865]: I0216 23:05:33.358773 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/426c422a-fcd9-4686-98fd-f02bfb76d624-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:33 crc kubenswrapper[4865]: I0216 23:05:33.408760 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"328cf5b9-9c5d-4cfa-ae62-1ab76d210788","Type":"ContainerStarted","Data":"afb5482992165d6c30b72650d800bdd2f3cb514803f912d409c747ab48a2b81e"} Feb 16 23:05:33 crc kubenswrapper[4865]: I0216 23:05:33.415580 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-993e-account-create-update-dfp4n" event={"ID":"adff2e0d-89a3-423c-8e83-a16b64c67a82","Type":"ContainerStarted","Data":"731001ba5fc6d780f7537d9c40d65cd0e160502dce37324e02c0aa3b30036b57"} Feb 16 23:05:33 crc kubenswrapper[4865]: I0216 23:05:33.415623 4865 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-api-993e-account-create-update-dfp4n" event={"ID":"adff2e0d-89a3-423c-8e83-a16b64c67a82","Type":"ContainerStarted","Data":"9b0a2a3ccc683ffbc52c057556eb4ec811c66ba912b7f2773e1e057c3514c1b0"} Feb 16 23:05:33 crc kubenswrapper[4865]: I0216 23:05:33.418486 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-989f-account-create-update-l8qhk" event={"ID":"d03aa28b-05a9-4123-a616-c1713e81c63c","Type":"ContainerStarted","Data":"bc1d544410058ad3864770b8bb36697871ea409cb09ca09067eb2b1e5b265782"} Feb 16 23:05:33 crc kubenswrapper[4865]: I0216 23:05:33.418535 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-989f-account-create-update-l8qhk" event={"ID":"d03aa28b-05a9-4123-a616-c1713e81c63c","Type":"ContainerStarted","Data":"d867ddf821cbd4a66beb99601d027f4bc48ba6f2880acb04d3725affbb52d67c"} Feb 16 23:05:33 crc kubenswrapper[4865]: I0216 23:05:33.420913 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/426c422a-fcd9-4686-98fd-f02bfb76d624-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "426c422a-fcd9-4686-98fd-f02bfb76d624" (UID: "426c422a-fcd9-4686-98fd-f02bfb76d624"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:05:33 crc kubenswrapper[4865]: I0216 23:05:33.422954 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-szrzm" event={"ID":"0e310dc7-ddd2-4a29-97a2-b071095d9966","Type":"ContainerStarted","Data":"8c1b8e6a1d1f0a28fbae98382c6971880cf44620c0e3fc79dde5ca84de2d728b"} Feb 16 23:05:33 crc kubenswrapper[4865]: I0216 23:05:33.422991 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-szrzm" event={"ID":"0e310dc7-ddd2-4a29-97a2-b071095d9966","Type":"ContainerStarted","Data":"3397234d5c46109e8b6d3abfd5796acca53eaaa11f01525ca0086b2a4bc81279"} Feb 16 23:05:33 crc kubenswrapper[4865]: I0216 23:05:33.426174 4865 generic.go:334] "Generic (PLEG): container finished" podID="f3649fa7-8af6-4f1a-9bc1-b5c10d874d4a" containerID="19453e668301688ebe1519413da8fe166ac42e2b4ed73371e826413d7617e86c" exitCode=0 Feb 16 23:05:33 crc kubenswrapper[4865]: I0216 23:05:33.426231 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-hzhxn" event={"ID":"f3649fa7-8af6-4f1a-9bc1-b5c10d874d4a","Type":"ContainerDied","Data":"19453e668301688ebe1519413da8fe166ac42e2b4ed73371e826413d7617e86c"} Feb 16 23:05:33 crc kubenswrapper[4865]: I0216 23:05:33.426252 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-hzhxn" event={"ID":"f3649fa7-8af6-4f1a-9bc1-b5c10d874d4a","Type":"ContainerStarted","Data":"3853a619e362779dc38a34aa8bd514dd0003d9e95f664178ed3421bb673fcc29"} Feb 16 23:05:33 crc kubenswrapper[4865]: I0216 23:05:33.437919 4865 generic.go:334] "Generic (PLEG): container finished" podID="35227f0d-ff5a-4c4c-a160-35a7743d4ca2" containerID="94db81abead0c9d2365141696b1ef28cd4daba518c77da221a90a97fd98fd3df" exitCode=0 Feb 16 23:05:33 crc kubenswrapper[4865]: I0216 23:05:33.438100 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5de9-account-create-update-q7nmp" 
event={"ID":"35227f0d-ff5a-4c4c-a160-35a7743d4ca2","Type":"ContainerDied","Data":"94db81abead0c9d2365141696b1ef28cd4daba518c77da221a90a97fd98fd3df"} Feb 16 23:05:33 crc kubenswrapper[4865]: I0216 23:05:33.438129 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5de9-account-create-update-q7nmp" event={"ID":"35227f0d-ff5a-4c4c-a160-35a7743d4ca2","Type":"ContainerStarted","Data":"190e89f8504787178465e72186965069ddb7399e2b77c2958527f20ce1d98b77"} Feb 16 23:05:33 crc kubenswrapper[4865]: I0216 23:05:33.440311 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.879865856 podStartE2EDuration="13.440287461s" podCreationTimestamp="2026-02-16 23:05:20 +0000 UTC" firstStartedPulling="2026-02-16 23:05:21.379957855 +0000 UTC m=+1161.703664816" lastFinishedPulling="2026-02-16 23:05:31.94037946 +0000 UTC m=+1172.264086421" observedRunningTime="2026-02-16 23:05:33.432699045 +0000 UTC m=+1173.756406006" watchObservedRunningTime="2026-02-16 23:05:33.440287461 +0000 UTC m=+1173.763994422" Feb 16 23:05:33 crc kubenswrapper[4865]: I0216 23:05:33.448685 4865 generic.go:334] "Generic (PLEG): container finished" podID="426c422a-fcd9-4686-98fd-f02bfb76d624" containerID="7e5e9cbc26f05951ed3e12e28152e611ef236825d33ebb87921e51a0c3a0c4f1" exitCode=137 Feb 16 23:05:33 crc kubenswrapper[4865]: I0216 23:05:33.448760 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"426c422a-fcd9-4686-98fd-f02bfb76d624","Type":"ContainerDied","Data":"7e5e9cbc26f05951ed3e12e28152e611ef236825d33ebb87921e51a0c3a0c4f1"} Feb 16 23:05:33 crc kubenswrapper[4865]: I0216 23:05:33.448788 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"426c422a-fcd9-4686-98fd-f02bfb76d624","Type":"ContainerDied","Data":"9452eae6c7bfe22044de3e434b85417b0409d7281ed879e7786fe2844b5eb058"} Feb 16 23:05:33 crc kubenswrapper[4865]: I0216 
23:05:33.448806 4865 scope.go:117] "RemoveContainer" containerID="7e5e9cbc26f05951ed3e12e28152e611ef236825d33ebb87921e51a0c3a0c4f1" Feb 16 23:05:33 crc kubenswrapper[4865]: I0216 23:05:33.448924 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 16 23:05:33 crc kubenswrapper[4865]: I0216 23:05:33.453617 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/426c422a-fcd9-4686-98fd-f02bfb76d624-config-data" (OuterVolumeSpecName: "config-data") pod "426c422a-fcd9-4686-98fd-f02bfb76d624" (UID: "426c422a-fcd9-4686-98fd-f02bfb76d624"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:05:33 crc kubenswrapper[4865]: I0216 23:05:33.454291 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 23:05:33 crc kubenswrapper[4865]: I0216 23:05:33.454846 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-ss76d" event={"ID":"5ca4e99c-e5c8-49f6-bf1b-dd6b73576b54","Type":"ContainerStarted","Data":"b9813489ded744ff13731c96a2488c415a430fb43a494c249a560c1499bfd611"} Feb 16 23:05:33 crc kubenswrapper[4865]: I0216 23:05:33.457087 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-989f-account-create-update-l8qhk" podStartSLOduration=7.457062427 podStartE2EDuration="7.457062427s" podCreationTimestamp="2026-02-16 23:05:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 23:05:33.450382797 +0000 UTC m=+1173.774089758" watchObservedRunningTime="2026-02-16 23:05:33.457062427 +0000 UTC m=+1173.780769388" Feb 16 23:05:33 crc kubenswrapper[4865]: I0216 23:05:33.462265 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/426c422a-fcd9-4686-98fd-f02bfb76d624-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:33 crc kubenswrapper[4865]: I0216 23:05:33.462315 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/426c422a-fcd9-4686-98fd-f02bfb76d624-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:33 crc kubenswrapper[4865]: I0216 23:05:33.490601 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-szrzm" podStartSLOduration=7.490579829 podStartE2EDuration="7.490579829s" podCreationTimestamp="2026-02-16 23:05:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 23:05:33.486711549 +0000 UTC m=+1173.810418510" watchObservedRunningTime="2026-02-16 23:05:33.490579829 +0000 UTC m=+1173.814286790" Feb 16 23:05:33 crc kubenswrapper[4865]: I0216 23:05:33.510498 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-993e-account-create-update-dfp4n" podStartSLOduration=7.510475744 podStartE2EDuration="7.510475744s" podCreationTimestamp="2026-02-16 23:05:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 23:05:33.500788069 +0000 UTC m=+1173.824495040" watchObservedRunningTime="2026-02-16 23:05:33.510475744 +0000 UTC m=+1173.834182705" Feb 16 23:05:33 crc kubenswrapper[4865]: I0216 23:05:33.516144 4865 scope.go:117] "RemoveContainer" containerID="08154bdddb58c2410db30293715d87f384be29626564eb0a0517cbccc8dcec9b" Feb 16 23:05:33 crc kubenswrapper[4865]: I0216 23:05:33.535447 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-ss76d" podStartSLOduration=7.535425913 podStartE2EDuration="7.535425913s" podCreationTimestamp="2026-02-16 23:05:26 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 23:05:33.529185776 +0000 UTC m=+1173.852892737" watchObservedRunningTime="2026-02-16 23:05:33.535425913 +0000 UTC m=+1173.859132894" Feb 16 23:05:33 crc kubenswrapper[4865]: I0216 23:05:33.556257 4865 scope.go:117] "RemoveContainer" containerID="7e5e9cbc26f05951ed3e12e28152e611ef236825d33ebb87921e51a0c3a0c4f1" Feb 16 23:05:33 crc kubenswrapper[4865]: E0216 23:05:33.562016 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e5e9cbc26f05951ed3e12e28152e611ef236825d33ebb87921e51a0c3a0c4f1\": container with ID starting with 7e5e9cbc26f05951ed3e12e28152e611ef236825d33ebb87921e51a0c3a0c4f1 not found: ID does not exist" containerID="7e5e9cbc26f05951ed3e12e28152e611ef236825d33ebb87921e51a0c3a0c4f1" Feb 16 23:05:33 crc kubenswrapper[4865]: I0216 23:05:33.562060 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e5e9cbc26f05951ed3e12e28152e611ef236825d33ebb87921e51a0c3a0c4f1"} err="failed to get container status \"7e5e9cbc26f05951ed3e12e28152e611ef236825d33ebb87921e51a0c3a0c4f1\": rpc error: code = NotFound desc = could not find container \"7e5e9cbc26f05951ed3e12e28152e611ef236825d33ebb87921e51a0c3a0c4f1\": container with ID starting with 7e5e9cbc26f05951ed3e12e28152e611ef236825d33ebb87921e51a0c3a0c4f1 not found: ID does not exist" Feb 16 23:05:33 crc kubenswrapper[4865]: I0216 23:05:33.562089 4865 scope.go:117] "RemoveContainer" containerID="08154bdddb58c2410db30293715d87f384be29626564eb0a0517cbccc8dcec9b" Feb 16 23:05:33 crc kubenswrapper[4865]: E0216 23:05:33.562464 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08154bdddb58c2410db30293715d87f384be29626564eb0a0517cbccc8dcec9b\": container with ID starting with 
08154bdddb58c2410db30293715d87f384be29626564eb0a0517cbccc8dcec9b not found: ID does not exist" containerID="08154bdddb58c2410db30293715d87f384be29626564eb0a0517cbccc8dcec9b" Feb 16 23:05:33 crc kubenswrapper[4865]: I0216 23:05:33.562490 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08154bdddb58c2410db30293715d87f384be29626564eb0a0517cbccc8dcec9b"} err="failed to get container status \"08154bdddb58c2410db30293715d87f384be29626564eb0a0517cbccc8dcec9b\": rpc error: code = NotFound desc = could not find container \"08154bdddb58c2410db30293715d87f384be29626564eb0a0517cbccc8dcec9b\": container with ID starting with 08154bdddb58c2410db30293715d87f384be29626564eb0a0517cbccc8dcec9b not found: ID does not exist" Feb 16 23:05:33 crc kubenswrapper[4865]: I0216 23:05:33.956681 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 16 23:05:33 crc kubenswrapper[4865]: I0216 23:05:33.965481 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 16 23:05:33 crc kubenswrapper[4865]: I0216 23:05:33.986211 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 16 23:05:33 crc kubenswrapper[4865]: E0216 23:05:33.986615 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="426c422a-fcd9-4686-98fd-f02bfb76d624" containerName="cinder-api" Feb 16 23:05:33 crc kubenswrapper[4865]: I0216 23:05:33.986626 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="426c422a-fcd9-4686-98fd-f02bfb76d624" containerName="cinder-api" Feb 16 23:05:33 crc kubenswrapper[4865]: E0216 23:05:33.986660 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="426c422a-fcd9-4686-98fd-f02bfb76d624" containerName="cinder-api-log" Feb 16 23:05:33 crc kubenswrapper[4865]: I0216 23:05:33.986666 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="426c422a-fcd9-4686-98fd-f02bfb76d624" containerName="cinder-api-log" Feb 16 
23:05:33 crc kubenswrapper[4865]: I0216 23:05:33.986835 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="426c422a-fcd9-4686-98fd-f02bfb76d624" containerName="cinder-api" Feb 16 23:05:33 crc kubenswrapper[4865]: I0216 23:05:33.986952 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="426c422a-fcd9-4686-98fd-f02bfb76d624" containerName="cinder-api-log" Feb 16 23:05:33 crc kubenswrapper[4865]: I0216 23:05:33.987854 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 16 23:05:33 crc kubenswrapper[4865]: I0216 23:05:33.992986 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 16 23:05:33 crc kubenswrapper[4865]: I0216 23:05:33.993730 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 16 23:05:33 crc kubenswrapper[4865]: I0216 23:05:33.993752 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 16 23:05:34 crc kubenswrapper[4865]: I0216 23:05:34.008590 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 16 23:05:34 crc kubenswrapper[4865]: I0216 23:05:34.052082 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 23:05:34 crc kubenswrapper[4865]: I0216 23:05:34.081553 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce76a8fb-fb3b-4af8-a4aa-d8ae4e31a5c5-public-tls-certs\") pod \"cinder-api-0\" (UID: \"ce76a8fb-fb3b-4af8-a4aa-d8ae4e31a5c5\") " pod="openstack/cinder-api-0" Feb 16 23:05:34 crc kubenswrapper[4865]: I0216 23:05:34.081619 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ce76a8fb-fb3b-4af8-a4aa-d8ae4e31a5c5-config-data\") pod \"cinder-api-0\" (UID: \"ce76a8fb-fb3b-4af8-a4aa-d8ae4e31a5c5\") " pod="openstack/cinder-api-0" Feb 16 23:05:34 crc kubenswrapper[4865]: I0216 23:05:34.081713 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce76a8fb-fb3b-4af8-a4aa-d8ae4e31a5c5-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"ce76a8fb-fb3b-4af8-a4aa-d8ae4e31a5c5\") " pod="openstack/cinder-api-0" Feb 16 23:05:34 crc kubenswrapper[4865]: I0216 23:05:34.081874 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ce76a8fb-fb3b-4af8-a4aa-d8ae4e31a5c5-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ce76a8fb-fb3b-4af8-a4aa-d8ae4e31a5c5\") " pod="openstack/cinder-api-0" Feb 16 23:05:34 crc kubenswrapper[4865]: I0216 23:05:34.081949 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce76a8fb-fb3b-4af8-a4aa-d8ae4e31a5c5-logs\") pod \"cinder-api-0\" (UID: \"ce76a8fb-fb3b-4af8-a4aa-d8ae4e31a5c5\") " pod="openstack/cinder-api-0" Feb 16 23:05:34 crc kubenswrapper[4865]: I0216 23:05:34.082034 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njj5g\" (UniqueName: \"kubernetes.io/projected/ce76a8fb-fb3b-4af8-a4aa-d8ae4e31a5c5-kube-api-access-njj5g\") pod \"cinder-api-0\" (UID: \"ce76a8fb-fb3b-4af8-a4aa-d8ae4e31a5c5\") " pod="openstack/cinder-api-0" Feb 16 23:05:34 crc kubenswrapper[4865]: I0216 23:05:34.082072 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ce76a8fb-fb3b-4af8-a4aa-d8ae4e31a5c5-config-data-custom\") pod \"cinder-api-0\" (UID: 
\"ce76a8fb-fb3b-4af8-a4aa-d8ae4e31a5c5\") " pod="openstack/cinder-api-0" Feb 16 23:05:34 crc kubenswrapper[4865]: I0216 23:05:34.082103 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce76a8fb-fb3b-4af8-a4aa-d8ae4e31a5c5-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ce76a8fb-fb3b-4af8-a4aa-d8ae4e31a5c5\") " pod="openstack/cinder-api-0" Feb 16 23:05:34 crc kubenswrapper[4865]: I0216 23:05:34.082140 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce76a8fb-fb3b-4af8-a4aa-d8ae4e31a5c5-scripts\") pod \"cinder-api-0\" (UID: \"ce76a8fb-fb3b-4af8-a4aa-d8ae4e31a5c5\") " pod="openstack/cinder-api-0" Feb 16 23:05:34 crc kubenswrapper[4865]: I0216 23:05:34.107553 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-84f9dbdcc7-p5njv" Feb 16 23:05:34 crc kubenswrapper[4865]: I0216 23:05:34.168985 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-84797ccfbd-g57dm"] Feb 16 23:05:34 crc kubenswrapper[4865]: I0216 23:05:34.169204 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-84797ccfbd-g57dm" podUID="624b466e-6c64-454c-8f81-636a035d9903" containerName="neutron-api" containerID="cri-o://ebc1ca05b742248ab2abb9e3cb831650c42f8a4420b33d1fb1cb2e924737d5b9" gracePeriod=30 Feb 16 23:05:34 crc kubenswrapper[4865]: I0216 23:05:34.169596 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-84797ccfbd-g57dm" podUID="624b466e-6c64-454c-8f81-636a035d9903" containerName="neutron-httpd" containerID="cri-o://135488e7c401818eb30cf0f3365f3c5c1fcf7f94baab337cc2e52d7cd7c76d75" gracePeriod=30 Feb 16 23:05:34 crc kubenswrapper[4865]: I0216 23:05:34.184011 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ce76a8fb-fb3b-4af8-a4aa-d8ae4e31a5c5-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ce76a8fb-fb3b-4af8-a4aa-d8ae4e31a5c5\") " pod="openstack/cinder-api-0" Feb 16 23:05:34 crc kubenswrapper[4865]: I0216 23:05:34.185521 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ce76a8fb-fb3b-4af8-a4aa-d8ae4e31a5c5-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ce76a8fb-fb3b-4af8-a4aa-d8ae4e31a5c5\") " pod="openstack/cinder-api-0" Feb 16 23:05:34 crc kubenswrapper[4865]: I0216 23:05:34.185572 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce76a8fb-fb3b-4af8-a4aa-d8ae4e31a5c5-logs\") pod \"cinder-api-0\" (UID: \"ce76a8fb-fb3b-4af8-a4aa-d8ae4e31a5c5\") " pod="openstack/cinder-api-0" Feb 16 23:05:34 crc kubenswrapper[4865]: I0216 23:05:34.185740 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njj5g\" (UniqueName: \"kubernetes.io/projected/ce76a8fb-fb3b-4af8-a4aa-d8ae4e31a5c5-kube-api-access-njj5g\") pod \"cinder-api-0\" (UID: \"ce76a8fb-fb3b-4af8-a4aa-d8ae4e31a5c5\") " pod="openstack/cinder-api-0" Feb 16 23:05:34 crc kubenswrapper[4865]: I0216 23:05:34.185770 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ce76a8fb-fb3b-4af8-a4aa-d8ae4e31a5c5-config-data-custom\") pod \"cinder-api-0\" (UID: \"ce76a8fb-fb3b-4af8-a4aa-d8ae4e31a5c5\") " pod="openstack/cinder-api-0" Feb 16 23:05:34 crc kubenswrapper[4865]: I0216 23:05:34.185806 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce76a8fb-fb3b-4af8-a4aa-d8ae4e31a5c5-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ce76a8fb-fb3b-4af8-a4aa-d8ae4e31a5c5\") " 
pod="openstack/cinder-api-0" Feb 16 23:05:34 crc kubenswrapper[4865]: I0216 23:05:34.185864 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce76a8fb-fb3b-4af8-a4aa-d8ae4e31a5c5-scripts\") pod \"cinder-api-0\" (UID: \"ce76a8fb-fb3b-4af8-a4aa-d8ae4e31a5c5\") " pod="openstack/cinder-api-0" Feb 16 23:05:34 crc kubenswrapper[4865]: I0216 23:05:34.185987 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce76a8fb-fb3b-4af8-a4aa-d8ae4e31a5c5-public-tls-certs\") pod \"cinder-api-0\" (UID: \"ce76a8fb-fb3b-4af8-a4aa-d8ae4e31a5c5\") " pod="openstack/cinder-api-0" Feb 16 23:05:34 crc kubenswrapper[4865]: I0216 23:05:34.186027 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce76a8fb-fb3b-4af8-a4aa-d8ae4e31a5c5-config-data\") pod \"cinder-api-0\" (UID: \"ce76a8fb-fb3b-4af8-a4aa-d8ae4e31a5c5\") " pod="openstack/cinder-api-0" Feb 16 23:05:34 crc kubenswrapper[4865]: I0216 23:05:34.186054 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce76a8fb-fb3b-4af8-a4aa-d8ae4e31a5c5-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"ce76a8fb-fb3b-4af8-a4aa-d8ae4e31a5c5\") " pod="openstack/cinder-api-0" Feb 16 23:05:34 crc kubenswrapper[4865]: I0216 23:05:34.186729 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce76a8fb-fb3b-4af8-a4aa-d8ae4e31a5c5-logs\") pod \"cinder-api-0\" (UID: \"ce76a8fb-fb3b-4af8-a4aa-d8ae4e31a5c5\") " pod="openstack/cinder-api-0" Feb 16 23:05:34 crc kubenswrapper[4865]: I0216 23:05:34.201714 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce76a8fb-fb3b-4af8-a4aa-d8ae4e31a5c5-scripts\") 
pod \"cinder-api-0\" (UID: \"ce76a8fb-fb3b-4af8-a4aa-d8ae4e31a5c5\") " pod="openstack/cinder-api-0" Feb 16 23:05:34 crc kubenswrapper[4865]: I0216 23:05:34.201736 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce76a8fb-fb3b-4af8-a4aa-d8ae4e31a5c5-public-tls-certs\") pod \"cinder-api-0\" (UID: \"ce76a8fb-fb3b-4af8-a4aa-d8ae4e31a5c5\") " pod="openstack/cinder-api-0" Feb 16 23:05:34 crc kubenswrapper[4865]: I0216 23:05:34.201775 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce76a8fb-fb3b-4af8-a4aa-d8ae4e31a5c5-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ce76a8fb-fb3b-4af8-a4aa-d8ae4e31a5c5\") " pod="openstack/cinder-api-0" Feb 16 23:05:34 crc kubenswrapper[4865]: I0216 23:05:34.202117 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce76a8fb-fb3b-4af8-a4aa-d8ae4e31a5c5-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"ce76a8fb-fb3b-4af8-a4aa-d8ae4e31a5c5\") " pod="openstack/cinder-api-0" Feb 16 23:05:34 crc kubenswrapper[4865]: I0216 23:05:34.202215 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ce76a8fb-fb3b-4af8-a4aa-d8ae4e31a5c5-config-data-custom\") pod \"cinder-api-0\" (UID: \"ce76a8fb-fb3b-4af8-a4aa-d8ae4e31a5c5\") " pod="openstack/cinder-api-0" Feb 16 23:05:34 crc kubenswrapper[4865]: I0216 23:05:34.202527 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce76a8fb-fb3b-4af8-a4aa-d8ae4e31a5c5-config-data\") pod \"cinder-api-0\" (UID: \"ce76a8fb-fb3b-4af8-a4aa-d8ae4e31a5c5\") " pod="openstack/cinder-api-0" Feb 16 23:05:34 crc kubenswrapper[4865]: I0216 23:05:34.224244 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-njj5g\" (UniqueName: \"kubernetes.io/projected/ce76a8fb-fb3b-4af8-a4aa-d8ae4e31a5c5-kube-api-access-njj5g\") pod \"cinder-api-0\" (UID: \"ce76a8fb-fb3b-4af8-a4aa-d8ae4e31a5c5\") " pod="openstack/cinder-api-0" Feb 16 23:05:34 crc kubenswrapper[4865]: I0216 23:05:34.333931 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 16 23:05:34 crc kubenswrapper[4865]: I0216 23:05:34.443599 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="426c422a-fcd9-4686-98fd-f02bfb76d624" path="/var/lib/kubelet/pods/426c422a-fcd9-4686-98fd-f02bfb76d624/volumes" Feb 16 23:05:34 crc kubenswrapper[4865]: I0216 23:05:34.444732 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6482e483-94d0-4ad1-9893-2dba5d006def" path="/var/lib/kubelet/pods/6482e483-94d0-4ad1-9893-2dba5d006def/volumes" Feb 16 23:05:34 crc kubenswrapper[4865]: I0216 23:05:34.548903 4865 generic.go:334] "Generic (PLEG): container finished" podID="adff2e0d-89a3-423c-8e83-a16b64c67a82" containerID="731001ba5fc6d780f7537d9c40d65cd0e160502dce37324e02c0aa3b30036b57" exitCode=0 Feb 16 23:05:34 crc kubenswrapper[4865]: I0216 23:05:34.549200 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-993e-account-create-update-dfp4n" event={"ID":"adff2e0d-89a3-423c-8e83-a16b64c67a82","Type":"ContainerDied","Data":"731001ba5fc6d780f7537d9c40d65cd0e160502dce37324e02c0aa3b30036b57"} Feb 16 23:05:34 crc kubenswrapper[4865]: I0216 23:05:34.555868 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"63a51531-7a38-4696-b1c9-806551f65cc8","Type":"ContainerStarted","Data":"4fe9cc28af81fd95879a4eb17169847fc6e4274fc3fd267c5ecac902a7e96660"} Feb 16 23:05:34 crc kubenswrapper[4865]: I0216 23:05:34.569287 4865 generic.go:334] "Generic (PLEG): container finished" podID="d03aa28b-05a9-4123-a616-c1713e81c63c" 
containerID="bc1d544410058ad3864770b8bb36697871ea409cb09ca09067eb2b1e5b265782" exitCode=0 Feb 16 23:05:34 crc kubenswrapper[4865]: I0216 23:05:34.569878 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-989f-account-create-update-l8qhk" event={"ID":"d03aa28b-05a9-4123-a616-c1713e81c63c","Type":"ContainerDied","Data":"bc1d544410058ad3864770b8bb36697871ea409cb09ca09067eb2b1e5b265782"} Feb 16 23:05:34 crc kubenswrapper[4865]: I0216 23:05:34.581145 4865 generic.go:334] "Generic (PLEG): container finished" podID="0e310dc7-ddd2-4a29-97a2-b071095d9966" containerID="8c1b8e6a1d1f0a28fbae98382c6971880cf44620c0e3fc79dde5ca84de2d728b" exitCode=0 Feb 16 23:05:34 crc kubenswrapper[4865]: I0216 23:05:34.581261 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-szrzm" event={"ID":"0e310dc7-ddd2-4a29-97a2-b071095d9966","Type":"ContainerDied","Data":"8c1b8e6a1d1f0a28fbae98382c6971880cf44620c0e3fc79dde5ca84de2d728b"} Feb 16 23:05:34 crc kubenswrapper[4865]: I0216 23:05:34.583931 4865 generic.go:334] "Generic (PLEG): container finished" podID="5ca4e99c-e5c8-49f6-bf1b-dd6b73576b54" containerID="fc3c3b541452931c7597b37d3c344fc52e8df0db452a421e657423d6cb9a1682" exitCode=0 Feb 16 23:05:34 crc kubenswrapper[4865]: I0216 23:05:34.584019 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-ss76d" event={"ID":"5ca4e99c-e5c8-49f6-bf1b-dd6b73576b54","Type":"ContainerDied","Data":"fc3c3b541452931c7597b37d3c344fc52e8df0db452a421e657423d6cb9a1682"} Feb 16 23:05:34 crc kubenswrapper[4865]: I0216 23:05:34.747797 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6b5cb7cc4c-8d58d" Feb 16 23:05:34 crc kubenswrapper[4865]: I0216 23:05:34.766914 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6b5cb7cc4c-8d58d" Feb 16 23:05:34 crc kubenswrapper[4865]: I0216 23:05:34.993657 4865 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 16 23:05:35 crc kubenswrapper[4865]: W0216 23:05:35.045945 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce76a8fb_fb3b_4af8_a4aa_d8ae4e31a5c5.slice/crio-67f73e347098f109b31956754fe5487b48aeff7af5b79b3794c6bc3f2b82fd20 WatchSource:0}: Error finding container 67f73e347098f109b31956754fe5487b48aeff7af5b79b3794c6bc3f2b82fd20: Status 404 returned error can't find the container with id 67f73e347098f109b31956754fe5487b48aeff7af5b79b3794c6bc3f2b82fd20 Feb 16 23:05:35 crc kubenswrapper[4865]: I0216 23:05:35.107981 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-hzhxn" Feb 16 23:05:35 crc kubenswrapper[4865]: I0216 23:05:35.168953 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-5de9-account-create-update-q7nmp" Feb 16 23:05:35 crc kubenswrapper[4865]: I0216 23:05:35.266168 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3649fa7-8af6-4f1a-9bc1-b5c10d874d4a-operator-scripts\") pod \"f3649fa7-8af6-4f1a-9bc1-b5c10d874d4a\" (UID: \"f3649fa7-8af6-4f1a-9bc1-b5c10d874d4a\") " Feb 16 23:05:35 crc kubenswrapper[4865]: I0216 23:05:35.266259 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35227f0d-ff5a-4c4c-a160-35a7743d4ca2-operator-scripts\") pod \"35227f0d-ff5a-4c4c-a160-35a7743d4ca2\" (UID: \"35227f0d-ff5a-4c4c-a160-35a7743d4ca2\") " Feb 16 23:05:35 crc kubenswrapper[4865]: I0216 23:05:35.266293 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5g6px\" (UniqueName: \"kubernetes.io/projected/f3649fa7-8af6-4f1a-9bc1-b5c10d874d4a-kube-api-access-5g6px\") pod 
\"f3649fa7-8af6-4f1a-9bc1-b5c10d874d4a\" (UID: \"f3649fa7-8af6-4f1a-9bc1-b5c10d874d4a\") " Feb 16 23:05:35 crc kubenswrapper[4865]: I0216 23:05:35.266316 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66grr\" (UniqueName: \"kubernetes.io/projected/35227f0d-ff5a-4c4c-a160-35a7743d4ca2-kube-api-access-66grr\") pod \"35227f0d-ff5a-4c4c-a160-35a7743d4ca2\" (UID: \"35227f0d-ff5a-4c4c-a160-35a7743d4ca2\") " Feb 16 23:05:35 crc kubenswrapper[4865]: I0216 23:05:35.267716 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3649fa7-8af6-4f1a-9bc1-b5c10d874d4a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f3649fa7-8af6-4f1a-9bc1-b5c10d874d4a" (UID: "f3649fa7-8af6-4f1a-9bc1-b5c10d874d4a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:05:35 crc kubenswrapper[4865]: I0216 23:05:35.267740 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35227f0d-ff5a-4c4c-a160-35a7743d4ca2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "35227f0d-ff5a-4c4c-a160-35a7743d4ca2" (UID: "35227f0d-ff5a-4c4c-a160-35a7743d4ca2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:05:35 crc kubenswrapper[4865]: I0216 23:05:35.272801 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3649fa7-8af6-4f1a-9bc1-b5c10d874d4a-kube-api-access-5g6px" (OuterVolumeSpecName: "kube-api-access-5g6px") pod "f3649fa7-8af6-4f1a-9bc1-b5c10d874d4a" (UID: "f3649fa7-8af6-4f1a-9bc1-b5c10d874d4a"). InnerVolumeSpecName "kube-api-access-5g6px". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:05:35 crc kubenswrapper[4865]: I0216 23:05:35.272899 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35227f0d-ff5a-4c4c-a160-35a7743d4ca2-kube-api-access-66grr" (OuterVolumeSpecName: "kube-api-access-66grr") pod "35227f0d-ff5a-4c4c-a160-35a7743d4ca2" (UID: "35227f0d-ff5a-4c4c-a160-35a7743d4ca2"). InnerVolumeSpecName "kube-api-access-66grr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:05:35 crc kubenswrapper[4865]: I0216 23:05:35.370506 4865 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3649fa7-8af6-4f1a-9bc1-b5c10d874d4a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:35 crc kubenswrapper[4865]: I0216 23:05:35.370534 4865 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35227f0d-ff5a-4c4c-a160-35a7743d4ca2-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:35 crc kubenswrapper[4865]: I0216 23:05:35.370545 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5g6px\" (UniqueName: \"kubernetes.io/projected/f3649fa7-8af6-4f1a-9bc1-b5c10d874d4a-kube-api-access-5g6px\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:35 crc kubenswrapper[4865]: I0216 23:05:35.370556 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66grr\" (UniqueName: \"kubernetes.io/projected/35227f0d-ff5a-4c4c-a160-35a7743d4ca2-kube-api-access-66grr\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:35 crc kubenswrapper[4865]: I0216 23:05:35.600657 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-84797ccfbd-g57dm" event={"ID":"624b466e-6c64-454c-8f81-636a035d9903","Type":"ContainerDied","Data":"135488e7c401818eb30cf0f3365f3c5c1fcf7f94baab337cc2e52d7cd7c76d75"} Feb 16 23:05:35 crc kubenswrapper[4865]: I0216 23:05:35.600602 
4865 generic.go:334] "Generic (PLEG): container finished" podID="624b466e-6c64-454c-8f81-636a035d9903" containerID="135488e7c401818eb30cf0f3365f3c5c1fcf7f94baab337cc2e52d7cd7c76d75" exitCode=0 Feb 16 23:05:35 crc kubenswrapper[4865]: I0216 23:05:35.603024 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-hzhxn" event={"ID":"f3649fa7-8af6-4f1a-9bc1-b5c10d874d4a","Type":"ContainerDied","Data":"3853a619e362779dc38a34aa8bd514dd0003d9e95f664178ed3421bb673fcc29"} Feb 16 23:05:35 crc kubenswrapper[4865]: I0216 23:05:35.603051 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3853a619e362779dc38a34aa8bd514dd0003d9e95f664178ed3421bb673fcc29" Feb 16 23:05:35 crc kubenswrapper[4865]: I0216 23:05:35.603084 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-hzhxn" Feb 16 23:05:35 crc kubenswrapper[4865]: I0216 23:05:35.606850 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5de9-account-create-update-q7nmp" event={"ID":"35227f0d-ff5a-4c4c-a160-35a7743d4ca2","Type":"ContainerDied","Data":"190e89f8504787178465e72186965069ddb7399e2b77c2958527f20ce1d98b77"} Feb 16 23:05:35 crc kubenswrapper[4865]: I0216 23:05:35.606883 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="190e89f8504787178465e72186965069ddb7399e2b77c2958527f20ce1d98b77" Feb 16 23:05:35 crc kubenswrapper[4865]: I0216 23:05:35.606888 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-5de9-account-create-update-q7nmp" Feb 16 23:05:35 crc kubenswrapper[4865]: I0216 23:05:35.609457 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ce76a8fb-fb3b-4af8-a4aa-d8ae4e31a5c5","Type":"ContainerStarted","Data":"67f73e347098f109b31956754fe5487b48aeff7af5b79b3794c6bc3f2b82fd20"} Feb 16 23:05:35 crc kubenswrapper[4865]: I0216 23:05:35.612641 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"63a51531-7a38-4696-b1c9-806551f65cc8","Type":"ContainerStarted","Data":"44de764bcd45311b0a8abc41066494c7043b31745f2795b93071da2e20ec61f8"} Feb 16 23:05:35 crc kubenswrapper[4865]: I0216 23:05:35.991901 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-ss76d" Feb 16 23:05:36 crc kubenswrapper[4865]: I0216 23:05:36.197236 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5jzc\" (UniqueName: \"kubernetes.io/projected/5ca4e99c-e5c8-49f6-bf1b-dd6b73576b54-kube-api-access-h5jzc\") pod \"5ca4e99c-e5c8-49f6-bf1b-dd6b73576b54\" (UID: \"5ca4e99c-e5c8-49f6-bf1b-dd6b73576b54\") " Feb 16 23:05:36 crc kubenswrapper[4865]: I0216 23:05:36.197407 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ca4e99c-e5c8-49f6-bf1b-dd6b73576b54-operator-scripts\") pod \"5ca4e99c-e5c8-49f6-bf1b-dd6b73576b54\" (UID: \"5ca4e99c-e5c8-49f6-bf1b-dd6b73576b54\") " Feb 16 23:05:36 crc kubenswrapper[4865]: I0216 23:05:36.198567 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ca4e99c-e5c8-49f6-bf1b-dd6b73576b54-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5ca4e99c-e5c8-49f6-bf1b-dd6b73576b54" (UID: "5ca4e99c-e5c8-49f6-bf1b-dd6b73576b54"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:05:36 crc kubenswrapper[4865]: I0216 23:05:36.207539 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ca4e99c-e5c8-49f6-bf1b-dd6b73576b54-kube-api-access-h5jzc" (OuterVolumeSpecName: "kube-api-access-h5jzc") pod "5ca4e99c-e5c8-49f6-bf1b-dd6b73576b54" (UID: "5ca4e99c-e5c8-49f6-bf1b-dd6b73576b54"). InnerVolumeSpecName "kube-api-access-h5jzc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:05:36 crc kubenswrapper[4865]: I0216 23:05:36.248191 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-szrzm" Feb 16 23:05:36 crc kubenswrapper[4865]: I0216 23:05:36.254046 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-993e-account-create-update-dfp4n" Feb 16 23:05:36 crc kubenswrapper[4865]: I0216 23:05:36.258585 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-989f-account-create-update-l8qhk" Feb 16 23:05:36 crc kubenswrapper[4865]: I0216 23:05:36.301806 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e310dc7-ddd2-4a29-97a2-b071095d9966-operator-scripts\") pod \"0e310dc7-ddd2-4a29-97a2-b071095d9966\" (UID: \"0e310dc7-ddd2-4a29-97a2-b071095d9966\") " Feb 16 23:05:36 crc kubenswrapper[4865]: I0216 23:05:36.301936 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/adff2e0d-89a3-423c-8e83-a16b64c67a82-operator-scripts\") pod \"adff2e0d-89a3-423c-8e83-a16b64c67a82\" (UID: \"adff2e0d-89a3-423c-8e83-a16b64c67a82\") " Feb 16 23:05:36 crc kubenswrapper[4865]: I0216 23:05:36.301988 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d03aa28b-05a9-4123-a616-c1713e81c63c-operator-scripts\") pod \"d03aa28b-05a9-4123-a616-c1713e81c63c\" (UID: \"d03aa28b-05a9-4123-a616-c1713e81c63c\") " Feb 16 23:05:36 crc kubenswrapper[4865]: I0216 23:05:36.302106 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2m98\" (UniqueName: \"kubernetes.io/projected/adff2e0d-89a3-423c-8e83-a16b64c67a82-kube-api-access-r2m98\") pod \"adff2e0d-89a3-423c-8e83-a16b64c67a82\" (UID: \"adff2e0d-89a3-423c-8e83-a16b64c67a82\") " Feb 16 23:05:36 crc kubenswrapper[4865]: I0216 23:05:36.302175 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4kvtz\" (UniqueName: \"kubernetes.io/projected/d03aa28b-05a9-4123-a616-c1713e81c63c-kube-api-access-4kvtz\") pod \"d03aa28b-05a9-4123-a616-c1713e81c63c\" (UID: \"d03aa28b-05a9-4123-a616-c1713e81c63c\") " Feb 16 23:05:36 crc kubenswrapper[4865]: I0216 23:05:36.302241 4865 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-cpspq\" (UniqueName: \"kubernetes.io/projected/0e310dc7-ddd2-4a29-97a2-b071095d9966-kube-api-access-cpspq\") pod \"0e310dc7-ddd2-4a29-97a2-b071095d9966\" (UID: \"0e310dc7-ddd2-4a29-97a2-b071095d9966\") " Feb 16 23:05:36 crc kubenswrapper[4865]: I0216 23:05:36.302365 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e310dc7-ddd2-4a29-97a2-b071095d9966-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0e310dc7-ddd2-4a29-97a2-b071095d9966" (UID: "0e310dc7-ddd2-4a29-97a2-b071095d9966"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:05:36 crc kubenswrapper[4865]: I0216 23:05:36.302616 4865 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e310dc7-ddd2-4a29-97a2-b071095d9966-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:36 crc kubenswrapper[4865]: I0216 23:05:36.302633 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5jzc\" (UniqueName: \"kubernetes.io/projected/5ca4e99c-e5c8-49f6-bf1b-dd6b73576b54-kube-api-access-h5jzc\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:36 crc kubenswrapper[4865]: I0216 23:05:36.302644 4865 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ca4e99c-e5c8-49f6-bf1b-dd6b73576b54-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:36 crc kubenswrapper[4865]: I0216 23:05:36.302713 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d03aa28b-05a9-4123-a616-c1713e81c63c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d03aa28b-05a9-4123-a616-c1713e81c63c" (UID: "d03aa28b-05a9-4123-a616-c1713e81c63c"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:05:36 crc kubenswrapper[4865]: I0216 23:05:36.303041 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adff2e0d-89a3-423c-8e83-a16b64c67a82-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "adff2e0d-89a3-423c-8e83-a16b64c67a82" (UID: "adff2e0d-89a3-423c-8e83-a16b64c67a82"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:05:36 crc kubenswrapper[4865]: I0216 23:05:36.307705 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adff2e0d-89a3-423c-8e83-a16b64c67a82-kube-api-access-r2m98" (OuterVolumeSpecName: "kube-api-access-r2m98") pod "adff2e0d-89a3-423c-8e83-a16b64c67a82" (UID: "adff2e0d-89a3-423c-8e83-a16b64c67a82"). InnerVolumeSpecName "kube-api-access-r2m98". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:05:36 crc kubenswrapper[4865]: I0216 23:05:36.309914 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e310dc7-ddd2-4a29-97a2-b071095d9966-kube-api-access-cpspq" (OuterVolumeSpecName: "kube-api-access-cpspq") pod "0e310dc7-ddd2-4a29-97a2-b071095d9966" (UID: "0e310dc7-ddd2-4a29-97a2-b071095d9966"). InnerVolumeSpecName "kube-api-access-cpspq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:05:36 crc kubenswrapper[4865]: I0216 23:05:36.314724 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d03aa28b-05a9-4123-a616-c1713e81c63c-kube-api-access-4kvtz" (OuterVolumeSpecName: "kube-api-access-4kvtz") pod "d03aa28b-05a9-4123-a616-c1713e81c63c" (UID: "d03aa28b-05a9-4123-a616-c1713e81c63c"). InnerVolumeSpecName "kube-api-access-4kvtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:05:36 crc kubenswrapper[4865]: I0216 23:05:36.404214 4865 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/adff2e0d-89a3-423c-8e83-a16b64c67a82-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:36 crc kubenswrapper[4865]: I0216 23:05:36.404252 4865 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d03aa28b-05a9-4123-a616-c1713e81c63c-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:36 crc kubenswrapper[4865]: I0216 23:05:36.404262 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r2m98\" (UniqueName: \"kubernetes.io/projected/adff2e0d-89a3-423c-8e83-a16b64c67a82-kube-api-access-r2m98\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:36 crc kubenswrapper[4865]: I0216 23:05:36.404286 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4kvtz\" (UniqueName: \"kubernetes.io/projected/d03aa28b-05a9-4123-a616-c1713e81c63c-kube-api-access-4kvtz\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:36 crc kubenswrapper[4865]: I0216 23:05:36.404301 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cpspq\" (UniqueName: \"kubernetes.io/projected/0e310dc7-ddd2-4a29-97a2-b071095d9966-kube-api-access-cpspq\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:36 crc kubenswrapper[4865]: I0216 23:05:36.633069 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"63a51531-7a38-4696-b1c9-806551f65cc8","Type":"ContainerStarted","Data":"d7876d96fc0ee57fcee7bcbe39a681143f2428f1cc2469e649cc5f6bb0f65a15"} Feb 16 23:05:36 crc kubenswrapper[4865]: I0216 23:05:36.633116 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"63a51531-7a38-4696-b1c9-806551f65cc8","Type":"ContainerStarted","Data":"6ff9c038a037f073a5dc34af1b8fa08bc8a400cb98a375ac4b48abe489332a59"} Feb 16 23:05:36 crc kubenswrapper[4865]: I0216 23:05:36.636938 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-szrzm" Feb 16 23:05:36 crc kubenswrapper[4865]: I0216 23:05:36.637663 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-szrzm" event={"ID":"0e310dc7-ddd2-4a29-97a2-b071095d9966","Type":"ContainerDied","Data":"3397234d5c46109e8b6d3abfd5796acca53eaaa11f01525ca0086b2a4bc81279"} Feb 16 23:05:36 crc kubenswrapper[4865]: I0216 23:05:36.637685 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3397234d5c46109e8b6d3abfd5796acca53eaaa11f01525ca0086b2a4bc81279" Feb 16 23:05:36 crc kubenswrapper[4865]: I0216 23:05:36.640016 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-ss76d" event={"ID":"5ca4e99c-e5c8-49f6-bf1b-dd6b73576b54","Type":"ContainerDied","Data":"b9813489ded744ff13731c96a2488c415a430fb43a494c249a560c1499bfd611"} Feb 16 23:05:36 crc kubenswrapper[4865]: I0216 23:05:36.640038 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9813489ded744ff13731c96a2488c415a430fb43a494c249a560c1499bfd611" Feb 16 23:05:36 crc kubenswrapper[4865]: I0216 23:05:36.640085 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-ss76d" Feb 16 23:05:36 crc kubenswrapper[4865]: I0216 23:05:36.642249 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ce76a8fb-fb3b-4af8-a4aa-d8ae4e31a5c5","Type":"ContainerStarted","Data":"4b96d24debfb139c51c9d3ccd3146c4a49e58103d93eb0e7af1ba951d44637f2"} Feb 16 23:05:36 crc kubenswrapper[4865]: I0216 23:05:36.644729 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-993e-account-create-update-dfp4n" event={"ID":"adff2e0d-89a3-423c-8e83-a16b64c67a82","Type":"ContainerDied","Data":"9b0a2a3ccc683ffbc52c057556eb4ec811c66ba912b7f2773e1e057c3514c1b0"} Feb 16 23:05:36 crc kubenswrapper[4865]: I0216 23:05:36.645022 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b0a2a3ccc683ffbc52c057556eb4ec811c66ba912b7f2773e1e057c3514c1b0" Feb 16 23:05:36 crc kubenswrapper[4865]: I0216 23:05:36.645076 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-993e-account-create-update-dfp4n" Feb 16 23:05:36 crc kubenswrapper[4865]: I0216 23:05:36.648267 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-989f-account-create-update-l8qhk" event={"ID":"d03aa28b-05a9-4123-a616-c1713e81c63c","Type":"ContainerDied","Data":"d867ddf821cbd4a66beb99601d027f4bc48ba6f2880acb04d3725affbb52d67c"} Feb 16 23:05:36 crc kubenswrapper[4865]: I0216 23:05:36.648323 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d867ddf821cbd4a66beb99601d027f4bc48ba6f2880acb04d3725affbb52d67c" Feb 16 23:05:36 crc kubenswrapper[4865]: I0216 23:05:36.648368 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-989f-account-create-update-l8qhk" Feb 16 23:05:37 crc kubenswrapper[4865]: I0216 23:05:37.461358 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 23:05:37 crc kubenswrapper[4865]: I0216 23:05:37.659601 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ce76a8fb-fb3b-4af8-a4aa-d8ae4e31a5c5","Type":"ContainerStarted","Data":"622d761e0c40093f5c7f54878689d4cbf2377534faef48fbda3fc95a63d31ed8"} Feb 16 23:05:37 crc kubenswrapper[4865]: I0216 23:05:37.659868 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 16 23:05:37 crc kubenswrapper[4865]: I0216 23:05:37.686510 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.686490254 podStartE2EDuration="4.686490254s" podCreationTimestamp="2026-02-16 23:05:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 23:05:37.679401203 +0000 UTC m=+1178.003108164" watchObservedRunningTime="2026-02-16 23:05:37.686490254 +0000 UTC m=+1178.010197215" Feb 16 23:05:38 crc kubenswrapper[4865]: I0216 23:05:38.672639 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"63a51531-7a38-4696-b1c9-806551f65cc8","Type":"ContainerStarted","Data":"d8459476ab7cf9f5c1ba03f6584631312b40ed8053f6a11d849948f5622f1d6c"} Feb 16 23:05:38 crc kubenswrapper[4865]: I0216 23:05:38.672769 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="63a51531-7a38-4696-b1c9-806551f65cc8" containerName="ceilometer-central-agent" containerID="cri-o://44de764bcd45311b0a8abc41066494c7043b31745f2795b93071da2e20ec61f8" gracePeriod=30 Feb 16 23:05:38 crc kubenswrapper[4865]: I0216 23:05:38.673203 4865 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/ceilometer-0" podUID="63a51531-7a38-4696-b1c9-806551f65cc8" containerName="proxy-httpd" containerID="cri-o://d8459476ab7cf9f5c1ba03f6584631312b40ed8053f6a11d849948f5622f1d6c" gracePeriod=30 Feb 16 23:05:38 crc kubenswrapper[4865]: I0216 23:05:38.673262 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="63a51531-7a38-4696-b1c9-806551f65cc8" containerName="sg-core" containerID="cri-o://6ff9c038a037f073a5dc34af1b8fa08bc8a400cb98a375ac4b48abe489332a59" gracePeriod=30 Feb 16 23:05:38 crc kubenswrapper[4865]: I0216 23:05:38.673361 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="63a51531-7a38-4696-b1c9-806551f65cc8" containerName="ceilometer-notification-agent" containerID="cri-o://d7876d96fc0ee57fcee7bcbe39a681143f2428f1cc2469e649cc5f6bb0f65a15" gracePeriod=30 Feb 16 23:05:38 crc kubenswrapper[4865]: I0216 23:05:38.697740 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.308276487 podStartE2EDuration="6.697713906s" podCreationTimestamp="2026-02-16 23:05:32 +0000 UTC" firstStartedPulling="2026-02-16 23:05:34.044819931 +0000 UTC m=+1174.368526892" lastFinishedPulling="2026-02-16 23:05:37.43425735 +0000 UTC m=+1177.757964311" observedRunningTime="2026-02-16 23:05:38.692245731 +0000 UTC m=+1179.015952692" watchObservedRunningTime="2026-02-16 23:05:38.697713906 +0000 UTC m=+1179.021420887" Feb 16 23:05:39 crc kubenswrapper[4865]: I0216 23:05:39.687043 4865 generic.go:334] "Generic (PLEG): container finished" podID="624b466e-6c64-454c-8f81-636a035d9903" containerID="ebc1ca05b742248ab2abb9e3cb831650c42f8a4420b33d1fb1cb2e924737d5b9" exitCode=0 Feb 16 23:05:39 crc kubenswrapper[4865]: I0216 23:05:39.687612 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-84797ccfbd-g57dm" 
event={"ID":"624b466e-6c64-454c-8f81-636a035d9903","Type":"ContainerDied","Data":"ebc1ca05b742248ab2abb9e3cb831650c42f8a4420b33d1fb1cb2e924737d5b9"} Feb 16 23:05:39 crc kubenswrapper[4865]: I0216 23:05:39.691900 4865 generic.go:334] "Generic (PLEG): container finished" podID="63a51531-7a38-4696-b1c9-806551f65cc8" containerID="d8459476ab7cf9f5c1ba03f6584631312b40ed8053f6a11d849948f5622f1d6c" exitCode=0 Feb 16 23:05:39 crc kubenswrapper[4865]: I0216 23:05:39.691934 4865 generic.go:334] "Generic (PLEG): container finished" podID="63a51531-7a38-4696-b1c9-806551f65cc8" containerID="6ff9c038a037f073a5dc34af1b8fa08bc8a400cb98a375ac4b48abe489332a59" exitCode=2 Feb 16 23:05:39 crc kubenswrapper[4865]: I0216 23:05:39.691945 4865 generic.go:334] "Generic (PLEG): container finished" podID="63a51531-7a38-4696-b1c9-806551f65cc8" containerID="d7876d96fc0ee57fcee7bcbe39a681143f2428f1cc2469e649cc5f6bb0f65a15" exitCode=0 Feb 16 23:05:39 crc kubenswrapper[4865]: I0216 23:05:39.691966 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"63a51531-7a38-4696-b1c9-806551f65cc8","Type":"ContainerDied","Data":"d8459476ab7cf9f5c1ba03f6584631312b40ed8053f6a11d849948f5622f1d6c"} Feb 16 23:05:39 crc kubenswrapper[4865]: I0216 23:05:39.691996 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"63a51531-7a38-4696-b1c9-806551f65cc8","Type":"ContainerDied","Data":"6ff9c038a037f073a5dc34af1b8fa08bc8a400cb98a375ac4b48abe489332a59"} Feb 16 23:05:39 crc kubenswrapper[4865]: I0216 23:05:39.692011 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"63a51531-7a38-4696-b1c9-806551f65cc8","Type":"ContainerDied","Data":"d7876d96fc0ee57fcee7bcbe39a681143f2428f1cc2469e649cc5f6bb0f65a15"} Feb 16 23:05:39 crc kubenswrapper[4865]: I0216 23:05:39.736057 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-84797ccfbd-g57dm" Feb 16 23:05:39 crc kubenswrapper[4865]: I0216 23:05:39.874760 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d274d\" (UniqueName: \"kubernetes.io/projected/624b466e-6c64-454c-8f81-636a035d9903-kube-api-access-d274d\") pod \"624b466e-6c64-454c-8f81-636a035d9903\" (UID: \"624b466e-6c64-454c-8f81-636a035d9903\") " Feb 16 23:05:39 crc kubenswrapper[4865]: I0216 23:05:39.874865 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/624b466e-6c64-454c-8f81-636a035d9903-httpd-config\") pod \"624b466e-6c64-454c-8f81-636a035d9903\" (UID: \"624b466e-6c64-454c-8f81-636a035d9903\") " Feb 16 23:05:39 crc kubenswrapper[4865]: I0216 23:05:39.875031 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/624b466e-6c64-454c-8f81-636a035d9903-combined-ca-bundle\") pod \"624b466e-6c64-454c-8f81-636a035d9903\" (UID: \"624b466e-6c64-454c-8f81-636a035d9903\") " Feb 16 23:05:39 crc kubenswrapper[4865]: I0216 23:05:39.875139 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/624b466e-6c64-454c-8f81-636a035d9903-config\") pod \"624b466e-6c64-454c-8f81-636a035d9903\" (UID: \"624b466e-6c64-454c-8f81-636a035d9903\") " Feb 16 23:05:39 crc kubenswrapper[4865]: I0216 23:05:39.875180 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/624b466e-6c64-454c-8f81-636a035d9903-ovndb-tls-certs\") pod \"624b466e-6c64-454c-8f81-636a035d9903\" (UID: \"624b466e-6c64-454c-8f81-636a035d9903\") " Feb 16 23:05:39 crc kubenswrapper[4865]: I0216 23:05:39.881045 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/624b466e-6c64-454c-8f81-636a035d9903-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "624b466e-6c64-454c-8f81-636a035d9903" (UID: "624b466e-6c64-454c-8f81-636a035d9903"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:05:39 crc kubenswrapper[4865]: I0216 23:05:39.890477 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/624b466e-6c64-454c-8f81-636a035d9903-kube-api-access-d274d" (OuterVolumeSpecName: "kube-api-access-d274d") pod "624b466e-6c64-454c-8f81-636a035d9903" (UID: "624b466e-6c64-454c-8f81-636a035d9903"). InnerVolumeSpecName "kube-api-access-d274d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:05:39 crc kubenswrapper[4865]: I0216 23:05:39.936110 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/624b466e-6c64-454c-8f81-636a035d9903-config" (OuterVolumeSpecName: "config") pod "624b466e-6c64-454c-8f81-636a035d9903" (UID: "624b466e-6c64-454c-8f81-636a035d9903"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:05:39 crc kubenswrapper[4865]: I0216 23:05:39.937381 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/624b466e-6c64-454c-8f81-636a035d9903-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "624b466e-6c64-454c-8f81-636a035d9903" (UID: "624b466e-6c64-454c-8f81-636a035d9903"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:05:39 crc kubenswrapper[4865]: I0216 23:05:39.959539 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/624b466e-6c64-454c-8f81-636a035d9903-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "624b466e-6c64-454c-8f81-636a035d9903" (UID: "624b466e-6c64-454c-8f81-636a035d9903"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:05:39 crc kubenswrapper[4865]: I0216 23:05:39.978163 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/624b466e-6c64-454c-8f81-636a035d9903-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:39 crc kubenswrapper[4865]: I0216 23:05:39.978426 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/624b466e-6c64-454c-8f81-636a035d9903-config\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:39 crc kubenswrapper[4865]: I0216 23:05:39.978495 4865 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/624b466e-6c64-454c-8f81-636a035d9903-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:39 crc kubenswrapper[4865]: I0216 23:05:39.978551 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d274d\" (UniqueName: \"kubernetes.io/projected/624b466e-6c64-454c-8f81-636a035d9903-kube-api-access-d274d\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:39 crc kubenswrapper[4865]: I0216 23:05:39.978614 4865 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/624b466e-6c64-454c-8f81-636a035d9903-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:40 crc kubenswrapper[4865]: I0216 23:05:40.705528 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-84797ccfbd-g57dm" event={"ID":"624b466e-6c64-454c-8f81-636a035d9903","Type":"ContainerDied","Data":"9993ee6b986709c7d53c29ec26adb2bf1c52141827f25a2335ad19b44d09a2c5"} Feb 16 23:05:40 crc kubenswrapper[4865]: I0216 23:05:40.705863 4865 scope.go:117] "RemoveContainer" containerID="135488e7c401818eb30cf0f3365f3c5c1fcf7f94baab337cc2e52d7cd7c76d75" Feb 16 23:05:40 crc kubenswrapper[4865]: I0216 23:05:40.706007 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-84797ccfbd-g57dm" Feb 16 23:05:40 crc kubenswrapper[4865]: I0216 23:05:40.736964 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-84797ccfbd-g57dm"] Feb 16 23:05:40 crc kubenswrapper[4865]: I0216 23:05:40.747881 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-84797ccfbd-g57dm"] Feb 16 23:05:40 crc kubenswrapper[4865]: I0216 23:05:40.759847 4865 scope.go:117] "RemoveContainer" containerID="ebc1ca05b742248ab2abb9e3cb831650c42f8a4420b33d1fb1cb2e924737d5b9" Feb 16 23:05:41 crc kubenswrapper[4865]: I0216 23:05:41.482497 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5dbb7f8956-m76fk" podUID="cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Feb 16 23:05:41 crc kubenswrapper[4865]: I0216 23:05:41.482650 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5dbb7f8956-m76fk" Feb 16 23:05:41 crc kubenswrapper[4865]: I0216 23:05:41.885466 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-rm49l"] Feb 16 23:05:41 crc kubenswrapper[4865]: E0216 23:05:41.885980 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35227f0d-ff5a-4c4c-a160-35a7743d4ca2" containerName="mariadb-account-create-update" Feb 16 23:05:41 crc kubenswrapper[4865]: I0216 23:05:41.885995 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="35227f0d-ff5a-4c4c-a160-35a7743d4ca2" containerName="mariadb-account-create-update" Feb 16 23:05:41 crc kubenswrapper[4865]: E0216 23:05:41.886005 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e310dc7-ddd2-4a29-97a2-b071095d9966" containerName="mariadb-database-create" Feb 16 23:05:41 crc kubenswrapper[4865]: I0216 23:05:41.886013 
4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e310dc7-ddd2-4a29-97a2-b071095d9966" containerName="mariadb-database-create" Feb 16 23:05:41 crc kubenswrapper[4865]: E0216 23:05:41.886028 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ca4e99c-e5c8-49f6-bf1b-dd6b73576b54" containerName="mariadb-database-create" Feb 16 23:05:41 crc kubenswrapper[4865]: I0216 23:05:41.886036 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ca4e99c-e5c8-49f6-bf1b-dd6b73576b54" containerName="mariadb-database-create" Feb 16 23:05:41 crc kubenswrapper[4865]: E0216 23:05:41.886048 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="624b466e-6c64-454c-8f81-636a035d9903" containerName="neutron-httpd" Feb 16 23:05:41 crc kubenswrapper[4865]: I0216 23:05:41.886054 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="624b466e-6c64-454c-8f81-636a035d9903" containerName="neutron-httpd" Feb 16 23:05:41 crc kubenswrapper[4865]: E0216 23:05:41.886065 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adff2e0d-89a3-423c-8e83-a16b64c67a82" containerName="mariadb-account-create-update" Feb 16 23:05:41 crc kubenswrapper[4865]: I0216 23:05:41.886071 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="adff2e0d-89a3-423c-8e83-a16b64c67a82" containerName="mariadb-account-create-update" Feb 16 23:05:41 crc kubenswrapper[4865]: E0216 23:05:41.886083 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3649fa7-8af6-4f1a-9bc1-b5c10d874d4a" containerName="mariadb-database-create" Feb 16 23:05:41 crc kubenswrapper[4865]: I0216 23:05:41.886088 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3649fa7-8af6-4f1a-9bc1-b5c10d874d4a" containerName="mariadb-database-create" Feb 16 23:05:41 crc kubenswrapper[4865]: E0216 23:05:41.886105 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="624b466e-6c64-454c-8f81-636a035d9903" containerName="neutron-api" Feb 16 23:05:41 
crc kubenswrapper[4865]: I0216 23:05:41.886110 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="624b466e-6c64-454c-8f81-636a035d9903" containerName="neutron-api" Feb 16 23:05:41 crc kubenswrapper[4865]: E0216 23:05:41.886123 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d03aa28b-05a9-4123-a616-c1713e81c63c" containerName="mariadb-account-create-update" Feb 16 23:05:41 crc kubenswrapper[4865]: I0216 23:05:41.886129 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="d03aa28b-05a9-4123-a616-c1713e81c63c" containerName="mariadb-account-create-update" Feb 16 23:05:41 crc kubenswrapper[4865]: I0216 23:05:41.886383 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="adff2e0d-89a3-423c-8e83-a16b64c67a82" containerName="mariadb-account-create-update" Feb 16 23:05:41 crc kubenswrapper[4865]: I0216 23:05:41.886395 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="d03aa28b-05a9-4123-a616-c1713e81c63c" containerName="mariadb-account-create-update" Feb 16 23:05:41 crc kubenswrapper[4865]: I0216 23:05:41.886406 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="624b466e-6c64-454c-8f81-636a035d9903" containerName="neutron-api" Feb 16 23:05:41 crc kubenswrapper[4865]: I0216 23:05:41.886414 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="624b466e-6c64-454c-8f81-636a035d9903" containerName="neutron-httpd" Feb 16 23:05:41 crc kubenswrapper[4865]: I0216 23:05:41.886423 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="35227f0d-ff5a-4c4c-a160-35a7743d4ca2" containerName="mariadb-account-create-update" Feb 16 23:05:41 crc kubenswrapper[4865]: I0216 23:05:41.886431 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ca4e99c-e5c8-49f6-bf1b-dd6b73576b54" containerName="mariadb-database-create" Feb 16 23:05:41 crc kubenswrapper[4865]: I0216 23:05:41.886439 4865 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="0e310dc7-ddd2-4a29-97a2-b071095d9966" containerName="mariadb-database-create" Feb 16 23:05:41 crc kubenswrapper[4865]: I0216 23:05:41.886453 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3649fa7-8af6-4f1a-9bc1-b5c10d874d4a" containerName="mariadb-database-create" Feb 16 23:05:41 crc kubenswrapper[4865]: I0216 23:05:41.887344 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-rm49l" Feb 16 23:05:41 crc kubenswrapper[4865]: I0216 23:05:41.889873 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-9z56r" Feb 16 23:05:41 crc kubenswrapper[4865]: I0216 23:05:41.890302 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 16 23:05:41 crc kubenswrapper[4865]: I0216 23:05:41.893346 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 16 23:05:41 crc kubenswrapper[4865]: I0216 23:05:41.895521 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-rm49l"] Feb 16 23:05:42 crc kubenswrapper[4865]: I0216 23:05:42.019443 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79d1006d-8c29-497a-8957-91fc74d71fe8-scripts\") pod \"nova-cell0-conductor-db-sync-rm49l\" (UID: \"79d1006d-8c29-497a-8957-91fc74d71fe8\") " pod="openstack/nova-cell0-conductor-db-sync-rm49l" Feb 16 23:05:42 crc kubenswrapper[4865]: I0216 23:05:42.019764 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79d1006d-8c29-497a-8957-91fc74d71fe8-config-data\") pod \"nova-cell0-conductor-db-sync-rm49l\" (UID: \"79d1006d-8c29-497a-8957-91fc74d71fe8\") " pod="openstack/nova-cell0-conductor-db-sync-rm49l" Feb 16 
23:05:42 crc kubenswrapper[4865]: I0216 23:05:42.020265 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79d1006d-8c29-497a-8957-91fc74d71fe8-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-rm49l\" (UID: \"79d1006d-8c29-497a-8957-91fc74d71fe8\") " pod="openstack/nova-cell0-conductor-db-sync-rm49l" Feb 16 23:05:42 crc kubenswrapper[4865]: I0216 23:05:42.020443 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4q6j6\" (UniqueName: \"kubernetes.io/projected/79d1006d-8c29-497a-8957-91fc74d71fe8-kube-api-access-4q6j6\") pod \"nova-cell0-conductor-db-sync-rm49l\" (UID: \"79d1006d-8c29-497a-8957-91fc74d71fe8\") " pod="openstack/nova-cell0-conductor-db-sync-rm49l" Feb 16 23:05:42 crc kubenswrapper[4865]: I0216 23:05:42.122152 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79d1006d-8c29-497a-8957-91fc74d71fe8-scripts\") pod \"nova-cell0-conductor-db-sync-rm49l\" (UID: \"79d1006d-8c29-497a-8957-91fc74d71fe8\") " pod="openstack/nova-cell0-conductor-db-sync-rm49l" Feb 16 23:05:42 crc kubenswrapper[4865]: I0216 23:05:42.122228 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79d1006d-8c29-497a-8957-91fc74d71fe8-config-data\") pod \"nova-cell0-conductor-db-sync-rm49l\" (UID: \"79d1006d-8c29-497a-8957-91fc74d71fe8\") " pod="openstack/nova-cell0-conductor-db-sync-rm49l" Feb 16 23:05:42 crc kubenswrapper[4865]: I0216 23:05:42.122377 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79d1006d-8c29-497a-8957-91fc74d71fe8-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-rm49l\" (UID: \"79d1006d-8c29-497a-8957-91fc74d71fe8\") " 
pod="openstack/nova-cell0-conductor-db-sync-rm49l" Feb 16 23:05:42 crc kubenswrapper[4865]: I0216 23:05:42.122430 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4q6j6\" (UniqueName: \"kubernetes.io/projected/79d1006d-8c29-497a-8957-91fc74d71fe8-kube-api-access-4q6j6\") pod \"nova-cell0-conductor-db-sync-rm49l\" (UID: \"79d1006d-8c29-497a-8957-91fc74d71fe8\") " pod="openstack/nova-cell0-conductor-db-sync-rm49l" Feb 16 23:05:42 crc kubenswrapper[4865]: I0216 23:05:42.128864 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79d1006d-8c29-497a-8957-91fc74d71fe8-config-data\") pod \"nova-cell0-conductor-db-sync-rm49l\" (UID: \"79d1006d-8c29-497a-8957-91fc74d71fe8\") " pod="openstack/nova-cell0-conductor-db-sync-rm49l" Feb 16 23:05:42 crc kubenswrapper[4865]: I0216 23:05:42.132587 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79d1006d-8c29-497a-8957-91fc74d71fe8-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-rm49l\" (UID: \"79d1006d-8c29-497a-8957-91fc74d71fe8\") " pod="openstack/nova-cell0-conductor-db-sync-rm49l" Feb 16 23:05:42 crc kubenswrapper[4865]: I0216 23:05:42.135897 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79d1006d-8c29-497a-8957-91fc74d71fe8-scripts\") pod \"nova-cell0-conductor-db-sync-rm49l\" (UID: \"79d1006d-8c29-497a-8957-91fc74d71fe8\") " pod="openstack/nova-cell0-conductor-db-sync-rm49l" Feb 16 23:05:42 crc kubenswrapper[4865]: I0216 23:05:42.141073 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4q6j6\" (UniqueName: \"kubernetes.io/projected/79d1006d-8c29-497a-8957-91fc74d71fe8-kube-api-access-4q6j6\") pod \"nova-cell0-conductor-db-sync-rm49l\" (UID: \"79d1006d-8c29-497a-8957-91fc74d71fe8\") " 
pod="openstack/nova-cell0-conductor-db-sync-rm49l" Feb 16 23:05:42 crc kubenswrapper[4865]: I0216 23:05:42.203652 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-rm49l" Feb 16 23:05:42 crc kubenswrapper[4865]: I0216 23:05:42.433311 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="624b466e-6c64-454c-8f81-636a035d9903" path="/var/lib/kubelet/pods/624b466e-6c64-454c-8f81-636a035d9903/volumes" Feb 16 23:05:42 crc kubenswrapper[4865]: I0216 23:05:42.680994 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-rm49l"] Feb 16 23:05:42 crc kubenswrapper[4865]: I0216 23:05:42.731911 4865 generic.go:334] "Generic (PLEG): container finished" podID="63a51531-7a38-4696-b1c9-806551f65cc8" containerID="44de764bcd45311b0a8abc41066494c7043b31745f2795b93071da2e20ec61f8" exitCode=0 Feb 16 23:05:42 crc kubenswrapper[4865]: I0216 23:05:42.731996 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"63a51531-7a38-4696-b1c9-806551f65cc8","Type":"ContainerDied","Data":"44de764bcd45311b0a8abc41066494c7043b31745f2795b93071da2e20ec61f8"} Feb 16 23:05:42 crc kubenswrapper[4865]: I0216 23:05:42.733968 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-rm49l" event={"ID":"79d1006d-8c29-497a-8957-91fc74d71fe8","Type":"ContainerStarted","Data":"ef519baa6ad5207be3a839024a49c74e023a648e783287f15f143c73af5a53f2"} Feb 16 23:05:42 crc kubenswrapper[4865]: I0216 23:05:42.884149 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 23:05:43 crc kubenswrapper[4865]: I0216 23:05:43.038669 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63a51531-7a38-4696-b1c9-806551f65cc8-run-httpd\") pod \"63a51531-7a38-4696-b1c9-806551f65cc8\" (UID: \"63a51531-7a38-4696-b1c9-806551f65cc8\") " Feb 16 23:05:43 crc kubenswrapper[4865]: I0216 23:05:43.038840 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/63a51531-7a38-4696-b1c9-806551f65cc8-sg-core-conf-yaml\") pod \"63a51531-7a38-4696-b1c9-806551f65cc8\" (UID: \"63a51531-7a38-4696-b1c9-806551f65cc8\") " Feb 16 23:05:43 crc kubenswrapper[4865]: I0216 23:05:43.038924 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63a51531-7a38-4696-b1c9-806551f65cc8-log-httpd\") pod \"63a51531-7a38-4696-b1c9-806551f65cc8\" (UID: \"63a51531-7a38-4696-b1c9-806551f65cc8\") " Feb 16 23:05:43 crc kubenswrapper[4865]: I0216 23:05:43.038996 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63a51531-7a38-4696-b1c9-806551f65cc8-config-data\") pod \"63a51531-7a38-4696-b1c9-806551f65cc8\" (UID: \"63a51531-7a38-4696-b1c9-806551f65cc8\") " Feb 16 23:05:43 crc kubenswrapper[4865]: I0216 23:05:43.039057 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63a51531-7a38-4696-b1c9-806551f65cc8-combined-ca-bundle\") pod \"63a51531-7a38-4696-b1c9-806551f65cc8\" (UID: \"63a51531-7a38-4696-b1c9-806551f65cc8\") " Feb 16 23:05:43 crc kubenswrapper[4865]: I0216 23:05:43.039108 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmn2h\" (UniqueName: 
\"kubernetes.io/projected/63a51531-7a38-4696-b1c9-806551f65cc8-kube-api-access-fmn2h\") pod \"63a51531-7a38-4696-b1c9-806551f65cc8\" (UID: \"63a51531-7a38-4696-b1c9-806551f65cc8\") " Feb 16 23:05:43 crc kubenswrapper[4865]: I0216 23:05:43.039140 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63a51531-7a38-4696-b1c9-806551f65cc8-scripts\") pod \"63a51531-7a38-4696-b1c9-806551f65cc8\" (UID: \"63a51531-7a38-4696-b1c9-806551f65cc8\") " Feb 16 23:05:43 crc kubenswrapper[4865]: I0216 23:05:43.039605 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63a51531-7a38-4696-b1c9-806551f65cc8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "63a51531-7a38-4696-b1c9-806551f65cc8" (UID: "63a51531-7a38-4696-b1c9-806551f65cc8"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:05:43 crc kubenswrapper[4865]: I0216 23:05:43.039629 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63a51531-7a38-4696-b1c9-806551f65cc8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "63a51531-7a38-4696-b1c9-806551f65cc8" (UID: "63a51531-7a38-4696-b1c9-806551f65cc8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:05:43 crc kubenswrapper[4865]: I0216 23:05:43.044879 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63a51531-7a38-4696-b1c9-806551f65cc8-kube-api-access-fmn2h" (OuterVolumeSpecName: "kube-api-access-fmn2h") pod "63a51531-7a38-4696-b1c9-806551f65cc8" (UID: "63a51531-7a38-4696-b1c9-806551f65cc8"). InnerVolumeSpecName "kube-api-access-fmn2h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:05:43 crc kubenswrapper[4865]: I0216 23:05:43.051062 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63a51531-7a38-4696-b1c9-806551f65cc8-scripts" (OuterVolumeSpecName: "scripts") pod "63a51531-7a38-4696-b1c9-806551f65cc8" (UID: "63a51531-7a38-4696-b1c9-806551f65cc8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:05:43 crc kubenswrapper[4865]: I0216 23:05:43.067674 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63a51531-7a38-4696-b1c9-806551f65cc8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "63a51531-7a38-4696-b1c9-806551f65cc8" (UID: "63a51531-7a38-4696-b1c9-806551f65cc8"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:05:43 crc kubenswrapper[4865]: I0216 23:05:43.140896 4865 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63a51531-7a38-4696-b1c9-806551f65cc8-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:43 crc kubenswrapper[4865]: I0216 23:05:43.140928 4865 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/63a51531-7a38-4696-b1c9-806551f65cc8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:43 crc kubenswrapper[4865]: I0216 23:05:43.140938 4865 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63a51531-7a38-4696-b1c9-806551f65cc8-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:43 crc kubenswrapper[4865]: I0216 23:05:43.140947 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fmn2h\" (UniqueName: \"kubernetes.io/projected/63a51531-7a38-4696-b1c9-806551f65cc8-kube-api-access-fmn2h\") on node \"crc\" DevicePath \"\"" Feb 16 
23:05:43 crc kubenswrapper[4865]: I0216 23:05:43.140955 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63a51531-7a38-4696-b1c9-806551f65cc8-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:43 crc kubenswrapper[4865]: I0216 23:05:43.145785 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63a51531-7a38-4696-b1c9-806551f65cc8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "63a51531-7a38-4696-b1c9-806551f65cc8" (UID: "63a51531-7a38-4696-b1c9-806551f65cc8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:05:43 crc kubenswrapper[4865]: I0216 23:05:43.160256 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63a51531-7a38-4696-b1c9-806551f65cc8-config-data" (OuterVolumeSpecName: "config-data") pod "63a51531-7a38-4696-b1c9-806551f65cc8" (UID: "63a51531-7a38-4696-b1c9-806551f65cc8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:05:43 crc kubenswrapper[4865]: I0216 23:05:43.242849 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63a51531-7a38-4696-b1c9-806551f65cc8-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:43 crc kubenswrapper[4865]: I0216 23:05:43.242908 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63a51531-7a38-4696-b1c9-806551f65cc8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:43 crc kubenswrapper[4865]: I0216 23:05:43.748335 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"63a51531-7a38-4696-b1c9-806551f65cc8","Type":"ContainerDied","Data":"4fe9cc28af81fd95879a4eb17169847fc6e4274fc3fd267c5ecac902a7e96660"} Feb 16 23:05:43 crc kubenswrapper[4865]: I0216 23:05:43.748765 4865 scope.go:117] "RemoveContainer" containerID="d8459476ab7cf9f5c1ba03f6584631312b40ed8053f6a11d849948f5622f1d6c" Feb 16 23:05:43 crc kubenswrapper[4865]: I0216 23:05:43.748436 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 23:05:43 crc kubenswrapper[4865]: I0216 23:05:43.779039 4865 scope.go:117] "RemoveContainer" containerID="6ff9c038a037f073a5dc34af1b8fa08bc8a400cb98a375ac4b48abe489332a59" Feb 16 23:05:43 crc kubenswrapper[4865]: I0216 23:05:43.788018 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 23:05:43 crc kubenswrapper[4865]: I0216 23:05:43.815662 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 23:05:43 crc kubenswrapper[4865]: I0216 23:05:43.816041 4865 scope.go:117] "RemoveContainer" containerID="d7876d96fc0ee57fcee7bcbe39a681143f2428f1cc2469e649cc5f6bb0f65a15" Feb 16 23:05:43 crc kubenswrapper[4865]: I0216 23:05:43.831973 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 23:05:43 crc kubenswrapper[4865]: E0216 23:05:43.832477 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63a51531-7a38-4696-b1c9-806551f65cc8" containerName="ceilometer-central-agent" Feb 16 23:05:43 crc kubenswrapper[4865]: I0216 23:05:43.832500 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="63a51531-7a38-4696-b1c9-806551f65cc8" containerName="ceilometer-central-agent" Feb 16 23:05:43 crc kubenswrapper[4865]: E0216 23:05:43.832516 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63a51531-7a38-4696-b1c9-806551f65cc8" containerName="proxy-httpd" Feb 16 23:05:43 crc kubenswrapper[4865]: I0216 23:05:43.832524 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="63a51531-7a38-4696-b1c9-806551f65cc8" containerName="proxy-httpd" Feb 16 23:05:43 crc kubenswrapper[4865]: E0216 23:05:43.832549 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63a51531-7a38-4696-b1c9-806551f65cc8" containerName="ceilometer-notification-agent" Feb 16 23:05:43 crc kubenswrapper[4865]: I0216 23:05:43.832558 4865 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="63a51531-7a38-4696-b1c9-806551f65cc8" containerName="ceilometer-notification-agent" Feb 16 23:05:43 crc kubenswrapper[4865]: E0216 23:05:43.832570 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63a51531-7a38-4696-b1c9-806551f65cc8" containerName="sg-core" Feb 16 23:05:43 crc kubenswrapper[4865]: I0216 23:05:43.832593 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="63a51531-7a38-4696-b1c9-806551f65cc8" containerName="sg-core" Feb 16 23:05:43 crc kubenswrapper[4865]: I0216 23:05:43.832810 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="63a51531-7a38-4696-b1c9-806551f65cc8" containerName="ceilometer-notification-agent" Feb 16 23:05:43 crc kubenswrapper[4865]: I0216 23:05:43.832843 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="63a51531-7a38-4696-b1c9-806551f65cc8" containerName="proxy-httpd" Feb 16 23:05:43 crc kubenswrapper[4865]: I0216 23:05:43.832861 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="63a51531-7a38-4696-b1c9-806551f65cc8" containerName="sg-core" Feb 16 23:05:43 crc kubenswrapper[4865]: I0216 23:05:43.832876 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="63a51531-7a38-4696-b1c9-806551f65cc8" containerName="ceilometer-central-agent" Feb 16 23:05:43 crc kubenswrapper[4865]: I0216 23:05:43.834952 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 23:05:43 crc kubenswrapper[4865]: I0216 23:05:43.839873 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 23:05:43 crc kubenswrapper[4865]: I0216 23:05:43.841546 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 23:05:43 crc kubenswrapper[4865]: I0216 23:05:43.847420 4865 scope.go:117] "RemoveContainer" containerID="44de764bcd45311b0a8abc41066494c7043b31745f2795b93071da2e20ec61f8" Feb 16 23:05:43 crc kubenswrapper[4865]: I0216 23:05:43.848741 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 23:05:43 crc kubenswrapper[4865]: I0216 23:05:43.956478 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa352076-d419-40e7-9307-8e317a75e879-run-httpd\") pod \"ceilometer-0\" (UID: \"aa352076-d419-40e7-9307-8e317a75e879\") " pod="openstack/ceilometer-0" Feb 16 23:05:43 crc kubenswrapper[4865]: I0216 23:05:43.956545 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6725\" (UniqueName: \"kubernetes.io/projected/aa352076-d419-40e7-9307-8e317a75e879-kube-api-access-k6725\") pod \"ceilometer-0\" (UID: \"aa352076-d419-40e7-9307-8e317a75e879\") " pod="openstack/ceilometer-0" Feb 16 23:05:43 crc kubenswrapper[4865]: I0216 23:05:43.956580 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aa352076-d419-40e7-9307-8e317a75e879-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"aa352076-d419-40e7-9307-8e317a75e879\") " pod="openstack/ceilometer-0" Feb 16 23:05:43 crc kubenswrapper[4865]: I0216 23:05:43.956629 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa352076-d419-40e7-9307-8e317a75e879-log-httpd\") pod \"ceilometer-0\" (UID: \"aa352076-d419-40e7-9307-8e317a75e879\") " pod="openstack/ceilometer-0" Feb 16 23:05:43 crc kubenswrapper[4865]: I0216 23:05:43.956963 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa352076-d419-40e7-9307-8e317a75e879-scripts\") pod \"ceilometer-0\" (UID: \"aa352076-d419-40e7-9307-8e317a75e879\") " pod="openstack/ceilometer-0" Feb 16 23:05:43 crc kubenswrapper[4865]: I0216 23:05:43.957042 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa352076-d419-40e7-9307-8e317a75e879-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"aa352076-d419-40e7-9307-8e317a75e879\") " pod="openstack/ceilometer-0" Feb 16 23:05:43 crc kubenswrapper[4865]: I0216 23:05:43.957104 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa352076-d419-40e7-9307-8e317a75e879-config-data\") pod \"ceilometer-0\" (UID: \"aa352076-d419-40e7-9307-8e317a75e879\") " pod="openstack/ceilometer-0" Feb 16 23:05:44 crc kubenswrapper[4865]: I0216 23:05:44.059032 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa352076-d419-40e7-9307-8e317a75e879-log-httpd\") pod \"ceilometer-0\" (UID: \"aa352076-d419-40e7-9307-8e317a75e879\") " pod="openstack/ceilometer-0" Feb 16 23:05:44 crc kubenswrapper[4865]: I0216 23:05:44.059152 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa352076-d419-40e7-9307-8e317a75e879-scripts\") pod \"ceilometer-0\" (UID: \"aa352076-d419-40e7-9307-8e317a75e879\") " 
pod="openstack/ceilometer-0" Feb 16 23:05:44 crc kubenswrapper[4865]: I0216 23:05:44.059181 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa352076-d419-40e7-9307-8e317a75e879-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"aa352076-d419-40e7-9307-8e317a75e879\") " pod="openstack/ceilometer-0" Feb 16 23:05:44 crc kubenswrapper[4865]: I0216 23:05:44.059211 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa352076-d419-40e7-9307-8e317a75e879-config-data\") pod \"ceilometer-0\" (UID: \"aa352076-d419-40e7-9307-8e317a75e879\") " pod="openstack/ceilometer-0" Feb 16 23:05:44 crc kubenswrapper[4865]: I0216 23:05:44.059249 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa352076-d419-40e7-9307-8e317a75e879-run-httpd\") pod \"ceilometer-0\" (UID: \"aa352076-d419-40e7-9307-8e317a75e879\") " pod="openstack/ceilometer-0" Feb 16 23:05:44 crc kubenswrapper[4865]: I0216 23:05:44.059313 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6725\" (UniqueName: \"kubernetes.io/projected/aa352076-d419-40e7-9307-8e317a75e879-kube-api-access-k6725\") pod \"ceilometer-0\" (UID: \"aa352076-d419-40e7-9307-8e317a75e879\") " pod="openstack/ceilometer-0" Feb 16 23:05:44 crc kubenswrapper[4865]: I0216 23:05:44.059358 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aa352076-d419-40e7-9307-8e317a75e879-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"aa352076-d419-40e7-9307-8e317a75e879\") " pod="openstack/ceilometer-0" Feb 16 23:05:44 crc kubenswrapper[4865]: I0216 23:05:44.060201 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/aa352076-d419-40e7-9307-8e317a75e879-run-httpd\") pod \"ceilometer-0\" (UID: \"aa352076-d419-40e7-9307-8e317a75e879\") " pod="openstack/ceilometer-0" Feb 16 23:05:44 crc kubenswrapper[4865]: I0216 23:05:44.060225 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa352076-d419-40e7-9307-8e317a75e879-log-httpd\") pod \"ceilometer-0\" (UID: \"aa352076-d419-40e7-9307-8e317a75e879\") " pod="openstack/ceilometer-0" Feb 16 23:05:44 crc kubenswrapper[4865]: I0216 23:05:44.064586 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aa352076-d419-40e7-9307-8e317a75e879-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"aa352076-d419-40e7-9307-8e317a75e879\") " pod="openstack/ceilometer-0" Feb 16 23:05:44 crc kubenswrapper[4865]: I0216 23:05:44.064922 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa352076-d419-40e7-9307-8e317a75e879-scripts\") pod \"ceilometer-0\" (UID: \"aa352076-d419-40e7-9307-8e317a75e879\") " pod="openstack/ceilometer-0" Feb 16 23:05:44 crc kubenswrapper[4865]: I0216 23:05:44.065091 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa352076-d419-40e7-9307-8e317a75e879-config-data\") pod \"ceilometer-0\" (UID: \"aa352076-d419-40e7-9307-8e317a75e879\") " pod="openstack/ceilometer-0" Feb 16 23:05:44 crc kubenswrapper[4865]: I0216 23:05:44.070416 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa352076-d419-40e7-9307-8e317a75e879-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"aa352076-d419-40e7-9307-8e317a75e879\") " pod="openstack/ceilometer-0" Feb 16 23:05:44 crc kubenswrapper[4865]: I0216 23:05:44.087905 4865 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-k6725\" (UniqueName: \"kubernetes.io/projected/aa352076-d419-40e7-9307-8e317a75e879-kube-api-access-k6725\") pod \"ceilometer-0\" (UID: \"aa352076-d419-40e7-9307-8e317a75e879\") " pod="openstack/ceilometer-0" Feb 16 23:05:44 crc kubenswrapper[4865]: I0216 23:05:44.155647 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 23:05:44 crc kubenswrapper[4865]: I0216 23:05:44.427512 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63a51531-7a38-4696-b1c9-806551f65cc8" path="/var/lib/kubelet/pods/63a51531-7a38-4696-b1c9-806551f65cc8/volumes" Feb 16 23:05:44 crc kubenswrapper[4865]: I0216 23:05:44.587575 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 23:05:44 crc kubenswrapper[4865]: W0216 23:05:44.593238 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa352076_d419_40e7_9307_8e317a75e879.slice/crio-25c1164d7153313055c570436d1f5c8829f25800db6b19a46b0b7e3faa324878 WatchSource:0}: Error finding container 25c1164d7153313055c570436d1f5c8829f25800db6b19a46b0b7e3faa324878: Status 404 returned error can't find the container with id 25c1164d7153313055c570436d1f5c8829f25800db6b19a46b0b7e3faa324878 Feb 16 23:05:44 crc kubenswrapper[4865]: I0216 23:05:44.760091 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa352076-d419-40e7-9307-8e317a75e879","Type":"ContainerStarted","Data":"25c1164d7153313055c570436d1f5c8829f25800db6b19a46b0b7e3faa324878"} Feb 16 23:05:45 crc kubenswrapper[4865]: I0216 23:05:45.772085 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa352076-d419-40e7-9307-8e317a75e879","Type":"ContainerStarted","Data":"2f056974bab45a7c87c2eb85f8e6af8a4f35d7df670b1b9a39c9c84b40b3ecac"} Feb 16 23:05:46 crc 
kubenswrapper[4865]: I0216 23:05:46.384611 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 16 23:05:47 crc kubenswrapper[4865]: I0216 23:05:47.794942 4865 generic.go:334] "Generic (PLEG): container finished" podID="cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3" containerID="5fc273bbb3c9c3f7431d58f42273042ec637806a09beb179e3b7c7fb8231c767" exitCode=137 Feb 16 23:05:47 crc kubenswrapper[4865]: I0216 23:05:47.795010 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5dbb7f8956-m76fk" event={"ID":"cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3","Type":"ContainerDied","Data":"5fc273bbb3c9c3f7431d58f42273042ec637806a09beb179e3b7c7fb8231c767"} Feb 16 23:05:48 crc kubenswrapper[4865]: I0216 23:05:48.942365 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 23:05:50 crc kubenswrapper[4865]: I0216 23:05:50.724956 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5dbb7f8956-m76fk" Feb 16 23:05:50 crc kubenswrapper[4865]: I0216 23:05:50.845067 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3-combined-ca-bundle\") pod \"cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3\" (UID: \"cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3\") " Feb 16 23:05:50 crc kubenswrapper[4865]: I0216 23:05:50.845244 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxpwf\" (UniqueName: \"kubernetes.io/projected/cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3-kube-api-access-jxpwf\") pod \"cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3\" (UID: \"cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3\") " Feb 16 23:05:50 crc kubenswrapper[4865]: I0216 23:05:50.845331 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3-horizon-tls-certs\") pod \"cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3\" (UID: \"cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3\") " Feb 16 23:05:50 crc kubenswrapper[4865]: I0216 23:05:50.845371 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3-logs\") pod \"cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3\" (UID: \"cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3\") " Feb 16 23:05:50 crc kubenswrapper[4865]: I0216 23:05:50.845477 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3-config-data\") pod \"cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3\" (UID: \"cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3\") " Feb 16 23:05:50 crc kubenswrapper[4865]: I0216 23:05:50.845526 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3-scripts\") pod \"cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3\" (UID: \"cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3\") " Feb 16 23:05:50 crc kubenswrapper[4865]: I0216 23:05:50.845554 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3-horizon-secret-key\") pod \"cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3\" (UID: \"cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3\") " Feb 16 23:05:50 crc kubenswrapper[4865]: I0216 23:05:50.846392 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3-logs" (OuterVolumeSpecName: "logs") pod "cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3" (UID: "cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:05:50 crc kubenswrapper[4865]: I0216 23:05:50.860571 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5dbb7f8956-m76fk" event={"ID":"cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3","Type":"ContainerDied","Data":"a4a9dab95832c674ae1c91886d10a39340070e56b6f0df0a3e9c6f066169d932"} Feb 16 23:05:50 crc kubenswrapper[4865]: I0216 23:05:50.860622 4865 scope.go:117] "RemoveContainer" containerID="fe4078ad96c214d8c6f52173dba97bd712dac7fd5f98c905b528ed5de0c2e126" Feb 16 23:05:50 crc kubenswrapper[4865]: I0216 23:05:50.861076 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5dbb7f8956-m76fk" Feb 16 23:05:50 crc kubenswrapper[4865]: I0216 23:05:50.887545 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3-kube-api-access-jxpwf" (OuterVolumeSpecName: "kube-api-access-jxpwf") pod "cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3" (UID: "cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3"). InnerVolumeSpecName "kube-api-access-jxpwf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:05:50 crc kubenswrapper[4865]: I0216 23:05:50.889416 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3" (UID: "cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:05:50 crc kubenswrapper[4865]: I0216 23:05:50.923474 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3-config-data" (OuterVolumeSpecName: "config-data") pod "cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3" (UID: "cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:05:50 crc kubenswrapper[4865]: I0216 23:05:50.936605 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3-scripts" (OuterVolumeSpecName: "scripts") pod "cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3" (UID: "cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:05:50 crc kubenswrapper[4865]: I0216 23:05:50.948457 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jxpwf\" (UniqueName: \"kubernetes.io/projected/cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3-kube-api-access-jxpwf\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:50 crc kubenswrapper[4865]: I0216 23:05:50.948491 4865 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3-logs\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:50 crc kubenswrapper[4865]: I0216 23:05:50.948501 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:50 crc kubenswrapper[4865]: I0216 23:05:50.948512 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:50 crc kubenswrapper[4865]: I0216 23:05:50.948522 4865 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:51 crc kubenswrapper[4865]: I0216 23:05:51.022362 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3" (UID: "cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:05:51 crc kubenswrapper[4865]: I0216 23:05:51.039071 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3" (UID: "cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:05:51 crc kubenswrapper[4865]: I0216 23:05:51.050597 4865 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:51 crc kubenswrapper[4865]: I0216 23:05:51.050627 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:51 crc kubenswrapper[4865]: I0216 23:05:51.097856 4865 scope.go:117] "RemoveContainer" containerID="5fc273bbb3c9c3f7431d58f42273042ec637806a09beb179e3b7c7fb8231c767" Feb 16 23:05:51 crc kubenswrapper[4865]: I0216 23:05:51.200728 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5dbb7f8956-m76fk"] Feb 16 23:05:51 crc kubenswrapper[4865]: I0216 23:05:51.208338 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5dbb7f8956-m76fk"] Feb 16 23:05:51 crc kubenswrapper[4865]: I0216 23:05:51.876410 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-rm49l" 
event={"ID":"79d1006d-8c29-497a-8957-91fc74d71fe8","Type":"ContainerStarted","Data":"dc5bb7e8a6208a9d466a5c01af7b26986329345e244aaac034641eef183e7a69"} Feb 16 23:05:51 crc kubenswrapper[4865]: I0216 23:05:51.878921 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa352076-d419-40e7-9307-8e317a75e879","Type":"ContainerStarted","Data":"95032640ce2c0dbf1f0edfc4ceed5d3cf5b08295789534869529119a0caf6a4c"} Feb 16 23:05:51 crc kubenswrapper[4865]: I0216 23:05:51.878959 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa352076-d419-40e7-9307-8e317a75e879","Type":"ContainerStarted","Data":"1318fb1f9634dd95b46b163edbd29cb21139c49760b272f89c7351a7a266c5a4"} Feb 16 23:05:51 crc kubenswrapper[4865]: I0216 23:05:51.895675 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-rm49l" podStartSLOduration=2.966528165 podStartE2EDuration="10.895658143s" podCreationTimestamp="2026-02-16 23:05:41 +0000 UTC" firstStartedPulling="2026-02-16 23:05:42.682009641 +0000 UTC m=+1183.005716602" lastFinishedPulling="2026-02-16 23:05:50.611139619 +0000 UTC m=+1190.934846580" observedRunningTime="2026-02-16 23:05:51.890870717 +0000 UTC m=+1192.214577678" watchObservedRunningTime="2026-02-16 23:05:51.895658143 +0000 UTC m=+1192.219365104" Feb 16 23:05:52 crc kubenswrapper[4865]: I0216 23:05:52.449800 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3" path="/var/lib/kubelet/pods/cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3/volumes" Feb 16 23:05:53 crc kubenswrapper[4865]: I0216 23:05:53.902424 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa352076-d419-40e7-9307-8e317a75e879","Type":"ContainerStarted","Data":"4a6cea5bdae209ec96a59c43718f6e0b94fb2c32a65de7b7ad111e75d1910101"} Feb 16 23:05:53 crc kubenswrapper[4865]: I0216 23:05:53.903009 4865 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 23:05:53 crc kubenswrapper[4865]: I0216 23:05:53.902719 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aa352076-d419-40e7-9307-8e317a75e879" containerName="proxy-httpd" containerID="cri-o://4a6cea5bdae209ec96a59c43718f6e0b94fb2c32a65de7b7ad111e75d1910101" gracePeriod=30 Feb 16 23:05:53 crc kubenswrapper[4865]: I0216 23:05:53.902728 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aa352076-d419-40e7-9307-8e317a75e879" containerName="sg-core" containerID="cri-o://95032640ce2c0dbf1f0edfc4ceed5d3cf5b08295789534869529119a0caf6a4c" gracePeriod=30 Feb 16 23:05:53 crc kubenswrapper[4865]: I0216 23:05:53.902763 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aa352076-d419-40e7-9307-8e317a75e879" containerName="ceilometer-notification-agent" containerID="cri-o://1318fb1f9634dd95b46b163edbd29cb21139c49760b272f89c7351a7a266c5a4" gracePeriod=30 Feb 16 23:05:53 crc kubenswrapper[4865]: I0216 23:05:53.902919 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aa352076-d419-40e7-9307-8e317a75e879" containerName="ceilometer-central-agent" containerID="cri-o://2f056974bab45a7c87c2eb85f8e6af8a4f35d7df670b1b9a39c9c84b40b3ecac" gracePeriod=30 Feb 16 23:05:53 crc kubenswrapper[4865]: I0216 23:05:53.938361 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.28071315 podStartE2EDuration="10.93833513s" podCreationTimestamp="2026-02-16 23:05:43 +0000 UTC" firstStartedPulling="2026-02-16 23:05:44.596209829 +0000 UTC m=+1184.919916790" lastFinishedPulling="2026-02-16 23:05:53.253831779 +0000 UTC m=+1193.577538770" observedRunningTime="2026-02-16 23:05:53.928645345 +0000 UTC 
m=+1194.252352306" watchObservedRunningTime="2026-02-16 23:05:53.93833513 +0000 UTC m=+1194.262042091" Feb 16 23:05:54 crc kubenswrapper[4865]: I0216 23:05:54.087343 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 23:05:54 crc kubenswrapper[4865]: I0216 23:05:54.087824 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="b47a4cd1-17e5-45eb-b964-9503d50f7089" containerName="glance-log" containerID="cri-o://86b8821ea672c3a2e52e20af623d56b5f97f0fb05784431ffec0cf8d144dab7f" gracePeriod=30 Feb 16 23:05:54 crc kubenswrapper[4865]: I0216 23:05:54.087905 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="b47a4cd1-17e5-45eb-b964-9503d50f7089" containerName="glance-httpd" containerID="cri-o://905f21f21331466d23c4ae34ea7635326aedf41d22911a5fae838bb014c1097e" gracePeriod=30 Feb 16 23:05:54 crc kubenswrapper[4865]: I0216 23:05:54.914008 4865 generic.go:334] "Generic (PLEG): container finished" podID="aa352076-d419-40e7-9307-8e317a75e879" containerID="4a6cea5bdae209ec96a59c43718f6e0b94fb2c32a65de7b7ad111e75d1910101" exitCode=0 Feb 16 23:05:54 crc kubenswrapper[4865]: I0216 23:05:54.914306 4865 generic.go:334] "Generic (PLEG): container finished" podID="aa352076-d419-40e7-9307-8e317a75e879" containerID="95032640ce2c0dbf1f0edfc4ceed5d3cf5b08295789534869529119a0caf6a4c" exitCode=2 Feb 16 23:05:54 crc kubenswrapper[4865]: I0216 23:05:54.914315 4865 generic.go:334] "Generic (PLEG): container finished" podID="aa352076-d419-40e7-9307-8e317a75e879" containerID="1318fb1f9634dd95b46b163edbd29cb21139c49760b272f89c7351a7a266c5a4" exitCode=0 Feb 16 23:05:54 crc kubenswrapper[4865]: I0216 23:05:54.914077 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"aa352076-d419-40e7-9307-8e317a75e879","Type":"ContainerDied","Data":"4a6cea5bdae209ec96a59c43718f6e0b94fb2c32a65de7b7ad111e75d1910101"} Feb 16 23:05:54 crc kubenswrapper[4865]: I0216 23:05:54.914380 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa352076-d419-40e7-9307-8e317a75e879","Type":"ContainerDied","Data":"95032640ce2c0dbf1f0edfc4ceed5d3cf5b08295789534869529119a0caf6a4c"} Feb 16 23:05:54 crc kubenswrapper[4865]: I0216 23:05:54.914395 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa352076-d419-40e7-9307-8e317a75e879","Type":"ContainerDied","Data":"1318fb1f9634dd95b46b163edbd29cb21139c49760b272f89c7351a7a266c5a4"} Feb 16 23:05:54 crc kubenswrapper[4865]: I0216 23:05:54.916901 4865 generic.go:334] "Generic (PLEG): container finished" podID="b47a4cd1-17e5-45eb-b964-9503d50f7089" containerID="86b8821ea672c3a2e52e20af623d56b5f97f0fb05784431ffec0cf8d144dab7f" exitCode=143 Feb 16 23:05:54 crc kubenswrapper[4865]: I0216 23:05:54.916955 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b47a4cd1-17e5-45eb-b964-9503d50f7089","Type":"ContainerDied","Data":"86b8821ea672c3a2e52e20af623d56b5f97f0fb05784431ffec0cf8d144dab7f"} Feb 16 23:05:55 crc kubenswrapper[4865]: I0216 23:05:55.156540 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 23:05:55 crc kubenswrapper[4865]: I0216 23:05:55.156840 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="7ce244fb-655c-4941-9ed9-1b2ceddd74d9" containerName="glance-log" containerID="cri-o://f3b55ab017f208f2208e75831d1b45f5f75b0da5303275f898b1c316b523ea39" gracePeriod=30 Feb 16 23:05:55 crc kubenswrapper[4865]: I0216 23:05:55.157166 4865 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/glance-default-internal-api-0" podUID="7ce244fb-655c-4941-9ed9-1b2ceddd74d9" containerName="glance-httpd" containerID="cri-o://c689d158a4efd34829e13feae133efec6eb0bf9d8d42da2271d5bd0e8b5628dd" gracePeriod=30 Feb 16 23:05:55 crc kubenswrapper[4865]: I0216 23:05:55.925916 4865 generic.go:334] "Generic (PLEG): container finished" podID="7ce244fb-655c-4941-9ed9-1b2ceddd74d9" containerID="f3b55ab017f208f2208e75831d1b45f5f75b0da5303275f898b1c316b523ea39" exitCode=143 Feb 16 23:05:55 crc kubenswrapper[4865]: I0216 23:05:55.925961 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7ce244fb-655c-4941-9ed9-1b2ceddd74d9","Type":"ContainerDied","Data":"f3b55ab017f208f2208e75831d1b45f5f75b0da5303275f898b1c316b523ea39"} Feb 16 23:05:56 crc kubenswrapper[4865]: I0216 23:05:56.937171 4865 generic.go:334] "Generic (PLEG): container finished" podID="aa352076-d419-40e7-9307-8e317a75e879" containerID="2f056974bab45a7c87c2eb85f8e6af8a4f35d7df670b1b9a39c9c84b40b3ecac" exitCode=0 Feb 16 23:05:56 crc kubenswrapper[4865]: I0216 23:05:56.937572 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa352076-d419-40e7-9307-8e317a75e879","Type":"ContainerDied","Data":"2f056974bab45a7c87c2eb85f8e6af8a4f35d7df670b1b9a39c9c84b40b3ecac"} Feb 16 23:05:57 crc kubenswrapper[4865]: I0216 23:05:57.050571 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 23:05:57 crc kubenswrapper[4865]: I0216 23:05:57.067818 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa352076-d419-40e7-9307-8e317a75e879-config-data\") pod \"aa352076-d419-40e7-9307-8e317a75e879\" (UID: \"aa352076-d419-40e7-9307-8e317a75e879\") " Feb 16 23:05:57 crc kubenswrapper[4865]: I0216 23:05:57.067889 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa352076-d419-40e7-9307-8e317a75e879-scripts\") pod \"aa352076-d419-40e7-9307-8e317a75e879\" (UID: \"aa352076-d419-40e7-9307-8e317a75e879\") " Feb 16 23:05:57 crc kubenswrapper[4865]: I0216 23:05:57.067976 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa352076-d419-40e7-9307-8e317a75e879-log-httpd\") pod \"aa352076-d419-40e7-9307-8e317a75e879\" (UID: \"aa352076-d419-40e7-9307-8e317a75e879\") " Feb 16 23:05:57 crc kubenswrapper[4865]: I0216 23:05:57.068031 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6725\" (UniqueName: \"kubernetes.io/projected/aa352076-d419-40e7-9307-8e317a75e879-kube-api-access-k6725\") pod \"aa352076-d419-40e7-9307-8e317a75e879\" (UID: \"aa352076-d419-40e7-9307-8e317a75e879\") " Feb 16 23:05:57 crc kubenswrapper[4865]: I0216 23:05:57.068133 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa352076-d419-40e7-9307-8e317a75e879-combined-ca-bundle\") pod \"aa352076-d419-40e7-9307-8e317a75e879\" (UID: \"aa352076-d419-40e7-9307-8e317a75e879\") " Feb 16 23:05:57 crc kubenswrapper[4865]: I0216 23:05:57.068159 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/aa352076-d419-40e7-9307-8e317a75e879-run-httpd\") pod \"aa352076-d419-40e7-9307-8e317a75e879\" (UID: \"aa352076-d419-40e7-9307-8e317a75e879\") " Feb 16 23:05:57 crc kubenswrapper[4865]: I0216 23:05:57.068217 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aa352076-d419-40e7-9307-8e317a75e879-sg-core-conf-yaml\") pod \"aa352076-d419-40e7-9307-8e317a75e879\" (UID: \"aa352076-d419-40e7-9307-8e317a75e879\") " Feb 16 23:05:57 crc kubenswrapper[4865]: I0216 23:05:57.069602 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa352076-d419-40e7-9307-8e317a75e879-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "aa352076-d419-40e7-9307-8e317a75e879" (UID: "aa352076-d419-40e7-9307-8e317a75e879"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:05:57 crc kubenswrapper[4865]: I0216 23:05:57.069657 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa352076-d419-40e7-9307-8e317a75e879-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "aa352076-d419-40e7-9307-8e317a75e879" (UID: "aa352076-d419-40e7-9307-8e317a75e879"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:05:57 crc kubenswrapper[4865]: I0216 23:05:57.083674 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa352076-d419-40e7-9307-8e317a75e879-kube-api-access-k6725" (OuterVolumeSpecName: "kube-api-access-k6725") pod "aa352076-d419-40e7-9307-8e317a75e879" (UID: "aa352076-d419-40e7-9307-8e317a75e879"). InnerVolumeSpecName "kube-api-access-k6725". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:05:57 crc kubenswrapper[4865]: I0216 23:05:57.092752 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa352076-d419-40e7-9307-8e317a75e879-scripts" (OuterVolumeSpecName: "scripts") pod "aa352076-d419-40e7-9307-8e317a75e879" (UID: "aa352076-d419-40e7-9307-8e317a75e879"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:05:57 crc kubenswrapper[4865]: I0216 23:05:57.116653 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa352076-d419-40e7-9307-8e317a75e879-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "aa352076-d419-40e7-9307-8e317a75e879" (UID: "aa352076-d419-40e7-9307-8e317a75e879"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:05:57 crc kubenswrapper[4865]: I0216 23:05:57.170826 4865 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa352076-d419-40e7-9307-8e317a75e879-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:57 crc kubenswrapper[4865]: I0216 23:05:57.170854 4865 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aa352076-d419-40e7-9307-8e317a75e879-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:57 crc kubenswrapper[4865]: I0216 23:05:57.170866 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa352076-d419-40e7-9307-8e317a75e879-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:57 crc kubenswrapper[4865]: I0216 23:05:57.170874 4865 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa352076-d419-40e7-9307-8e317a75e879-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:57 crc kubenswrapper[4865]: 
I0216 23:05:57.170883 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6725\" (UniqueName: \"kubernetes.io/projected/aa352076-d419-40e7-9307-8e317a75e879-kube-api-access-k6725\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:57 crc kubenswrapper[4865]: I0216 23:05:57.194030 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa352076-d419-40e7-9307-8e317a75e879-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aa352076-d419-40e7-9307-8e317a75e879" (UID: "aa352076-d419-40e7-9307-8e317a75e879"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:05:57 crc kubenswrapper[4865]: I0216 23:05:57.201914 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa352076-d419-40e7-9307-8e317a75e879-config-data" (OuterVolumeSpecName: "config-data") pod "aa352076-d419-40e7-9307-8e317a75e879" (UID: "aa352076-d419-40e7-9307-8e317a75e879"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:05:57 crc kubenswrapper[4865]: I0216 23:05:57.273341 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa352076-d419-40e7-9307-8e317a75e879-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:57 crc kubenswrapper[4865]: I0216 23:05:57.273390 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa352076-d419-40e7-9307-8e317a75e879-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:57 crc kubenswrapper[4865]: I0216 23:05:57.811828 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 23:05:57 crc kubenswrapper[4865]: I0216 23:05:57.884266 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b47a4cd1-17e5-45eb-b964-9503d50f7089-config-data\") pod \"b47a4cd1-17e5-45eb-b964-9503d50f7089\" (UID: \"b47a4cd1-17e5-45eb-b964-9503d50f7089\") " Feb 16 23:05:57 crc kubenswrapper[4865]: I0216 23:05:57.884467 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbs5z\" (UniqueName: \"kubernetes.io/projected/b47a4cd1-17e5-45eb-b964-9503d50f7089-kube-api-access-pbs5z\") pod \"b47a4cd1-17e5-45eb-b964-9503d50f7089\" (UID: \"b47a4cd1-17e5-45eb-b964-9503d50f7089\") " Feb 16 23:05:57 crc kubenswrapper[4865]: I0216 23:05:57.884504 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b47a4cd1-17e5-45eb-b964-9503d50f7089-scripts\") pod \"b47a4cd1-17e5-45eb-b964-9503d50f7089\" (UID: \"b47a4cd1-17e5-45eb-b964-9503d50f7089\") " Feb 16 23:05:57 crc kubenswrapper[4865]: I0216 23:05:57.884578 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b47a4cd1-17e5-45eb-b964-9503d50f7089-logs\") pod \"b47a4cd1-17e5-45eb-b964-9503d50f7089\" (UID: \"b47a4cd1-17e5-45eb-b964-9503d50f7089\") " Feb 16 23:05:57 crc kubenswrapper[4865]: I0216 23:05:57.884611 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"b47a4cd1-17e5-45eb-b964-9503d50f7089\" (UID: \"b47a4cd1-17e5-45eb-b964-9503d50f7089\") " Feb 16 23:05:57 crc kubenswrapper[4865]: I0216 23:05:57.884671 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b47a4cd1-17e5-45eb-b964-9503d50f7089-combined-ca-bundle\") pod \"b47a4cd1-17e5-45eb-b964-9503d50f7089\" (UID: \"b47a4cd1-17e5-45eb-b964-9503d50f7089\") " Feb 16 23:05:57 crc kubenswrapper[4865]: I0216 23:05:57.884750 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b47a4cd1-17e5-45eb-b964-9503d50f7089-public-tls-certs\") pod \"b47a4cd1-17e5-45eb-b964-9503d50f7089\" (UID: \"b47a4cd1-17e5-45eb-b964-9503d50f7089\") " Feb 16 23:05:57 crc kubenswrapper[4865]: I0216 23:05:57.884815 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b47a4cd1-17e5-45eb-b964-9503d50f7089-httpd-run\") pod \"b47a4cd1-17e5-45eb-b964-9503d50f7089\" (UID: \"b47a4cd1-17e5-45eb-b964-9503d50f7089\") " Feb 16 23:05:57 crc kubenswrapper[4865]: I0216 23:05:57.885912 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b47a4cd1-17e5-45eb-b964-9503d50f7089-logs" (OuterVolumeSpecName: "logs") pod "b47a4cd1-17e5-45eb-b964-9503d50f7089" (UID: "b47a4cd1-17e5-45eb-b964-9503d50f7089"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:05:57 crc kubenswrapper[4865]: I0216 23:05:57.886756 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b47a4cd1-17e5-45eb-b964-9503d50f7089-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "b47a4cd1-17e5-45eb-b964-9503d50f7089" (UID: "b47a4cd1-17e5-45eb-b964-9503d50f7089"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:05:57 crc kubenswrapper[4865]: I0216 23:05:57.893999 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b47a4cd1-17e5-45eb-b964-9503d50f7089-kube-api-access-pbs5z" (OuterVolumeSpecName: "kube-api-access-pbs5z") pod "b47a4cd1-17e5-45eb-b964-9503d50f7089" (UID: "b47a4cd1-17e5-45eb-b964-9503d50f7089"). InnerVolumeSpecName "kube-api-access-pbs5z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:05:57 crc kubenswrapper[4865]: I0216 23:05:57.894247 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "b47a4cd1-17e5-45eb-b964-9503d50f7089" (UID: "b47a4cd1-17e5-45eb-b964-9503d50f7089"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 16 23:05:57 crc kubenswrapper[4865]: I0216 23:05:57.901413 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b47a4cd1-17e5-45eb-b964-9503d50f7089-scripts" (OuterVolumeSpecName: "scripts") pod "b47a4cd1-17e5-45eb-b964-9503d50f7089" (UID: "b47a4cd1-17e5-45eb-b964-9503d50f7089"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:05:57 crc kubenswrapper[4865]: I0216 23:05:57.943808 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b47a4cd1-17e5-45eb-b964-9503d50f7089-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b47a4cd1-17e5-45eb-b964-9503d50f7089" (UID: "b47a4cd1-17e5-45eb-b964-9503d50f7089"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:05:57 crc kubenswrapper[4865]: I0216 23:05:57.954474 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa352076-d419-40e7-9307-8e317a75e879","Type":"ContainerDied","Data":"25c1164d7153313055c570436d1f5c8829f25800db6b19a46b0b7e3faa324878"} Feb 16 23:05:57 crc kubenswrapper[4865]: I0216 23:05:57.954505 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 23:05:57 crc kubenswrapper[4865]: I0216 23:05:57.954543 4865 scope.go:117] "RemoveContainer" containerID="4a6cea5bdae209ec96a59c43718f6e0b94fb2c32a65de7b7ad111e75d1910101" Feb 16 23:05:57 crc kubenswrapper[4865]: I0216 23:05:57.956075 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b47a4cd1-17e5-45eb-b964-9503d50f7089-config-data" (OuterVolumeSpecName: "config-data") pod "b47a4cd1-17e5-45eb-b964-9503d50f7089" (UID: "b47a4cd1-17e5-45eb-b964-9503d50f7089"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:05:57 crc kubenswrapper[4865]: I0216 23:05:57.959719 4865 generic.go:334] "Generic (PLEG): container finished" podID="b47a4cd1-17e5-45eb-b964-9503d50f7089" containerID="905f21f21331466d23c4ae34ea7635326aedf41d22911a5fae838bb014c1097e" exitCode=0 Feb 16 23:05:57 crc kubenswrapper[4865]: I0216 23:05:57.959756 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b47a4cd1-17e5-45eb-b964-9503d50f7089","Type":"ContainerDied","Data":"905f21f21331466d23c4ae34ea7635326aedf41d22911a5fae838bb014c1097e"} Feb 16 23:05:57 crc kubenswrapper[4865]: I0216 23:05:57.959824 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b47a4cd1-17e5-45eb-b964-9503d50f7089","Type":"ContainerDied","Data":"cf65017e9bfc48772da0e56ae1e2b15f6cbb1e7fa330ac12be3e7dd7acde772e"} Feb 16 23:05:57 crc kubenswrapper[4865]: I0216 23:05:57.959866 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 23:05:57 crc kubenswrapper[4865]: I0216 23:05:57.988377 4865 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b47a4cd1-17e5-45eb-b964-9503d50f7089-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:57 crc kubenswrapper[4865]: I0216 23:05:57.988401 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b47a4cd1-17e5-45eb-b964-9503d50f7089-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:57 crc kubenswrapper[4865]: I0216 23:05:57.988411 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pbs5z\" (UniqueName: \"kubernetes.io/projected/b47a4cd1-17e5-45eb-b964-9503d50f7089-kube-api-access-pbs5z\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:57 crc kubenswrapper[4865]: I0216 23:05:57.988422 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b47a4cd1-17e5-45eb-b964-9503d50f7089-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:57 crc kubenswrapper[4865]: I0216 23:05:57.988433 4865 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b47a4cd1-17e5-45eb-b964-9503d50f7089-logs\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:57 crc kubenswrapper[4865]: I0216 23:05:57.988460 4865 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Feb 16 23:05:57 crc kubenswrapper[4865]: I0216 23:05:57.988469 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b47a4cd1-17e5-45eb-b964-9503d50f7089-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.012625 4865 operation_generator.go:917] 
UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.015638 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b47a4cd1-17e5-45eb-b964-9503d50f7089-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "b47a4cd1-17e5-45eb-b964-9503d50f7089" (UID: "b47a4cd1-17e5-45eb-b964-9503d50f7089"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.015938 4865 scope.go:117] "RemoveContainer" containerID="95032640ce2c0dbf1f0edfc4ceed5d3cf5b08295789534869529119a0caf6a4c" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.030394 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.043728 4865 scope.go:117] "RemoveContainer" containerID="1318fb1f9634dd95b46b163edbd29cb21139c49760b272f89c7351a7a266c5a4" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.044505 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.080847 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 23:05:58 crc kubenswrapper[4865]: E0216 23:05:58.081390 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa352076-d419-40e7-9307-8e317a75e879" containerName="ceilometer-central-agent" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.081413 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa352076-d419-40e7-9307-8e317a75e879" containerName="ceilometer-central-agent" Feb 16 23:05:58 crc kubenswrapper[4865]: E0216 23:05:58.081427 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b47a4cd1-17e5-45eb-b964-9503d50f7089" containerName="glance-httpd" Feb 16 
23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.081437 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="b47a4cd1-17e5-45eb-b964-9503d50f7089" containerName="glance-httpd" Feb 16 23:05:58 crc kubenswrapper[4865]: E0216 23:05:58.081461 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa352076-d419-40e7-9307-8e317a75e879" containerName="sg-core" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.081469 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa352076-d419-40e7-9307-8e317a75e879" containerName="sg-core" Feb 16 23:05:58 crc kubenswrapper[4865]: E0216 23:05:58.081486 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3" containerName="horizon-log" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.081495 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3" containerName="horizon-log" Feb 16 23:05:58 crc kubenswrapper[4865]: E0216 23:05:58.081523 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa352076-d419-40e7-9307-8e317a75e879" containerName="proxy-httpd" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.081532 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa352076-d419-40e7-9307-8e317a75e879" containerName="proxy-httpd" Feb 16 23:05:58 crc kubenswrapper[4865]: E0216 23:05:58.081546 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b47a4cd1-17e5-45eb-b964-9503d50f7089" containerName="glance-log" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.081555 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="b47a4cd1-17e5-45eb-b964-9503d50f7089" containerName="glance-log" Feb 16 23:05:58 crc kubenswrapper[4865]: E0216 23:05:58.081565 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3" containerName="horizon" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.081573 4865 
state_mem.go:107] "Deleted CPUSet assignment" podUID="cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3" containerName="horizon" Feb 16 23:05:58 crc kubenswrapper[4865]: E0216 23:05:58.081585 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa352076-d419-40e7-9307-8e317a75e879" containerName="ceilometer-notification-agent" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.081595 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa352076-d419-40e7-9307-8e317a75e879" containerName="ceilometer-notification-agent" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.082040 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa352076-d419-40e7-9307-8e317a75e879" containerName="ceilometer-notification-agent" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.082062 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa352076-d419-40e7-9307-8e317a75e879" containerName="sg-core" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.082073 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3" containerName="horizon" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.082086 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="b47a4cd1-17e5-45eb-b964-9503d50f7089" containerName="glance-log" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.082094 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdeef9cc-7c39-4f7c-a1f4-ab9187697fa3" containerName="horizon-log" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.082111 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa352076-d419-40e7-9307-8e317a75e879" containerName="proxy-httpd" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.082133 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="b47a4cd1-17e5-45eb-b964-9503d50f7089" containerName="glance-httpd" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 
23:05:58.082148 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa352076-d419-40e7-9307-8e317a75e879" containerName="ceilometer-central-agent" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.084909 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.085418 4865 scope.go:117] "RemoveContainer" containerID="2f056974bab45a7c87c2eb85f8e6af8a4f35d7df670b1b9a39c9c84b40b3ecac" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.087805 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.087994 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.090017 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99f27655-74d7-4d94-98d3-c9cceb3bdea0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"99f27655-74d7-4d94-98d3-c9cceb3bdea0\") " pod="openstack/ceilometer-0" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.090096 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lftcr\" (UniqueName: \"kubernetes.io/projected/99f27655-74d7-4d94-98d3-c9cceb3bdea0-kube-api-access-lftcr\") pod \"ceilometer-0\" (UID: \"99f27655-74d7-4d94-98d3-c9cceb3bdea0\") " pod="openstack/ceilometer-0" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.090128 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/99f27655-74d7-4d94-98d3-c9cceb3bdea0-run-httpd\") pod \"ceilometer-0\" (UID: \"99f27655-74d7-4d94-98d3-c9cceb3bdea0\") " pod="openstack/ceilometer-0" Feb 16 23:05:58 crc 
kubenswrapper[4865]: I0216 23:05:58.090149 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/99f27655-74d7-4d94-98d3-c9cceb3bdea0-log-httpd\") pod \"ceilometer-0\" (UID: \"99f27655-74d7-4d94-98d3-c9cceb3bdea0\") " pod="openstack/ceilometer-0" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.090195 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/99f27655-74d7-4d94-98d3-c9cceb3bdea0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"99f27655-74d7-4d94-98d3-c9cceb3bdea0\") " pod="openstack/ceilometer-0" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.090229 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99f27655-74d7-4d94-98d3-c9cceb3bdea0-config-data\") pod \"ceilometer-0\" (UID: \"99f27655-74d7-4d94-98d3-c9cceb3bdea0\") " pod="openstack/ceilometer-0" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.090450 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/99f27655-74d7-4d94-98d3-c9cceb3bdea0-scripts\") pod \"ceilometer-0\" (UID: \"99f27655-74d7-4d94-98d3-c9cceb3bdea0\") " pod="openstack/ceilometer-0" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.090646 4865 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.090666 4865 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b47a4cd1-17e5-45eb-b964-9503d50f7089-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:58 crc 
kubenswrapper[4865]: I0216 23:05:58.095621 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.115118 4865 scope.go:117] "RemoveContainer" containerID="905f21f21331466d23c4ae34ea7635326aedf41d22911a5fae838bb014c1097e" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.141201 4865 scope.go:117] "RemoveContainer" containerID="86b8821ea672c3a2e52e20af623d56b5f97f0fb05784431ffec0cf8d144dab7f" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.160364 4865 scope.go:117] "RemoveContainer" containerID="905f21f21331466d23c4ae34ea7635326aedf41d22911a5fae838bb014c1097e" Feb 16 23:05:58 crc kubenswrapper[4865]: E0216 23:05:58.160887 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"905f21f21331466d23c4ae34ea7635326aedf41d22911a5fae838bb014c1097e\": container with ID starting with 905f21f21331466d23c4ae34ea7635326aedf41d22911a5fae838bb014c1097e not found: ID does not exist" containerID="905f21f21331466d23c4ae34ea7635326aedf41d22911a5fae838bb014c1097e" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.160937 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"905f21f21331466d23c4ae34ea7635326aedf41d22911a5fae838bb014c1097e"} err="failed to get container status \"905f21f21331466d23c4ae34ea7635326aedf41d22911a5fae838bb014c1097e\": rpc error: code = NotFound desc = could not find container \"905f21f21331466d23c4ae34ea7635326aedf41d22911a5fae838bb014c1097e\": container with ID starting with 905f21f21331466d23c4ae34ea7635326aedf41d22911a5fae838bb014c1097e not found: ID does not exist" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.160975 4865 scope.go:117] "RemoveContainer" containerID="86b8821ea672c3a2e52e20af623d56b5f97f0fb05784431ffec0cf8d144dab7f" Feb 16 23:05:58 crc kubenswrapper[4865]: E0216 23:05:58.161595 4865 log.go:32] "ContainerStatus 
from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86b8821ea672c3a2e52e20af623d56b5f97f0fb05784431ffec0cf8d144dab7f\": container with ID starting with 86b8821ea672c3a2e52e20af623d56b5f97f0fb05784431ffec0cf8d144dab7f not found: ID does not exist" containerID="86b8821ea672c3a2e52e20af623d56b5f97f0fb05784431ffec0cf8d144dab7f" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.161619 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86b8821ea672c3a2e52e20af623d56b5f97f0fb05784431ffec0cf8d144dab7f"} err="failed to get container status \"86b8821ea672c3a2e52e20af623d56b5f97f0fb05784431ffec0cf8d144dab7f\": rpc error: code = NotFound desc = could not find container \"86b8821ea672c3a2e52e20af623d56b5f97f0fb05784431ffec0cf8d144dab7f\": container with ID starting with 86b8821ea672c3a2e52e20af623d56b5f97f0fb05784431ffec0cf8d144dab7f not found: ID does not exist" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.191953 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/99f27655-74d7-4d94-98d3-c9cceb3bdea0-run-httpd\") pod \"ceilometer-0\" (UID: \"99f27655-74d7-4d94-98d3-c9cceb3bdea0\") " pod="openstack/ceilometer-0" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.192004 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/99f27655-74d7-4d94-98d3-c9cceb3bdea0-log-httpd\") pod \"ceilometer-0\" (UID: \"99f27655-74d7-4d94-98d3-c9cceb3bdea0\") " pod="openstack/ceilometer-0" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.192063 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/99f27655-74d7-4d94-98d3-c9cceb3bdea0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"99f27655-74d7-4d94-98d3-c9cceb3bdea0\") " 
pod="openstack/ceilometer-0" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.192098 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99f27655-74d7-4d94-98d3-c9cceb3bdea0-config-data\") pod \"ceilometer-0\" (UID: \"99f27655-74d7-4d94-98d3-c9cceb3bdea0\") " pod="openstack/ceilometer-0" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.192161 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/99f27655-74d7-4d94-98d3-c9cceb3bdea0-scripts\") pod \"ceilometer-0\" (UID: \"99f27655-74d7-4d94-98d3-c9cceb3bdea0\") " pod="openstack/ceilometer-0" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.192193 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99f27655-74d7-4d94-98d3-c9cceb3bdea0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"99f27655-74d7-4d94-98d3-c9cceb3bdea0\") " pod="openstack/ceilometer-0" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.192224 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lftcr\" (UniqueName: \"kubernetes.io/projected/99f27655-74d7-4d94-98d3-c9cceb3bdea0-kube-api-access-lftcr\") pod \"ceilometer-0\" (UID: \"99f27655-74d7-4d94-98d3-c9cceb3bdea0\") " pod="openstack/ceilometer-0" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.192683 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/99f27655-74d7-4d94-98d3-c9cceb3bdea0-run-httpd\") pod \"ceilometer-0\" (UID: \"99f27655-74d7-4d94-98d3-c9cceb3bdea0\") " pod="openstack/ceilometer-0" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.193112 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/99f27655-74d7-4d94-98d3-c9cceb3bdea0-log-httpd\") pod \"ceilometer-0\" (UID: \"99f27655-74d7-4d94-98d3-c9cceb3bdea0\") " pod="openstack/ceilometer-0" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.195812 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/99f27655-74d7-4d94-98d3-c9cceb3bdea0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"99f27655-74d7-4d94-98d3-c9cceb3bdea0\") " pod="openstack/ceilometer-0" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.196632 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/99f27655-74d7-4d94-98d3-c9cceb3bdea0-scripts\") pod \"ceilometer-0\" (UID: \"99f27655-74d7-4d94-98d3-c9cceb3bdea0\") " pod="openstack/ceilometer-0" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.196654 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99f27655-74d7-4d94-98d3-c9cceb3bdea0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"99f27655-74d7-4d94-98d3-c9cceb3bdea0\") " pod="openstack/ceilometer-0" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.200365 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99f27655-74d7-4d94-98d3-c9cceb3bdea0-config-data\") pod \"ceilometer-0\" (UID: \"99f27655-74d7-4d94-98d3-c9cceb3bdea0\") " pod="openstack/ceilometer-0" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.209598 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lftcr\" (UniqueName: \"kubernetes.io/projected/99f27655-74d7-4d94-98d3-c9cceb3bdea0-kube-api-access-lftcr\") pod \"ceilometer-0\" (UID: \"99f27655-74d7-4d94-98d3-c9cceb3bdea0\") " pod="openstack/ceilometer-0" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.302903 4865 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.326871 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.340998 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.342618 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.347114 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.347495 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.349373 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.406772 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.432992 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa352076-d419-40e7-9307-8e317a75e879" path="/var/lib/kubelet/pods/aa352076-d419-40e7-9307-8e317a75e879/volumes" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.434459 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b47a4cd1-17e5-45eb-b964-9503d50f7089" path="/var/lib/kubelet/pods/b47a4cd1-17e5-45eb-b964-9503d50f7089/volumes" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.497299 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/443145e3-8ed2-4863-bde1-9b932b22ef00-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"443145e3-8ed2-4863-bde1-9b932b22ef00\") " pod="openstack/glance-default-external-api-0" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.497366 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/443145e3-8ed2-4863-bde1-9b932b22ef00-scripts\") pod \"glance-default-external-api-0\" (UID: \"443145e3-8ed2-4863-bde1-9b932b22ef00\") " pod="openstack/glance-default-external-api-0" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.497426 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"443145e3-8ed2-4863-bde1-9b932b22ef00\") " pod="openstack/glance-default-external-api-0" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.497454 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/443145e3-8ed2-4863-bde1-9b932b22ef00-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"443145e3-8ed2-4863-bde1-9b932b22ef00\") " pod="openstack/glance-default-external-api-0" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.497487 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbvqr\" (UniqueName: \"kubernetes.io/projected/443145e3-8ed2-4863-bde1-9b932b22ef00-kube-api-access-sbvqr\") pod \"glance-default-external-api-0\" (UID: \"443145e3-8ed2-4863-bde1-9b932b22ef00\") " pod="openstack/glance-default-external-api-0" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.497539 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/443145e3-8ed2-4863-bde1-9b932b22ef00-config-data\") pod \"glance-default-external-api-0\" (UID: \"443145e3-8ed2-4863-bde1-9b932b22ef00\") " pod="openstack/glance-default-external-api-0" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.497596 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/443145e3-8ed2-4863-bde1-9b932b22ef00-logs\") pod \"glance-default-external-api-0\" (UID: \"443145e3-8ed2-4863-bde1-9b932b22ef00\") " pod="openstack/glance-default-external-api-0" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.497631 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/443145e3-8ed2-4863-bde1-9b932b22ef00-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"443145e3-8ed2-4863-bde1-9b932b22ef00\") " pod="openstack/glance-default-external-api-0" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.598923 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/443145e3-8ed2-4863-bde1-9b932b22ef00-logs\") pod \"glance-default-external-api-0\" (UID: \"443145e3-8ed2-4863-bde1-9b932b22ef00\") " pod="openstack/glance-default-external-api-0" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.598982 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/443145e3-8ed2-4863-bde1-9b932b22ef00-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"443145e3-8ed2-4863-bde1-9b932b22ef00\") " pod="openstack/glance-default-external-api-0" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.599048 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/443145e3-8ed2-4863-bde1-9b932b22ef00-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"443145e3-8ed2-4863-bde1-9b932b22ef00\") " pod="openstack/glance-default-external-api-0" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.599077 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/443145e3-8ed2-4863-bde1-9b932b22ef00-scripts\") pod \"glance-default-external-api-0\" (UID: \"443145e3-8ed2-4863-bde1-9b932b22ef00\") " pod="openstack/glance-default-external-api-0" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.599113 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"443145e3-8ed2-4863-bde1-9b932b22ef00\") " pod="openstack/glance-default-external-api-0" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.599136 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/443145e3-8ed2-4863-bde1-9b932b22ef00-httpd-run\") pod 
\"glance-default-external-api-0\" (UID: \"443145e3-8ed2-4863-bde1-9b932b22ef00\") " pod="openstack/glance-default-external-api-0" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.599168 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbvqr\" (UniqueName: \"kubernetes.io/projected/443145e3-8ed2-4863-bde1-9b932b22ef00-kube-api-access-sbvqr\") pod \"glance-default-external-api-0\" (UID: \"443145e3-8ed2-4863-bde1-9b932b22ef00\") " pod="openstack/glance-default-external-api-0" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.599206 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/443145e3-8ed2-4863-bde1-9b932b22ef00-config-data\") pod \"glance-default-external-api-0\" (UID: \"443145e3-8ed2-4863-bde1-9b932b22ef00\") " pod="openstack/glance-default-external-api-0" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.600535 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/443145e3-8ed2-4863-bde1-9b932b22ef00-logs\") pod \"glance-default-external-api-0\" (UID: \"443145e3-8ed2-4863-bde1-9b932b22ef00\") " pod="openstack/glance-default-external-api-0" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.603923 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/443145e3-8ed2-4863-bde1-9b932b22ef00-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"443145e3-8ed2-4863-bde1-9b932b22ef00\") " pod="openstack/glance-default-external-api-0" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.604405 4865 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"443145e3-8ed2-4863-bde1-9b932b22ef00\") device mount path 
\"/mnt/openstack/pv03\"" pod="openstack/glance-default-external-api-0" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.610313 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/443145e3-8ed2-4863-bde1-9b932b22ef00-scripts\") pod \"glance-default-external-api-0\" (UID: \"443145e3-8ed2-4863-bde1-9b932b22ef00\") " pod="openstack/glance-default-external-api-0" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.612580 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/443145e3-8ed2-4863-bde1-9b932b22ef00-config-data\") pod \"glance-default-external-api-0\" (UID: \"443145e3-8ed2-4863-bde1-9b932b22ef00\") " pod="openstack/glance-default-external-api-0" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.615783 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/443145e3-8ed2-4863-bde1-9b932b22ef00-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"443145e3-8ed2-4863-bde1-9b932b22ef00\") " pod="openstack/glance-default-external-api-0" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.629834 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbvqr\" (UniqueName: \"kubernetes.io/projected/443145e3-8ed2-4863-bde1-9b932b22ef00-kube-api-access-sbvqr\") pod \"glance-default-external-api-0\" (UID: \"443145e3-8ed2-4863-bde1-9b932b22ef00\") " pod="openstack/glance-default-external-api-0" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.651742 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/443145e3-8ed2-4863-bde1-9b932b22ef00-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"443145e3-8ed2-4863-bde1-9b932b22ef00\") " pod="openstack/glance-default-external-api-0" Feb 16 23:05:58 crc 
kubenswrapper[4865]: I0216 23:05:58.686290 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"443145e3-8ed2-4863-bde1-9b932b22ef00\") " pod="openstack/glance-default-external-api-0" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.722807 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.925916 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.965556 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.979811 4865 generic.go:334] "Generic (PLEG): container finished" podID="7ce244fb-655c-4941-9ed9-1b2ceddd74d9" containerID="c689d158a4efd34829e13feae133efec6eb0bf9d8d42da2271d5bd0e8b5628dd" exitCode=0 Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.980070 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7ce244fb-655c-4941-9ed9-1b2ceddd74d9","Type":"ContainerDied","Data":"c689d158a4efd34829e13feae133efec6eb0bf9d8d42da2271d5bd0e8b5628dd"} Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.980098 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7ce244fb-655c-4941-9ed9-1b2ceddd74d9","Type":"ContainerDied","Data":"e2196354b4e6e6f16642000e6d9319ce47369e316921792089075c1e3a3e9664"} Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.980115 4865 scope.go:117] "RemoveContainer" containerID="c689d158a4efd34829e13feae133efec6eb0bf9d8d42da2271d5bd0e8b5628dd" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.980221 4865 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 23:05:58 crc kubenswrapper[4865]: I0216 23:05:58.984431 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"99f27655-74d7-4d94-98d3-c9cceb3bdea0","Type":"ContainerStarted","Data":"223c216fcaea908c2c50657c6bd851d46c4f0a645e5e325fa72da2ad200ecaef"} Feb 16 23:05:59 crc kubenswrapper[4865]: I0216 23:05:59.019527 4865 scope.go:117] "RemoveContainer" containerID="f3b55ab017f208f2208e75831d1b45f5f75b0da5303275f898b1c316b523ea39" Feb 16 23:05:59 crc kubenswrapper[4865]: I0216 23:05:59.045918 4865 scope.go:117] "RemoveContainer" containerID="c689d158a4efd34829e13feae133efec6eb0bf9d8d42da2271d5bd0e8b5628dd" Feb 16 23:05:59 crc kubenswrapper[4865]: E0216 23:05:59.046576 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c689d158a4efd34829e13feae133efec6eb0bf9d8d42da2271d5bd0e8b5628dd\": container with ID starting with c689d158a4efd34829e13feae133efec6eb0bf9d8d42da2271d5bd0e8b5628dd not found: ID does not exist" containerID="c689d158a4efd34829e13feae133efec6eb0bf9d8d42da2271d5bd0e8b5628dd" Feb 16 23:05:59 crc kubenswrapper[4865]: I0216 23:05:59.046619 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c689d158a4efd34829e13feae133efec6eb0bf9d8d42da2271d5bd0e8b5628dd"} err="failed to get container status \"c689d158a4efd34829e13feae133efec6eb0bf9d8d42da2271d5bd0e8b5628dd\": rpc error: code = NotFound desc = could not find container \"c689d158a4efd34829e13feae133efec6eb0bf9d8d42da2271d5bd0e8b5628dd\": container with ID starting with c689d158a4efd34829e13feae133efec6eb0bf9d8d42da2271d5bd0e8b5628dd not found: ID does not exist" Feb 16 23:05:59 crc kubenswrapper[4865]: I0216 23:05:59.046639 4865 scope.go:117] "RemoveContainer" 
containerID="f3b55ab017f208f2208e75831d1b45f5f75b0da5303275f898b1c316b523ea39" Feb 16 23:05:59 crc kubenswrapper[4865]: E0216 23:05:59.047101 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3b55ab017f208f2208e75831d1b45f5f75b0da5303275f898b1c316b523ea39\": container with ID starting with f3b55ab017f208f2208e75831d1b45f5f75b0da5303275f898b1c316b523ea39 not found: ID does not exist" containerID="f3b55ab017f208f2208e75831d1b45f5f75b0da5303275f898b1c316b523ea39" Feb 16 23:05:59 crc kubenswrapper[4865]: I0216 23:05:59.047121 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3b55ab017f208f2208e75831d1b45f5f75b0da5303275f898b1c316b523ea39"} err="failed to get container status \"f3b55ab017f208f2208e75831d1b45f5f75b0da5303275f898b1c316b523ea39\": rpc error: code = NotFound desc = could not find container \"f3b55ab017f208f2208e75831d1b45f5f75b0da5303275f898b1c316b523ea39\": container with ID starting with f3b55ab017f208f2208e75831d1b45f5f75b0da5303275f898b1c316b523ea39 not found: ID does not exist" Feb 16 23:05:59 crc kubenswrapper[4865]: I0216 23:05:59.109635 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7ce244fb-655c-4941-9ed9-1b2ceddd74d9-httpd-run\") pod \"7ce244fb-655c-4941-9ed9-1b2ceddd74d9\" (UID: \"7ce244fb-655c-4941-9ed9-1b2ceddd74d9\") " Feb 16 23:05:59 crc kubenswrapper[4865]: I0216 23:05:59.109689 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ce244fb-655c-4941-9ed9-1b2ceddd74d9-config-data\") pod \"7ce244fb-655c-4941-9ed9-1b2ceddd74d9\" (UID: \"7ce244fb-655c-4941-9ed9-1b2ceddd74d9\") " Feb 16 23:05:59 crc kubenswrapper[4865]: I0216 23:05:59.109711 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/7ce244fb-655c-4941-9ed9-1b2ceddd74d9-internal-tls-certs\") pod \"7ce244fb-655c-4941-9ed9-1b2ceddd74d9\" (UID: \"7ce244fb-655c-4941-9ed9-1b2ceddd74d9\") " Feb 16 23:05:59 crc kubenswrapper[4865]: I0216 23:05:59.109736 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ce244fb-655c-4941-9ed9-1b2ceddd74d9-scripts\") pod \"7ce244fb-655c-4941-9ed9-1b2ceddd74d9\" (UID: \"7ce244fb-655c-4941-9ed9-1b2ceddd74d9\") " Feb 16 23:05:59 crc kubenswrapper[4865]: I0216 23:05:59.109799 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2w2f\" (UniqueName: \"kubernetes.io/projected/7ce244fb-655c-4941-9ed9-1b2ceddd74d9-kube-api-access-t2w2f\") pod \"7ce244fb-655c-4941-9ed9-1b2ceddd74d9\" (UID: \"7ce244fb-655c-4941-9ed9-1b2ceddd74d9\") " Feb 16 23:05:59 crc kubenswrapper[4865]: I0216 23:05:59.110337 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ce244fb-655c-4941-9ed9-1b2ceddd74d9-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "7ce244fb-655c-4941-9ed9-1b2ceddd74d9" (UID: "7ce244fb-655c-4941-9ed9-1b2ceddd74d9"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:05:59 crc kubenswrapper[4865]: I0216 23:05:59.110485 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ce244fb-655c-4941-9ed9-1b2ceddd74d9-logs\") pod \"7ce244fb-655c-4941-9ed9-1b2ceddd74d9\" (UID: \"7ce244fb-655c-4941-9ed9-1b2ceddd74d9\") " Feb 16 23:05:59 crc kubenswrapper[4865]: I0216 23:05:59.110518 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ce244fb-655c-4941-9ed9-1b2ceddd74d9-combined-ca-bundle\") pod \"7ce244fb-655c-4941-9ed9-1b2ceddd74d9\" (UID: \"7ce244fb-655c-4941-9ed9-1b2ceddd74d9\") " Feb 16 23:05:59 crc kubenswrapper[4865]: I0216 23:05:59.110584 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"7ce244fb-655c-4941-9ed9-1b2ceddd74d9\" (UID: \"7ce244fb-655c-4941-9ed9-1b2ceddd74d9\") " Feb 16 23:05:59 crc kubenswrapper[4865]: I0216 23:05:59.111004 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ce244fb-655c-4941-9ed9-1b2ceddd74d9-logs" (OuterVolumeSpecName: "logs") pod "7ce244fb-655c-4941-9ed9-1b2ceddd74d9" (UID: "7ce244fb-655c-4941-9ed9-1b2ceddd74d9"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:05:59 crc kubenswrapper[4865]: I0216 23:05:59.111070 4865 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7ce244fb-655c-4941-9ed9-1b2ceddd74d9-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:59 crc kubenswrapper[4865]: I0216 23:05:59.114952 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "7ce244fb-655c-4941-9ed9-1b2ceddd74d9" (UID: "7ce244fb-655c-4941-9ed9-1b2ceddd74d9"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 16 23:05:59 crc kubenswrapper[4865]: I0216 23:05:59.115473 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ce244fb-655c-4941-9ed9-1b2ceddd74d9-scripts" (OuterVolumeSpecName: "scripts") pod "7ce244fb-655c-4941-9ed9-1b2ceddd74d9" (UID: "7ce244fb-655c-4941-9ed9-1b2ceddd74d9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:05:59 crc kubenswrapper[4865]: I0216 23:05:59.115791 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ce244fb-655c-4941-9ed9-1b2ceddd74d9-kube-api-access-t2w2f" (OuterVolumeSpecName: "kube-api-access-t2w2f") pod "7ce244fb-655c-4941-9ed9-1b2ceddd74d9" (UID: "7ce244fb-655c-4941-9ed9-1b2ceddd74d9"). InnerVolumeSpecName "kube-api-access-t2w2f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:05:59 crc kubenswrapper[4865]: I0216 23:05:59.143537 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ce244fb-655c-4941-9ed9-1b2ceddd74d9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7ce244fb-655c-4941-9ed9-1b2ceddd74d9" (UID: "7ce244fb-655c-4941-9ed9-1b2ceddd74d9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:05:59 crc kubenswrapper[4865]: I0216 23:05:59.166820 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ce244fb-655c-4941-9ed9-1b2ceddd74d9-config-data" (OuterVolumeSpecName: "config-data") pod "7ce244fb-655c-4941-9ed9-1b2ceddd74d9" (UID: "7ce244fb-655c-4941-9ed9-1b2ceddd74d9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:05:59 crc kubenswrapper[4865]: I0216 23:05:59.175131 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ce244fb-655c-4941-9ed9-1b2ceddd74d9-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "7ce244fb-655c-4941-9ed9-1b2ceddd74d9" (UID: "7ce244fb-655c-4941-9ed9-1b2ceddd74d9"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:05:59 crc kubenswrapper[4865]: I0216 23:05:59.212713 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ce244fb-655c-4941-9ed9-1b2ceddd74d9-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:59 crc kubenswrapper[4865]: I0216 23:05:59.212755 4865 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ce244fb-655c-4941-9ed9-1b2ceddd74d9-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:59 crc kubenswrapper[4865]: I0216 23:05:59.212773 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ce244fb-655c-4941-9ed9-1b2ceddd74d9-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:59 crc kubenswrapper[4865]: I0216 23:05:59.212788 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2w2f\" (UniqueName: \"kubernetes.io/projected/7ce244fb-655c-4941-9ed9-1b2ceddd74d9-kube-api-access-t2w2f\") on node \"crc\" DevicePath \"\"" Feb 16 
23:05:59 crc kubenswrapper[4865]: I0216 23:05:59.212798 4865 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ce244fb-655c-4941-9ed9-1b2ceddd74d9-logs\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:59 crc kubenswrapper[4865]: I0216 23:05:59.212807 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ce244fb-655c-4941-9ed9-1b2ceddd74d9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:59 crc kubenswrapper[4865]: I0216 23:05:59.212841 4865 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Feb 16 23:05:59 crc kubenswrapper[4865]: I0216 23:05:59.235009 4865 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Feb 16 23:05:59 crc kubenswrapper[4865]: I0216 23:05:59.314221 4865 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Feb 16 23:05:59 crc kubenswrapper[4865]: W0216 23:05:59.322520 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod443145e3_8ed2_4863_bde1_9b932b22ef00.slice/crio-f901cf4e4252e20315d11b19b4bc8436803fc77abf8159b7da6383501786ed3d WatchSource:0}: Error finding container f901cf4e4252e20315d11b19b4bc8436803fc77abf8159b7da6383501786ed3d: Status 404 returned error can't find the container with id f901cf4e4252e20315d11b19b4bc8436803fc77abf8159b7da6383501786ed3d Feb 16 23:05:59 crc kubenswrapper[4865]: I0216 23:05:59.327120 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 23:05:59 crc kubenswrapper[4865]: I0216 
23:05:59.337782 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 23:05:59 crc kubenswrapper[4865]: I0216 23:05:59.347494 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 23:05:59 crc kubenswrapper[4865]: I0216 23:05:59.378606 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 23:05:59 crc kubenswrapper[4865]: E0216 23:05:59.379097 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ce244fb-655c-4941-9ed9-1b2ceddd74d9" containerName="glance-httpd" Feb 16 23:05:59 crc kubenswrapper[4865]: I0216 23:05:59.379118 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ce244fb-655c-4941-9ed9-1b2ceddd74d9" containerName="glance-httpd" Feb 16 23:05:59 crc kubenswrapper[4865]: E0216 23:05:59.379159 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ce244fb-655c-4941-9ed9-1b2ceddd74d9" containerName="glance-log" Feb 16 23:05:59 crc kubenswrapper[4865]: I0216 23:05:59.379168 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ce244fb-655c-4941-9ed9-1b2ceddd74d9" containerName="glance-log" Feb 16 23:05:59 crc kubenswrapper[4865]: I0216 23:05:59.379445 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ce244fb-655c-4941-9ed9-1b2ceddd74d9" containerName="glance-httpd" Feb 16 23:05:59 crc kubenswrapper[4865]: I0216 23:05:59.379474 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ce244fb-655c-4941-9ed9-1b2ceddd74d9" containerName="glance-log" Feb 16 23:05:59 crc kubenswrapper[4865]: I0216 23:05:59.380652 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 23:05:59 crc kubenswrapper[4865]: I0216 23:05:59.384204 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 16 23:05:59 crc kubenswrapper[4865]: I0216 23:05:59.384402 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 16 23:05:59 crc kubenswrapper[4865]: I0216 23:05:59.397353 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 23:05:59 crc kubenswrapper[4865]: I0216 23:05:59.516943 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/60070db1-5a47-4b70-b318-46f3745677c5-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"60070db1-5a47-4b70-b318-46f3745677c5\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:05:59 crc kubenswrapper[4865]: I0216 23:05:59.517292 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/60070db1-5a47-4b70-b318-46f3745677c5-logs\") pod \"glance-default-internal-api-0\" (UID: \"60070db1-5a47-4b70-b318-46f3745677c5\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:05:59 crc kubenswrapper[4865]: I0216 23:05:59.517353 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60070db1-5a47-4b70-b318-46f3745677c5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"60070db1-5a47-4b70-b318-46f3745677c5\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:05:59 crc kubenswrapper[4865]: I0216 23:05:59.517419 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/60070db1-5a47-4b70-b318-46f3745677c5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"60070db1-5a47-4b70-b318-46f3745677c5\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:05:59 crc kubenswrapper[4865]: I0216 23:05:59.517629 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60070db1-5a47-4b70-b318-46f3745677c5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"60070db1-5a47-4b70-b318-46f3745677c5\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:05:59 crc kubenswrapper[4865]: I0216 23:05:59.517761 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60070db1-5a47-4b70-b318-46f3745677c5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"60070db1-5a47-4b70-b318-46f3745677c5\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:05:59 crc kubenswrapper[4865]: I0216 23:05:59.517823 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"60070db1-5a47-4b70-b318-46f3745677c5\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:05:59 crc kubenswrapper[4865]: I0216 23:05:59.517994 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsh48\" (UniqueName: \"kubernetes.io/projected/60070db1-5a47-4b70-b318-46f3745677c5-kube-api-access-zsh48\") pod \"glance-default-internal-api-0\" (UID: \"60070db1-5a47-4b70-b318-46f3745677c5\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:05:59 crc kubenswrapper[4865]: I0216 23:05:59.619269 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/60070db1-5a47-4b70-b318-46f3745677c5-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"60070db1-5a47-4b70-b318-46f3745677c5\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:05:59 crc kubenswrapper[4865]: I0216 23:05:59.619340 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/60070db1-5a47-4b70-b318-46f3745677c5-logs\") pod \"glance-default-internal-api-0\" (UID: \"60070db1-5a47-4b70-b318-46f3745677c5\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:05:59 crc kubenswrapper[4865]: I0216 23:05:59.619379 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60070db1-5a47-4b70-b318-46f3745677c5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"60070db1-5a47-4b70-b318-46f3745677c5\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:05:59 crc kubenswrapper[4865]: I0216 23:05:59.619415 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/60070db1-5a47-4b70-b318-46f3745677c5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"60070db1-5a47-4b70-b318-46f3745677c5\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:05:59 crc kubenswrapper[4865]: I0216 23:05:59.619459 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60070db1-5a47-4b70-b318-46f3745677c5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"60070db1-5a47-4b70-b318-46f3745677c5\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:05:59 crc kubenswrapper[4865]: I0216 23:05:59.619507 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/60070db1-5a47-4b70-b318-46f3745677c5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"60070db1-5a47-4b70-b318-46f3745677c5\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:05:59 crc kubenswrapper[4865]: I0216 23:05:59.619534 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"60070db1-5a47-4b70-b318-46f3745677c5\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:05:59 crc kubenswrapper[4865]: I0216 23:05:59.619577 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsh48\" (UniqueName: \"kubernetes.io/projected/60070db1-5a47-4b70-b318-46f3745677c5-kube-api-access-zsh48\") pod \"glance-default-internal-api-0\" (UID: \"60070db1-5a47-4b70-b318-46f3745677c5\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:05:59 crc kubenswrapper[4865]: I0216 23:05:59.619985 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/60070db1-5a47-4b70-b318-46f3745677c5-logs\") pod \"glance-default-internal-api-0\" (UID: \"60070db1-5a47-4b70-b318-46f3745677c5\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:05:59 crc kubenswrapper[4865]: I0216 23:05:59.620330 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/60070db1-5a47-4b70-b318-46f3745677c5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"60070db1-5a47-4b70-b318-46f3745677c5\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:05:59 crc kubenswrapper[4865]: I0216 23:05:59.620329 4865 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: 
\"60070db1-5a47-4b70-b318-46f3745677c5\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0" Feb 16 23:05:59 crc kubenswrapper[4865]: I0216 23:05:59.624616 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60070db1-5a47-4b70-b318-46f3745677c5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"60070db1-5a47-4b70-b318-46f3745677c5\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:05:59 crc kubenswrapper[4865]: I0216 23:05:59.625604 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60070db1-5a47-4b70-b318-46f3745677c5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"60070db1-5a47-4b70-b318-46f3745677c5\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:05:59 crc kubenswrapper[4865]: I0216 23:05:59.625678 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/60070db1-5a47-4b70-b318-46f3745677c5-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"60070db1-5a47-4b70-b318-46f3745677c5\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:05:59 crc kubenswrapper[4865]: I0216 23:05:59.626815 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60070db1-5a47-4b70-b318-46f3745677c5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"60070db1-5a47-4b70-b318-46f3745677c5\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:05:59 crc kubenswrapper[4865]: I0216 23:05:59.648115 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsh48\" (UniqueName: \"kubernetes.io/projected/60070db1-5a47-4b70-b318-46f3745677c5-kube-api-access-zsh48\") pod \"glance-default-internal-api-0\" (UID: \"60070db1-5a47-4b70-b318-46f3745677c5\") " 
pod="openstack/glance-default-internal-api-0" Feb 16 23:05:59 crc kubenswrapper[4865]: I0216 23:05:59.650534 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"60070db1-5a47-4b70-b318-46f3745677c5\") " pod="openstack/glance-default-internal-api-0" Feb 16 23:05:59 crc kubenswrapper[4865]: I0216 23:05:59.714451 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 23:06:00 crc kubenswrapper[4865]: I0216 23:06:00.014679 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"443145e3-8ed2-4863-bde1-9b932b22ef00","Type":"ContainerStarted","Data":"f901cf4e4252e20315d11b19b4bc8436803fc77abf8159b7da6383501786ed3d"} Feb 16 23:06:00 crc kubenswrapper[4865]: I0216 23:06:00.263017 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 23:06:00 crc kubenswrapper[4865]: I0216 23:06:00.427734 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ce244fb-655c-4941-9ed9-1b2ceddd74d9" path="/var/lib/kubelet/pods/7ce244fb-655c-4941-9ed9-1b2ceddd74d9/volumes" Feb 16 23:06:01 crc kubenswrapper[4865]: I0216 23:06:01.038960 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"99f27655-74d7-4d94-98d3-c9cceb3bdea0","Type":"ContainerStarted","Data":"ace1c431e6301c0d1070928fe2dc00e57dcf6db5df206f3a5e53f7e24ffcf650"} Feb 16 23:06:01 crc kubenswrapper[4865]: I0216 23:06:01.039638 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"99f27655-74d7-4d94-98d3-c9cceb3bdea0","Type":"ContainerStarted","Data":"97f262c7992a498c778745f65e224a21244c5f2450082e3622c4011fe2e61870"} Feb 16 23:06:01 crc kubenswrapper[4865]: I0216 23:06:01.042899 4865 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"443145e3-8ed2-4863-bde1-9b932b22ef00","Type":"ContainerStarted","Data":"340e4e9e5e191889fbaef94dff1f075f44a1be1258e0b73280c7f631c394b66e"} Feb 16 23:06:01 crc kubenswrapper[4865]: I0216 23:06:01.042942 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"443145e3-8ed2-4863-bde1-9b932b22ef00","Type":"ContainerStarted","Data":"2adabb1d564d6c8a5d74ff94fe8ba537c7fb162b75e8493843da6810c520ee05"} Feb 16 23:06:01 crc kubenswrapper[4865]: I0216 23:06:01.046208 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"60070db1-5a47-4b70-b318-46f3745677c5","Type":"ContainerStarted","Data":"592b3c243392a50ef52fb8f0dd68c491ffd4f365aac7c03f6511e11537851e61"} Feb 16 23:06:01 crc kubenswrapper[4865]: I0216 23:06:01.046268 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"60070db1-5a47-4b70-b318-46f3745677c5","Type":"ContainerStarted","Data":"3fae41541da6ea2dcbac078d16a08d74589c75c1282ff02bed9ab35436620d94"} Feb 16 23:06:01 crc kubenswrapper[4865]: I0216 23:06:01.076176 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.076159514 podStartE2EDuration="3.076159514s" podCreationTimestamp="2026-02-16 23:05:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 23:06:01.059924973 +0000 UTC m=+1201.383631934" watchObservedRunningTime="2026-02-16 23:06:01.076159514 +0000 UTC m=+1201.399866475" Feb 16 23:06:02 crc kubenswrapper[4865]: I0216 23:06:02.089049 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"60070db1-5a47-4b70-b318-46f3745677c5","Type":"ContainerStarted","Data":"a774657f518a16b2cded6798538e7925ca1f43820d165be1b2f52bcdef46cdb1"} Feb 16 23:06:02 crc kubenswrapper[4865]: I0216 23:06:02.122144 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"99f27655-74d7-4d94-98d3-c9cceb3bdea0","Type":"ContainerStarted","Data":"03fc8fa199c2e9f8e5c8d57e0a3c698580a0e81a044174b479e9efac2ae7315e"} Feb 16 23:06:02 crc kubenswrapper[4865]: I0216 23:06:02.130732 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.130711017 podStartE2EDuration="3.130711017s" podCreationTimestamp="2026-02-16 23:05:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 23:06:02.120881448 +0000 UTC m=+1202.444588409" watchObservedRunningTime="2026-02-16 23:06:02.130711017 +0000 UTC m=+1202.454417978" Feb 16 23:06:03 crc kubenswrapper[4865]: I0216 23:06:03.134323 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"99f27655-74d7-4d94-98d3-c9cceb3bdea0","Type":"ContainerStarted","Data":"09d268779daead4c1ec4486b7cd6a97e34a49eaed7afac86a6e0ae7d0edc5658"} Feb 16 23:06:03 crc kubenswrapper[4865]: I0216 23:06:03.177429 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.759522638 podStartE2EDuration="5.177399586s" podCreationTimestamp="2026-02-16 23:05:58 +0000 UTC" firstStartedPulling="2026-02-16 23:05:58.953757292 +0000 UTC m=+1199.277464253" lastFinishedPulling="2026-02-16 23:06:02.37163424 +0000 UTC m=+1202.695341201" observedRunningTime="2026-02-16 23:06:03.167436573 +0000 UTC m=+1203.491143534" watchObservedRunningTime="2026-02-16 23:06:03.177399586 +0000 UTC m=+1203.501106567" Feb 16 23:06:04 crc kubenswrapper[4865]: I0216 23:06:04.150201 4865 
generic.go:334] "Generic (PLEG): container finished" podID="79d1006d-8c29-497a-8957-91fc74d71fe8" containerID="dc5bb7e8a6208a9d466a5c01af7b26986329345e244aaac034641eef183e7a69" exitCode=0 Feb 16 23:06:04 crc kubenswrapper[4865]: I0216 23:06:04.150340 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-rm49l" event={"ID":"79d1006d-8c29-497a-8957-91fc74d71fe8","Type":"ContainerDied","Data":"dc5bb7e8a6208a9d466a5c01af7b26986329345e244aaac034641eef183e7a69"} Feb 16 23:06:04 crc kubenswrapper[4865]: I0216 23:06:04.150748 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 23:06:05 crc kubenswrapper[4865]: I0216 23:06:05.621856 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-rm49l" Feb 16 23:06:05 crc kubenswrapper[4865]: I0216 23:06:05.643604 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79d1006d-8c29-497a-8957-91fc74d71fe8-config-data\") pod \"79d1006d-8c29-497a-8957-91fc74d71fe8\" (UID: \"79d1006d-8c29-497a-8957-91fc74d71fe8\") " Feb 16 23:06:05 crc kubenswrapper[4865]: I0216 23:06:05.643693 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79d1006d-8c29-497a-8957-91fc74d71fe8-scripts\") pod \"79d1006d-8c29-497a-8957-91fc74d71fe8\" (UID: \"79d1006d-8c29-497a-8957-91fc74d71fe8\") " Feb 16 23:06:05 crc kubenswrapper[4865]: I0216 23:06:05.643825 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79d1006d-8c29-497a-8957-91fc74d71fe8-combined-ca-bundle\") pod \"79d1006d-8c29-497a-8957-91fc74d71fe8\" (UID: \"79d1006d-8c29-497a-8957-91fc74d71fe8\") " Feb 16 23:06:05 crc kubenswrapper[4865]: I0216 23:06:05.643881 4865 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-4q6j6\" (UniqueName: \"kubernetes.io/projected/79d1006d-8c29-497a-8957-91fc74d71fe8-kube-api-access-4q6j6\") pod \"79d1006d-8c29-497a-8957-91fc74d71fe8\" (UID: \"79d1006d-8c29-497a-8957-91fc74d71fe8\") " Feb 16 23:06:05 crc kubenswrapper[4865]: I0216 23:06:05.691737 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79d1006d-8c29-497a-8957-91fc74d71fe8-scripts" (OuterVolumeSpecName: "scripts") pod "79d1006d-8c29-497a-8957-91fc74d71fe8" (UID: "79d1006d-8c29-497a-8957-91fc74d71fe8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:06:05 crc kubenswrapper[4865]: I0216 23:06:05.691789 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79d1006d-8c29-497a-8957-91fc74d71fe8-kube-api-access-4q6j6" (OuterVolumeSpecName: "kube-api-access-4q6j6") pod "79d1006d-8c29-497a-8957-91fc74d71fe8" (UID: "79d1006d-8c29-497a-8957-91fc74d71fe8"). InnerVolumeSpecName "kube-api-access-4q6j6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:06:05 crc kubenswrapper[4865]: I0216 23:06:05.697546 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79d1006d-8c29-497a-8957-91fc74d71fe8-config-data" (OuterVolumeSpecName: "config-data") pod "79d1006d-8c29-497a-8957-91fc74d71fe8" (UID: "79d1006d-8c29-497a-8957-91fc74d71fe8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:06:05 crc kubenswrapper[4865]: I0216 23:06:05.699883 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79d1006d-8c29-497a-8957-91fc74d71fe8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "79d1006d-8c29-497a-8957-91fc74d71fe8" (UID: "79d1006d-8c29-497a-8957-91fc74d71fe8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:06:05 crc kubenswrapper[4865]: I0216 23:06:05.746491 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79d1006d-8c29-497a-8957-91fc74d71fe8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 23:06:05 crc kubenswrapper[4865]: I0216 23:06:05.746758 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4q6j6\" (UniqueName: \"kubernetes.io/projected/79d1006d-8c29-497a-8957-91fc74d71fe8-kube-api-access-4q6j6\") on node \"crc\" DevicePath \"\"" Feb 16 23:06:05 crc kubenswrapper[4865]: I0216 23:06:05.746837 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79d1006d-8c29-497a-8957-91fc74d71fe8-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 23:06:05 crc kubenswrapper[4865]: I0216 23:06:05.746909 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79d1006d-8c29-497a-8957-91fc74d71fe8-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 23:06:06 crc kubenswrapper[4865]: I0216 23:06:06.172852 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-rm49l" event={"ID":"79d1006d-8c29-497a-8957-91fc74d71fe8","Type":"ContainerDied","Data":"ef519baa6ad5207be3a839024a49c74e023a648e783287f15f143c73af5a53f2"} Feb 16 23:06:06 crc kubenswrapper[4865]: I0216 23:06:06.172908 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef519baa6ad5207be3a839024a49c74e023a648e783287f15f143c73af5a53f2" Feb 16 23:06:06 crc kubenswrapper[4865]: I0216 23:06:06.172932 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-rm49l" Feb 16 23:06:06 crc kubenswrapper[4865]: I0216 23:06:06.306311 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 16 23:06:06 crc kubenswrapper[4865]: E0216 23:06:06.307110 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79d1006d-8c29-497a-8957-91fc74d71fe8" containerName="nova-cell0-conductor-db-sync" Feb 16 23:06:06 crc kubenswrapper[4865]: I0216 23:06:06.307133 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="79d1006d-8c29-497a-8957-91fc74d71fe8" containerName="nova-cell0-conductor-db-sync" Feb 16 23:06:06 crc kubenswrapper[4865]: I0216 23:06:06.307352 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="79d1006d-8c29-497a-8957-91fc74d71fe8" containerName="nova-cell0-conductor-db-sync" Feb 16 23:06:06 crc kubenswrapper[4865]: I0216 23:06:06.308208 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 16 23:06:06 crc kubenswrapper[4865]: I0216 23:06:06.310762 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-9z56r" Feb 16 23:06:06 crc kubenswrapper[4865]: I0216 23:06:06.310917 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 16 23:06:06 crc kubenswrapper[4865]: I0216 23:06:06.360332 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 16 23:06:06 crc kubenswrapper[4865]: I0216 23:06:06.361530 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wwt5\" (UniqueName: \"kubernetes.io/projected/cf4d46ef-86f3-450e-9b46-a0ee9085e51d-kube-api-access-4wwt5\") pod \"nova-cell0-conductor-0\" (UID: \"cf4d46ef-86f3-450e-9b46-a0ee9085e51d\") " pod="openstack/nova-cell0-conductor-0" Feb 16 23:06:06 crc 
kubenswrapper[4865]: I0216 23:06:06.361607 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf4d46ef-86f3-450e-9b46-a0ee9085e51d-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"cf4d46ef-86f3-450e-9b46-a0ee9085e51d\") " pod="openstack/nova-cell0-conductor-0" Feb 16 23:06:06 crc kubenswrapper[4865]: I0216 23:06:06.362001 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf4d46ef-86f3-450e-9b46-a0ee9085e51d-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"cf4d46ef-86f3-450e-9b46-a0ee9085e51d\") " pod="openstack/nova-cell0-conductor-0" Feb 16 23:06:06 crc kubenswrapper[4865]: I0216 23:06:06.464356 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf4d46ef-86f3-450e-9b46-a0ee9085e51d-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"cf4d46ef-86f3-450e-9b46-a0ee9085e51d\") " pod="openstack/nova-cell0-conductor-0" Feb 16 23:06:06 crc kubenswrapper[4865]: I0216 23:06:06.464460 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wwt5\" (UniqueName: \"kubernetes.io/projected/cf4d46ef-86f3-450e-9b46-a0ee9085e51d-kube-api-access-4wwt5\") pod \"nova-cell0-conductor-0\" (UID: \"cf4d46ef-86f3-450e-9b46-a0ee9085e51d\") " pod="openstack/nova-cell0-conductor-0" Feb 16 23:06:06 crc kubenswrapper[4865]: I0216 23:06:06.464485 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf4d46ef-86f3-450e-9b46-a0ee9085e51d-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"cf4d46ef-86f3-450e-9b46-a0ee9085e51d\") " pod="openstack/nova-cell0-conductor-0" Feb 16 23:06:06 crc kubenswrapper[4865]: I0216 23:06:06.472301 4865 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf4d46ef-86f3-450e-9b46-a0ee9085e51d-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"cf4d46ef-86f3-450e-9b46-a0ee9085e51d\") " pod="openstack/nova-cell0-conductor-0" Feb 16 23:06:06 crc kubenswrapper[4865]: I0216 23:06:06.473243 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf4d46ef-86f3-450e-9b46-a0ee9085e51d-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"cf4d46ef-86f3-450e-9b46-a0ee9085e51d\") " pod="openstack/nova-cell0-conductor-0" Feb 16 23:06:06 crc kubenswrapper[4865]: I0216 23:06:06.482192 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wwt5\" (UniqueName: \"kubernetes.io/projected/cf4d46ef-86f3-450e-9b46-a0ee9085e51d-kube-api-access-4wwt5\") pod \"nova-cell0-conductor-0\" (UID: \"cf4d46ef-86f3-450e-9b46-a0ee9085e51d\") " pod="openstack/nova-cell0-conductor-0" Feb 16 23:06:06 crc kubenswrapper[4865]: I0216 23:06:06.627258 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 16 23:06:07 crc kubenswrapper[4865]: I0216 23:06:07.137402 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 16 23:06:07 crc kubenswrapper[4865]: W0216 23:06:07.143571 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcf4d46ef_86f3_450e_9b46_a0ee9085e51d.slice/crio-aae43bb195742d1f64d2b5579f494a41f2fde47459c03643787c37d2d50c90bf WatchSource:0}: Error finding container aae43bb195742d1f64d2b5579f494a41f2fde47459c03643787c37d2d50c90bf: Status 404 returned error can't find the container with id aae43bb195742d1f64d2b5579f494a41f2fde47459c03643787c37d2d50c90bf Feb 16 23:06:07 crc kubenswrapper[4865]: I0216 23:06:07.196166 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"cf4d46ef-86f3-450e-9b46-a0ee9085e51d","Type":"ContainerStarted","Data":"aae43bb195742d1f64d2b5579f494a41f2fde47459c03643787c37d2d50c90bf"} Feb 16 23:06:08 crc kubenswrapper[4865]: I0216 23:06:08.212125 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"cf4d46ef-86f3-450e-9b46-a0ee9085e51d","Type":"ContainerStarted","Data":"5aa242c1049060dc3f2bfdccfd6eb021522b13ec0a89182a1dbc9fdcc4800b57"} Feb 16 23:06:08 crc kubenswrapper[4865]: I0216 23:06:08.214509 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 16 23:06:08 crc kubenswrapper[4865]: I0216 23:06:08.242868 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.242842198 podStartE2EDuration="2.242842198s" podCreationTimestamp="2026-02-16 23:06:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 
23:06:08.236310382 +0000 UTC m=+1208.560017383" watchObservedRunningTime="2026-02-16 23:06:08.242842198 +0000 UTC m=+1208.566549189" Feb 16 23:06:08 crc kubenswrapper[4865]: I0216 23:06:08.723643 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 16 23:06:08 crc kubenswrapper[4865]: I0216 23:06:08.724078 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 16 23:06:08 crc kubenswrapper[4865]: I0216 23:06:08.772737 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 16 23:06:08 crc kubenswrapper[4865]: I0216 23:06:08.779447 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 16 23:06:09 crc kubenswrapper[4865]: I0216 23:06:09.221868 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 16 23:06:09 crc kubenswrapper[4865]: I0216 23:06:09.222176 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 16 23:06:09 crc kubenswrapper[4865]: I0216 23:06:09.715122 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 16 23:06:09 crc kubenswrapper[4865]: I0216 23:06:09.716556 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 16 23:06:09 crc kubenswrapper[4865]: I0216 23:06:09.771823 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 16 23:06:09 crc kubenswrapper[4865]: I0216 23:06:09.775998 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 16 23:06:10 crc 
kubenswrapper[4865]: I0216 23:06:10.242109 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 16 23:06:10 crc kubenswrapper[4865]: I0216 23:06:10.242455 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 16 23:06:10 crc kubenswrapper[4865]: I0216 23:06:10.968420 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 16 23:06:11 crc kubenswrapper[4865]: I0216 23:06:11.202118 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 16 23:06:12 crc kubenswrapper[4865]: I0216 23:06:12.184192 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 16 23:06:12 crc kubenswrapper[4865]: I0216 23:06:12.261644 4865 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 23:06:12 crc kubenswrapper[4865]: I0216 23:06:12.271064 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 16 23:06:16 crc kubenswrapper[4865]: I0216 23:06:16.678115 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 23:06:17.251115 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-jzjtf"] Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 23:06:17.252604 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-jzjtf" Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 23:06:17.259266 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 23:06:17.262161 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 23:06:17.272201 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-jzjtf"] Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 23:06:17.298197 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5a7bbbd-389f-49cb-b8b4-4a54280d034a-config-data\") pod \"nova-cell0-cell-mapping-jzjtf\" (UID: \"e5a7bbbd-389f-49cb-b8b4-4a54280d034a\") " pod="openstack/nova-cell0-cell-mapping-jzjtf" Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 23:06:17.298253 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5a7bbbd-389f-49cb-b8b4-4a54280d034a-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-jzjtf\" (UID: \"e5a7bbbd-389f-49cb-b8b4-4a54280d034a\") " pod="openstack/nova-cell0-cell-mapping-jzjtf" Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 23:06:17.298302 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5a7bbbd-389f-49cb-b8b4-4a54280d034a-scripts\") pod \"nova-cell0-cell-mapping-jzjtf\" (UID: \"e5a7bbbd-389f-49cb-b8b4-4a54280d034a\") " pod="openstack/nova-cell0-cell-mapping-jzjtf" Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 23:06:17.298388 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfhk4\" (UniqueName: 
\"kubernetes.io/projected/e5a7bbbd-389f-49cb-b8b4-4a54280d034a-kube-api-access-pfhk4\") pod \"nova-cell0-cell-mapping-jzjtf\" (UID: \"e5a7bbbd-389f-49cb-b8b4-4a54280d034a\") " pod="openstack/nova-cell0-cell-mapping-jzjtf" Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 23:06:17.400815 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5a7bbbd-389f-49cb-b8b4-4a54280d034a-config-data\") pod \"nova-cell0-cell-mapping-jzjtf\" (UID: \"e5a7bbbd-389f-49cb-b8b4-4a54280d034a\") " pod="openstack/nova-cell0-cell-mapping-jzjtf" Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 23:06:17.400899 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5a7bbbd-389f-49cb-b8b4-4a54280d034a-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-jzjtf\" (UID: \"e5a7bbbd-389f-49cb-b8b4-4a54280d034a\") " pod="openstack/nova-cell0-cell-mapping-jzjtf" Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 23:06:17.400951 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5a7bbbd-389f-49cb-b8b4-4a54280d034a-scripts\") pod \"nova-cell0-cell-mapping-jzjtf\" (UID: \"e5a7bbbd-389f-49cb-b8b4-4a54280d034a\") " pod="openstack/nova-cell0-cell-mapping-jzjtf" Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 23:06:17.400983 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfhk4\" (UniqueName: \"kubernetes.io/projected/e5a7bbbd-389f-49cb-b8b4-4a54280d034a-kube-api-access-pfhk4\") pod \"nova-cell0-cell-mapping-jzjtf\" (UID: \"e5a7bbbd-389f-49cb-b8b4-4a54280d034a\") " pod="openstack/nova-cell0-cell-mapping-jzjtf" Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 23:06:17.409663 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e5a7bbbd-389f-49cb-b8b4-4a54280d034a-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-jzjtf\" (UID: \"e5a7bbbd-389f-49cb-b8b4-4a54280d034a\") " pod="openstack/nova-cell0-cell-mapping-jzjtf" Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 23:06:17.412762 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5a7bbbd-389f-49cb-b8b4-4a54280d034a-scripts\") pod \"nova-cell0-cell-mapping-jzjtf\" (UID: \"e5a7bbbd-389f-49cb-b8b4-4a54280d034a\") " pod="openstack/nova-cell0-cell-mapping-jzjtf" Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 23:06:17.433543 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5a7bbbd-389f-49cb-b8b4-4a54280d034a-config-data\") pod \"nova-cell0-cell-mapping-jzjtf\" (UID: \"e5a7bbbd-389f-49cb-b8b4-4a54280d034a\") " pod="openstack/nova-cell0-cell-mapping-jzjtf" Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 23:06:17.438451 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfhk4\" (UniqueName: \"kubernetes.io/projected/e5a7bbbd-389f-49cb-b8b4-4a54280d034a-kube-api-access-pfhk4\") pod \"nova-cell0-cell-mapping-jzjtf\" (UID: \"e5a7bbbd-389f-49cb-b8b4-4a54280d034a\") " pod="openstack/nova-cell0-cell-mapping-jzjtf" Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 23:06:17.545435 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 23:06:17.546887 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 23:06:17.550666 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 23:06:17.559733 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 23:06:17.580263 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-jzjtf" Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 23:06:17.603918 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d1b0d02-1292-4b66-a0ae-28273fcf65c8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4d1b0d02-1292-4b66-a0ae-28273fcf65c8\") " pod="openstack/nova-scheduler-0" Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 23:06:17.603978 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrcxc\" (UniqueName: \"kubernetes.io/projected/4d1b0d02-1292-4b66-a0ae-28273fcf65c8-kube-api-access-hrcxc\") pod \"nova-scheduler-0\" (UID: \"4d1b0d02-1292-4b66-a0ae-28273fcf65c8\") " pod="openstack/nova-scheduler-0" Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 23:06:17.604045 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d1b0d02-1292-4b66-a0ae-28273fcf65c8-config-data\") pod \"nova-scheduler-0\" (UID: \"4d1b0d02-1292-4b66-a0ae-28273fcf65c8\") " pod="openstack/nova-scheduler-0" Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 23:06:17.616364 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 23:06:17.617987 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 23:06:17.621613 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 23:06:17.627926 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 23:06:17.648310 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 23:06:17.657976 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 23:06:17.663680 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 23:06:17.698362 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 23:06:17.709676 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1286f79c-0f07-4d5f-ae60-f54c1d7d725f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1286f79c-0f07-4d5f-ae60-f54c1d7d725f\") " pod="openstack/nova-api-0" Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 23:06:17.709730 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d1b0d02-1292-4b66-a0ae-28273fcf65c8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4d1b0d02-1292-4b66-a0ae-28273fcf65c8\") " pod="openstack/nova-scheduler-0" Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 23:06:17.709754 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/1286f79c-0f07-4d5f-ae60-f54c1d7d725f-config-data\") pod \"nova-api-0\" (UID: \"1286f79c-0f07-4d5f-ae60-f54c1d7d725f\") " pod="openstack/nova-api-0" Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 23:06:17.709779 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1aec611-46b0-46be-b951-fba1e8d2f282-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b1aec611-46b0-46be-b951-fba1e8d2f282\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 23:06:17.709803 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrcxc\" (UniqueName: \"kubernetes.io/projected/4d1b0d02-1292-4b66-a0ae-28273fcf65c8-kube-api-access-hrcxc\") pod \"nova-scheduler-0\" (UID: \"4d1b0d02-1292-4b66-a0ae-28273fcf65c8\") " pod="openstack/nova-scheduler-0" Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 23:06:17.709821 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lpbg\" (UniqueName: \"kubernetes.io/projected/1286f79c-0f07-4d5f-ae60-f54c1d7d725f-kube-api-access-5lpbg\") pod \"nova-api-0\" (UID: \"1286f79c-0f07-4d5f-ae60-f54c1d7d725f\") " pod="openstack/nova-api-0" Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 23:06:17.709871 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d1b0d02-1292-4b66-a0ae-28273fcf65c8-config-data\") pod \"nova-scheduler-0\" (UID: \"4d1b0d02-1292-4b66-a0ae-28273fcf65c8\") " pod="openstack/nova-scheduler-0" Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 23:06:17.709907 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1286f79c-0f07-4d5f-ae60-f54c1d7d725f-logs\") pod \"nova-api-0\" (UID: 
\"1286f79c-0f07-4d5f-ae60-f54c1d7d725f\") " pod="openstack/nova-api-0" Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 23:06:17.709970 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1aec611-46b0-46be-b951-fba1e8d2f282-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b1aec611-46b0-46be-b951-fba1e8d2f282\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 23:06:17.709990 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6lzs\" (UniqueName: \"kubernetes.io/projected/b1aec611-46b0-46be-b951-fba1e8d2f282-kube-api-access-k6lzs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b1aec611-46b0-46be-b951-fba1e8d2f282\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 23:06:17.722871 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d1b0d02-1292-4b66-a0ae-28273fcf65c8-config-data\") pod \"nova-scheduler-0\" (UID: \"4d1b0d02-1292-4b66-a0ae-28273fcf65c8\") " pod="openstack/nova-scheduler-0" Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 23:06:17.724646 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d1b0d02-1292-4b66-a0ae-28273fcf65c8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4d1b0d02-1292-4b66-a0ae-28273fcf65c8\") " pod="openstack/nova-scheduler-0" Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 23:06:17.748027 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrcxc\" (UniqueName: \"kubernetes.io/projected/4d1b0d02-1292-4b66-a0ae-28273fcf65c8-kube-api-access-hrcxc\") pod \"nova-scheduler-0\" (UID: \"4d1b0d02-1292-4b66-a0ae-28273fcf65c8\") " pod="openstack/nova-scheduler-0" Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 
23:06:17.758710 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 23:06:17.760255 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 23:06:17.767217 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 23:06:17.768063 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 23:06:17.812648 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1286f79c-0f07-4d5f-ae60-f54c1d7d725f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1286f79c-0f07-4d5f-ae60-f54c1d7d725f\") " pod="openstack/nova-api-0" Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 23:06:17.813015 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1286f79c-0f07-4d5f-ae60-f54c1d7d725f-config-data\") pod \"nova-api-0\" (UID: \"1286f79c-0f07-4d5f-ae60-f54c1d7d725f\") " pod="openstack/nova-api-0" Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 23:06:17.813049 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/982647c1-2356-472e-8e59-5000d74f65ab-logs\") pod \"nova-metadata-0\" (UID: \"982647c1-2356-472e-8e59-5000d74f65ab\") " pod="openstack/nova-metadata-0" Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 23:06:17.813080 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1aec611-46b0-46be-b951-fba1e8d2f282-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b1aec611-46b0-46be-b951-fba1e8d2f282\") 
" pod="openstack/nova-cell1-novncproxy-0" Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 23:06:17.813111 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lpbg\" (UniqueName: \"kubernetes.io/projected/1286f79c-0f07-4d5f-ae60-f54c1d7d725f-kube-api-access-5lpbg\") pod \"nova-api-0\" (UID: \"1286f79c-0f07-4d5f-ae60-f54c1d7d725f\") " pod="openstack/nova-api-0" Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 23:06:17.813204 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1286f79c-0f07-4d5f-ae60-f54c1d7d725f-logs\") pod \"nova-api-0\" (UID: \"1286f79c-0f07-4d5f-ae60-f54c1d7d725f\") " pod="openstack/nova-api-0" Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 23:06:17.813229 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6z2jr\" (UniqueName: \"kubernetes.io/projected/982647c1-2356-472e-8e59-5000d74f65ab-kube-api-access-6z2jr\") pod \"nova-metadata-0\" (UID: \"982647c1-2356-472e-8e59-5000d74f65ab\") " pod="openstack/nova-metadata-0" Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 23:06:17.813272 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/982647c1-2356-472e-8e59-5000d74f65ab-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"982647c1-2356-472e-8e59-5000d74f65ab\") " pod="openstack/nova-metadata-0" Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 23:06:17.813318 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/982647c1-2356-472e-8e59-5000d74f65ab-config-data\") pod \"nova-metadata-0\" (UID: \"982647c1-2356-472e-8e59-5000d74f65ab\") " pod="openstack/nova-metadata-0" Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 23:06:17.816568 4865 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1aec611-46b0-46be-b951-fba1e8d2f282-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b1aec611-46b0-46be-b951-fba1e8d2f282\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 23:06:17.816620 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6lzs\" (UniqueName: \"kubernetes.io/projected/b1aec611-46b0-46be-b951-fba1e8d2f282-kube-api-access-k6lzs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b1aec611-46b0-46be-b951-fba1e8d2f282\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 23:06:17.836036 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1286f79c-0f07-4d5f-ae60-f54c1d7d725f-logs\") pod \"nova-api-0\" (UID: \"1286f79c-0f07-4d5f-ae60-f54c1d7d725f\") " pod="openstack/nova-api-0" Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 23:06:17.866487 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1aec611-46b0-46be-b951-fba1e8d2f282-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b1aec611-46b0-46be-b951-fba1e8d2f282\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 23:06:17.867076 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1aec611-46b0-46be-b951-fba1e8d2f282-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b1aec611-46b0-46be-b951-fba1e8d2f282\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 23:06:17.868107 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1286f79c-0f07-4d5f-ae60-f54c1d7d725f-combined-ca-bundle\") pod 
\"nova-api-0\" (UID: \"1286f79c-0f07-4d5f-ae60-f54c1d7d725f\") " pod="openstack/nova-api-0" Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 23:06:17.869474 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1286f79c-0f07-4d5f-ae60-f54c1d7d725f-config-data\") pod \"nova-api-0\" (UID: \"1286f79c-0f07-4d5f-ae60-f54c1d7d725f\") " pod="openstack/nova-api-0" Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 23:06:17.870494 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 23:06:17.871484 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6lzs\" (UniqueName: \"kubernetes.io/projected/b1aec611-46b0-46be-b951-fba1e8d2f282-kube-api-access-k6lzs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b1aec611-46b0-46be-b951-fba1e8d2f282\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 23:06:17.874425 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lpbg\" (UniqueName: \"kubernetes.io/projected/1286f79c-0f07-4d5f-ae60-f54c1d7d725f-kube-api-access-5lpbg\") pod \"nova-api-0\" (UID: \"1286f79c-0f07-4d5f-ae60-f54c1d7d725f\") " pod="openstack/nova-api-0" Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 23:06:17.890834 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-fztgd"] Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 23:06:17.892872 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-fztgd"] Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 23:06:17.892985 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-fztgd" Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 23:06:17.921647 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/982647c1-2356-472e-8e59-5000d74f65ab-config-data\") pod \"nova-metadata-0\" (UID: \"982647c1-2356-472e-8e59-5000d74f65ab\") " pod="openstack/nova-metadata-0" Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 23:06:17.921731 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/20c47fac-7072-47a1-a396-fdcc07153dc1-ovsdbserver-nb\") pod \"dnsmasq-dns-845d6d6f59-fztgd\" (UID: \"20c47fac-7072-47a1-a396-fdcc07153dc1\") " pod="openstack/dnsmasq-dns-845d6d6f59-fztgd" Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 23:06:17.921775 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/20c47fac-7072-47a1-a396-fdcc07153dc1-ovsdbserver-sb\") pod \"dnsmasq-dns-845d6d6f59-fztgd\" (UID: \"20c47fac-7072-47a1-a396-fdcc07153dc1\") " pod="openstack/dnsmasq-dns-845d6d6f59-fztgd" Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 23:06:17.921809 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/982647c1-2356-472e-8e59-5000d74f65ab-logs\") pod \"nova-metadata-0\" (UID: \"982647c1-2356-472e-8e59-5000d74f65ab\") " pod="openstack/nova-metadata-0" Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 23:06:17.921898 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20c47fac-7072-47a1-a396-fdcc07153dc1-config\") pod \"dnsmasq-dns-845d6d6f59-fztgd\" (UID: \"20c47fac-7072-47a1-a396-fdcc07153dc1\") " pod="openstack/dnsmasq-dns-845d6d6f59-fztgd" Feb 16 23:06:17 crc 
kubenswrapper[4865]: I0216 23:06:17.921921 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/20c47fac-7072-47a1-a396-fdcc07153dc1-dns-swift-storage-0\") pod \"dnsmasq-dns-845d6d6f59-fztgd\" (UID: \"20c47fac-7072-47a1-a396-fdcc07153dc1\") " pod="openstack/dnsmasq-dns-845d6d6f59-fztgd" Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 23:06:17.921949 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/20c47fac-7072-47a1-a396-fdcc07153dc1-dns-svc\") pod \"dnsmasq-dns-845d6d6f59-fztgd\" (UID: \"20c47fac-7072-47a1-a396-fdcc07153dc1\") " pod="openstack/dnsmasq-dns-845d6d6f59-fztgd" Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 23:06:17.921976 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6z2jr\" (UniqueName: \"kubernetes.io/projected/982647c1-2356-472e-8e59-5000d74f65ab-kube-api-access-6z2jr\") pod \"nova-metadata-0\" (UID: \"982647c1-2356-472e-8e59-5000d74f65ab\") " pod="openstack/nova-metadata-0" Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 23:06:17.921995 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9mft\" (UniqueName: \"kubernetes.io/projected/20c47fac-7072-47a1-a396-fdcc07153dc1-kube-api-access-v9mft\") pod \"dnsmasq-dns-845d6d6f59-fztgd\" (UID: \"20c47fac-7072-47a1-a396-fdcc07153dc1\") " pod="openstack/dnsmasq-dns-845d6d6f59-fztgd" Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 23:06:17.922024 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/982647c1-2356-472e-8e59-5000d74f65ab-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"982647c1-2356-472e-8e59-5000d74f65ab\") " pod="openstack/nova-metadata-0" Feb 16 23:06:17 crc kubenswrapper[4865]: 
I0216 23:06:17.922567 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/982647c1-2356-472e-8e59-5000d74f65ab-logs\") pod \"nova-metadata-0\" (UID: \"982647c1-2356-472e-8e59-5000d74f65ab\") " pod="openstack/nova-metadata-0" Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 23:06:17.940823 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/982647c1-2356-472e-8e59-5000d74f65ab-config-data\") pod \"nova-metadata-0\" (UID: \"982647c1-2356-472e-8e59-5000d74f65ab\") " pod="openstack/nova-metadata-0" Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 23:06:17.941579 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/982647c1-2356-472e-8e59-5000d74f65ab-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"982647c1-2356-472e-8e59-5000d74f65ab\") " pod="openstack/nova-metadata-0" Feb 16 23:06:17 crc kubenswrapper[4865]: I0216 23:06:17.944685 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6z2jr\" (UniqueName: \"kubernetes.io/projected/982647c1-2356-472e-8e59-5000d74f65ab-kube-api-access-6z2jr\") pod \"nova-metadata-0\" (UID: \"982647c1-2356-472e-8e59-5000d74f65ab\") " pod="openstack/nova-metadata-0" Feb 16 23:06:18 crc kubenswrapper[4865]: I0216 23:06:18.024369 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20c47fac-7072-47a1-a396-fdcc07153dc1-config\") pod \"dnsmasq-dns-845d6d6f59-fztgd\" (UID: \"20c47fac-7072-47a1-a396-fdcc07153dc1\") " pod="openstack/dnsmasq-dns-845d6d6f59-fztgd" Feb 16 23:06:18 crc kubenswrapper[4865]: I0216 23:06:18.024619 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/20c47fac-7072-47a1-a396-fdcc07153dc1-dns-swift-storage-0\") pod \"dnsmasq-dns-845d6d6f59-fztgd\" (UID: \"20c47fac-7072-47a1-a396-fdcc07153dc1\") " pod="openstack/dnsmasq-dns-845d6d6f59-fztgd" Feb 16 23:06:18 crc kubenswrapper[4865]: I0216 23:06:18.024659 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/20c47fac-7072-47a1-a396-fdcc07153dc1-dns-svc\") pod \"dnsmasq-dns-845d6d6f59-fztgd\" (UID: \"20c47fac-7072-47a1-a396-fdcc07153dc1\") " pod="openstack/dnsmasq-dns-845d6d6f59-fztgd" Feb 16 23:06:18 crc kubenswrapper[4865]: I0216 23:06:18.024692 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9mft\" (UniqueName: \"kubernetes.io/projected/20c47fac-7072-47a1-a396-fdcc07153dc1-kube-api-access-v9mft\") pod \"dnsmasq-dns-845d6d6f59-fztgd\" (UID: \"20c47fac-7072-47a1-a396-fdcc07153dc1\") " pod="openstack/dnsmasq-dns-845d6d6f59-fztgd" Feb 16 23:06:18 crc kubenswrapper[4865]: I0216 23:06:18.024748 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/20c47fac-7072-47a1-a396-fdcc07153dc1-ovsdbserver-nb\") pod \"dnsmasq-dns-845d6d6f59-fztgd\" (UID: \"20c47fac-7072-47a1-a396-fdcc07153dc1\") " pod="openstack/dnsmasq-dns-845d6d6f59-fztgd" Feb 16 23:06:18 crc kubenswrapper[4865]: I0216 23:06:18.024783 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/20c47fac-7072-47a1-a396-fdcc07153dc1-ovsdbserver-sb\") pod \"dnsmasq-dns-845d6d6f59-fztgd\" (UID: \"20c47fac-7072-47a1-a396-fdcc07153dc1\") " pod="openstack/dnsmasq-dns-845d6d6f59-fztgd" Feb 16 23:06:18 crc kubenswrapper[4865]: I0216 23:06:18.025596 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/20c47fac-7072-47a1-a396-fdcc07153dc1-config\") pod \"dnsmasq-dns-845d6d6f59-fztgd\" (UID: \"20c47fac-7072-47a1-a396-fdcc07153dc1\") " pod="openstack/dnsmasq-dns-845d6d6f59-fztgd" Feb 16 23:06:18 crc kubenswrapper[4865]: I0216 23:06:18.025613 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/20c47fac-7072-47a1-a396-fdcc07153dc1-ovsdbserver-sb\") pod \"dnsmasq-dns-845d6d6f59-fztgd\" (UID: \"20c47fac-7072-47a1-a396-fdcc07153dc1\") " pod="openstack/dnsmasq-dns-845d6d6f59-fztgd" Feb 16 23:06:18 crc kubenswrapper[4865]: I0216 23:06:18.027098 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/20c47fac-7072-47a1-a396-fdcc07153dc1-ovsdbserver-nb\") pod \"dnsmasq-dns-845d6d6f59-fztgd\" (UID: \"20c47fac-7072-47a1-a396-fdcc07153dc1\") " pod="openstack/dnsmasq-dns-845d6d6f59-fztgd" Feb 16 23:06:18 crc kubenswrapper[4865]: I0216 23:06:18.027682 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/20c47fac-7072-47a1-a396-fdcc07153dc1-dns-swift-storage-0\") pod \"dnsmasq-dns-845d6d6f59-fztgd\" (UID: \"20c47fac-7072-47a1-a396-fdcc07153dc1\") " pod="openstack/dnsmasq-dns-845d6d6f59-fztgd" Feb 16 23:06:18 crc kubenswrapper[4865]: I0216 23:06:18.027884 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/20c47fac-7072-47a1-a396-fdcc07153dc1-dns-svc\") pod \"dnsmasq-dns-845d6d6f59-fztgd\" (UID: \"20c47fac-7072-47a1-a396-fdcc07153dc1\") " pod="openstack/dnsmasq-dns-845d6d6f59-fztgd" Feb 16 23:06:18 crc kubenswrapper[4865]: I0216 23:06:18.054090 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9mft\" (UniqueName: \"kubernetes.io/projected/20c47fac-7072-47a1-a396-fdcc07153dc1-kube-api-access-v9mft\") pod 
\"dnsmasq-dns-845d6d6f59-fztgd\" (UID: \"20c47fac-7072-47a1-a396-fdcc07153dc1\") " pod="openstack/dnsmasq-dns-845d6d6f59-fztgd" Feb 16 23:06:18 crc kubenswrapper[4865]: I0216 23:06:18.071744 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 23:06:18 crc kubenswrapper[4865]: I0216 23:06:18.089124 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 23:06:18 crc kubenswrapper[4865]: I0216 23:06:18.129751 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 23:06:18 crc kubenswrapper[4865]: I0216 23:06:18.214345 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-fztgd" Feb 16 23:06:18 crc kubenswrapper[4865]: I0216 23:06:18.260330 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-jzjtf"] Feb 16 23:06:18 crc kubenswrapper[4865]: I0216 23:06:18.364239 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-jzjtf" event={"ID":"e5a7bbbd-389f-49cb-b8b4-4a54280d034a","Type":"ContainerStarted","Data":"a2853d6bd6a2d72f04145cff394266f61937dda1e4a4489e621f07944349b9de"} Feb 16 23:06:18 crc kubenswrapper[4865]: I0216 23:06:18.444373 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 23:06:18 crc kubenswrapper[4865]: I0216 23:06:18.490244 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-7lw7w"] Feb 16 23:06:18 crc kubenswrapper[4865]: I0216 23:06:18.491577 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-7lw7w" Feb 16 23:06:18 crc kubenswrapper[4865]: I0216 23:06:18.494270 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 16 23:06:18 crc kubenswrapper[4865]: I0216 23:06:18.494476 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 16 23:06:18 crc kubenswrapper[4865]: I0216 23:06:18.507411 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-7lw7w"] Feb 16 23:06:18 crc kubenswrapper[4865]: I0216 23:06:18.539741 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee0920ef-18c3-4e01-b206-08b31472078a-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-7lw7w\" (UID: \"ee0920ef-18c3-4e01-b206-08b31472078a\") " pod="openstack/nova-cell1-conductor-db-sync-7lw7w" Feb 16 23:06:18 crc kubenswrapper[4865]: I0216 23:06:18.539817 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee0920ef-18c3-4e01-b206-08b31472078a-scripts\") pod \"nova-cell1-conductor-db-sync-7lw7w\" (UID: \"ee0920ef-18c3-4e01-b206-08b31472078a\") " pod="openstack/nova-cell1-conductor-db-sync-7lw7w" Feb 16 23:06:18 crc kubenswrapper[4865]: I0216 23:06:18.539876 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee0920ef-18c3-4e01-b206-08b31472078a-config-data\") pod \"nova-cell1-conductor-db-sync-7lw7w\" (UID: \"ee0920ef-18c3-4e01-b206-08b31472078a\") " pod="openstack/nova-cell1-conductor-db-sync-7lw7w" Feb 16 23:06:18 crc kubenswrapper[4865]: I0216 23:06:18.539956 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-wdsjq\" (UniqueName: \"kubernetes.io/projected/ee0920ef-18c3-4e01-b206-08b31472078a-kube-api-access-wdsjq\") pod \"nova-cell1-conductor-db-sync-7lw7w\" (UID: \"ee0920ef-18c3-4e01-b206-08b31472078a\") " pod="openstack/nova-cell1-conductor-db-sync-7lw7w" Feb 16 23:06:18 crc kubenswrapper[4865]: I0216 23:06:18.641777 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee0920ef-18c3-4e01-b206-08b31472078a-scripts\") pod \"nova-cell1-conductor-db-sync-7lw7w\" (UID: \"ee0920ef-18c3-4e01-b206-08b31472078a\") " pod="openstack/nova-cell1-conductor-db-sync-7lw7w" Feb 16 23:06:18 crc kubenswrapper[4865]: I0216 23:06:18.641865 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee0920ef-18c3-4e01-b206-08b31472078a-config-data\") pod \"nova-cell1-conductor-db-sync-7lw7w\" (UID: \"ee0920ef-18c3-4e01-b206-08b31472078a\") " pod="openstack/nova-cell1-conductor-db-sync-7lw7w" Feb 16 23:06:18 crc kubenswrapper[4865]: I0216 23:06:18.641956 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdsjq\" (UniqueName: \"kubernetes.io/projected/ee0920ef-18c3-4e01-b206-08b31472078a-kube-api-access-wdsjq\") pod \"nova-cell1-conductor-db-sync-7lw7w\" (UID: \"ee0920ef-18c3-4e01-b206-08b31472078a\") " pod="openstack/nova-cell1-conductor-db-sync-7lw7w" Feb 16 23:06:18 crc kubenswrapper[4865]: I0216 23:06:18.641999 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee0920ef-18c3-4e01-b206-08b31472078a-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-7lw7w\" (UID: \"ee0920ef-18c3-4e01-b206-08b31472078a\") " pod="openstack/nova-cell1-conductor-db-sync-7lw7w" Feb 16 23:06:18 crc kubenswrapper[4865]: I0216 23:06:18.647912 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee0920ef-18c3-4e01-b206-08b31472078a-config-data\") pod \"nova-cell1-conductor-db-sync-7lw7w\" (UID: \"ee0920ef-18c3-4e01-b206-08b31472078a\") " pod="openstack/nova-cell1-conductor-db-sync-7lw7w"
Feb 16 23:06:18 crc kubenswrapper[4865]: I0216 23:06:18.649892 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee0920ef-18c3-4e01-b206-08b31472078a-scripts\") pod \"nova-cell1-conductor-db-sync-7lw7w\" (UID: \"ee0920ef-18c3-4e01-b206-08b31472078a\") " pod="openstack/nova-cell1-conductor-db-sync-7lw7w"
Feb 16 23:06:18 crc kubenswrapper[4865]: I0216 23:06:18.650832 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee0920ef-18c3-4e01-b206-08b31472078a-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-7lw7w\" (UID: \"ee0920ef-18c3-4e01-b206-08b31472078a\") " pod="openstack/nova-cell1-conductor-db-sync-7lw7w"
Feb 16 23:06:18 crc kubenswrapper[4865]: I0216 23:06:18.659700 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdsjq\" (UniqueName: \"kubernetes.io/projected/ee0920ef-18c3-4e01-b206-08b31472078a-kube-api-access-wdsjq\") pod \"nova-cell1-conductor-db-sync-7lw7w\" (UID: \"ee0920ef-18c3-4e01-b206-08b31472078a\") " pod="openstack/nova-cell1-conductor-db-sync-7lw7w"
Feb 16 23:06:18 crc kubenswrapper[4865]: I0216 23:06:18.717795 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 16 23:06:18 crc kubenswrapper[4865]: W0216 23:06:18.719406 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb1aec611_46b0_46be_b951_fba1e8d2f282.slice/crio-b447c91230d164371381b42b92f9967786503db19f510fb7a5582fb27fc23a9d WatchSource:0}: Error finding container b447c91230d164371381b42b92f9967786503db19f510fb7a5582fb27fc23a9d: Status 404 returned error can't find the container with id b447c91230d164371381b42b92f9967786503db19f510fb7a5582fb27fc23a9d
Feb 16 23:06:18 crc kubenswrapper[4865]: I0216 23:06:18.827024 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 16 23:06:18 crc kubenswrapper[4865]: I0216 23:06:18.843578 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 16 23:06:18 crc kubenswrapper[4865]: I0216 23:06:18.882310 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-7lw7w"
Feb 16 23:06:18 crc kubenswrapper[4865]: I0216 23:06:18.950566 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-fztgd"]
Feb 16 23:06:18 crc kubenswrapper[4865]: W0216 23:06:18.955609 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20c47fac_7072_47a1_a396_fdcc07153dc1.slice/crio-ebe026f226050804e1b5c325268e67d7629d0b7903b955465a16a3412cf9e5fa WatchSource:0}: Error finding container ebe026f226050804e1b5c325268e67d7629d0b7903b955465a16a3412cf9e5fa: Status 404 returned error can't find the container with id ebe026f226050804e1b5c325268e67d7629d0b7903b955465a16a3412cf9e5fa
Feb 16 23:06:19 crc kubenswrapper[4865]: I0216 23:06:19.376473 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1286f79c-0f07-4d5f-ae60-f54c1d7d725f","Type":"ContainerStarted","Data":"c28050e55929fa2ef807282db72349ec3ccfdfb0a2c5504ff3fbca8311012914"}
Feb 16 23:06:19 crc kubenswrapper[4865]: I0216 23:06:19.378846 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b1aec611-46b0-46be-b951-fba1e8d2f282","Type":"ContainerStarted","Data":"b447c91230d164371381b42b92f9967786503db19f510fb7a5582fb27fc23a9d"}
Feb 16 23:06:19 crc kubenswrapper[4865]: I0216 23:06:19.381095 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"982647c1-2356-472e-8e59-5000d74f65ab","Type":"ContainerStarted","Data":"8629c457b5e8825e8f1f5599702495c04eed425c6e20b756b90bdd5c5c3e13bd"}
Feb 16 23:06:19 crc kubenswrapper[4865]: I0216 23:06:19.382396 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-jzjtf" event={"ID":"e5a7bbbd-389f-49cb-b8b4-4a54280d034a","Type":"ContainerStarted","Data":"cc8be8ea2f4e8243c48e16f613b1c44ac6bb1df9db61237c737caeabf79e098c"}
Feb 16 23:06:19 crc kubenswrapper[4865]: I0216 23:06:19.392064 4865 generic.go:334] "Generic (PLEG): container finished" podID="20c47fac-7072-47a1-a396-fdcc07153dc1" containerID="309cf82e9581227a93a6dee8bb7425af8fe029ee5e40daa91438ad541a4145d3" exitCode=0
Feb 16 23:06:19 crc kubenswrapper[4865]: I0216 23:06:19.392141 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-fztgd" event={"ID":"20c47fac-7072-47a1-a396-fdcc07153dc1","Type":"ContainerDied","Data":"309cf82e9581227a93a6dee8bb7425af8fe029ee5e40daa91438ad541a4145d3"}
Feb 16 23:06:19 crc kubenswrapper[4865]: I0216 23:06:19.392172 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-fztgd" event={"ID":"20c47fac-7072-47a1-a396-fdcc07153dc1","Type":"ContainerStarted","Data":"ebe026f226050804e1b5c325268e67d7629d0b7903b955465a16a3412cf9e5fa"}
Feb 16 23:06:19 crc kubenswrapper[4865]: I0216 23:06:19.395832 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4d1b0d02-1292-4b66-a0ae-28273fcf65c8","Type":"ContainerStarted","Data":"b7773c3c14b7344dd21ba3bd6cecadafe5485f58b46aa7fe0cb575c44af14313"}
Feb 16 23:06:19 crc kubenswrapper[4865]: I0216 23:06:19.454606 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-jzjtf" podStartSLOduration=2.454580541 podStartE2EDuration="2.454580541s" podCreationTimestamp="2026-02-16 23:06:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 23:06:19.402895363 +0000 UTC m=+1219.726602324" watchObservedRunningTime="2026-02-16 23:06:19.454580541 +0000 UTC m=+1219.778287502"
Feb 16 23:06:19 crc kubenswrapper[4865]: I0216 23:06:19.486527 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-7lw7w"]
Feb 16 23:06:20 crc kubenswrapper[4865]: I0216 23:06:20.405722 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-fztgd" event={"ID":"20c47fac-7072-47a1-a396-fdcc07153dc1","Type":"ContainerStarted","Data":"645283c2e8e429e86a4407eb0e60cb4f27c931fe3a0a64d79266d0c4d79d74d3"}
Feb 16 23:06:20 crc kubenswrapper[4865]: I0216 23:06:20.406053 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-845d6d6f59-fztgd"
Feb 16 23:06:20 crc kubenswrapper[4865]: I0216 23:06:20.410751 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-7lw7w" event={"ID":"ee0920ef-18c3-4e01-b206-08b31472078a","Type":"ContainerStarted","Data":"a95aea6349f0ab21688e30bd9d52c833a8199feaad2386a6f4b0f78c830877a6"}
Feb 16 23:06:20 crc kubenswrapper[4865]: I0216 23:06:20.410790 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-7lw7w" event={"ID":"ee0920ef-18c3-4e01-b206-08b31472078a","Type":"ContainerStarted","Data":"5708b0fb196e8422ad31fa381cf2c7fa1f8bbea4ef409b74b1a2c9d273c7d18b"}
Feb 16 23:06:20 crc kubenswrapper[4865]: I0216 23:06:20.436725 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-845d6d6f59-fztgd" podStartSLOduration=3.436705066 podStartE2EDuration="3.436705066s" podCreationTimestamp="2026-02-16 23:06:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 23:06:20.435714888 +0000 UTC m=+1220.759421839" watchObservedRunningTime="2026-02-16 23:06:20.436705066 +0000 UTC m=+1220.760412017"
Feb 16 23:06:20 crc kubenswrapper[4865]: I0216 23:06:20.466057 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-7lw7w" podStartSLOduration=2.46603741 podStartE2EDuration="2.46603741s" podCreationTimestamp="2026-02-16 23:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 23:06:20.460340198 +0000 UTC m=+1220.784047159" watchObservedRunningTime="2026-02-16 23:06:20.46603741 +0000 UTC m=+1220.789744361"
Feb 16 23:06:21 crc kubenswrapper[4865]: I0216 23:06:21.041856 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 16 23:06:21 crc kubenswrapper[4865]: I0216 23:06:21.108268 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 16 23:06:22 crc kubenswrapper[4865]: I0216 23:06:22.426029 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4d1b0d02-1292-4b66-a0ae-28273fcf65c8","Type":"ContainerStarted","Data":"aa798551aa8ca0bb8aaab6ab3dea207fee64a2410f8bd0fe5f7c96b562ad448f"}
Feb 16 23:06:22 crc kubenswrapper[4865]: I0216 23:06:22.432050 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1286f79c-0f07-4d5f-ae60-f54c1d7d725f","Type":"ContainerStarted","Data":"ca647d0608d240227f52c9a4742b7f9d926ebde3a855c8e6e5e26ab61a09ef3e"}
Feb 16 23:06:22 crc kubenswrapper[4865]: I0216 23:06:22.432326 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1286f79c-0f07-4d5f-ae60-f54c1d7d725f","Type":"ContainerStarted","Data":"16ddb39c3c624745bf630fa558a29b89888839e59af9dc3e1c169e83f4ffe78c"}
Feb 16 23:06:22 crc kubenswrapper[4865]: I0216 23:06:22.433932 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b1aec611-46b0-46be-b951-fba1e8d2f282","Type":"ContainerStarted","Data":"bdecadcffe4ba7fb7c9f6dc96631226b5e2f90f8fa02df2b6d81ce7821b8d9e1"}
Feb 16 23:06:22 crc kubenswrapper[4865]: I0216 23:06:22.434082 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="b1aec611-46b0-46be-b951-fba1e8d2f282" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://bdecadcffe4ba7fb7c9f6dc96631226b5e2f90f8fa02df2b6d81ce7821b8d9e1" gracePeriod=30
Feb 16 23:06:22 crc kubenswrapper[4865]: I0216 23:06:22.438134 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"982647c1-2356-472e-8e59-5000d74f65ab","Type":"ContainerStarted","Data":"da481e232578ddbfd4681a6379d6f7209b31e88c3bd330712dd05b47ceaa2a7a"}
Feb 16 23:06:22 crc kubenswrapper[4865]: I0216 23:06:22.438180 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"982647c1-2356-472e-8e59-5000d74f65ab","Type":"ContainerStarted","Data":"fa75700efa0d1f9c8f15a92997714bf3cd1c9c39a74e1537f0470b7f4391f533"}
Feb 16 23:06:22 crc kubenswrapper[4865]: I0216 23:06:22.438291 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="982647c1-2356-472e-8e59-5000d74f65ab" containerName="nova-metadata-metadata" containerID="cri-o://da481e232578ddbfd4681a6379d6f7209b31e88c3bd330712dd05b47ceaa2a7a" gracePeriod=30
Feb 16 23:06:22 crc kubenswrapper[4865]: I0216 23:06:22.438293 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="982647c1-2356-472e-8e59-5000d74f65ab" containerName="nova-metadata-log" containerID="cri-o://fa75700efa0d1f9c8f15a92997714bf3cd1c9c39a74e1537f0470b7f4391f533" gracePeriod=30
Feb 16 23:06:22 crc kubenswrapper[4865]: I0216 23:06:22.466861 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.474823867 podStartE2EDuration="5.466836848s" podCreationTimestamp="2026-02-16 23:06:17 +0000 UTC" firstStartedPulling="2026-02-16 23:06:18.501555964 +0000 UTC m=+1218.825262925" lastFinishedPulling="2026-02-16 23:06:21.493568945 +0000 UTC m=+1221.817275906" observedRunningTime="2026-02-16 23:06:22.452571923 +0000 UTC m=+1222.776278884" watchObservedRunningTime="2026-02-16 23:06:22.466836848 +0000 UTC m=+1222.790543809"
Feb 16 23:06:22 crc kubenswrapper[4865]: I0216 23:06:22.479061 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.811480799 podStartE2EDuration="5.479039295s" podCreationTimestamp="2026-02-16 23:06:17 +0000 UTC" firstStartedPulling="2026-02-16 23:06:18.834209092 +0000 UTC m=+1219.157916053" lastFinishedPulling="2026-02-16 23:06:21.501767588 +0000 UTC m=+1221.825474549" observedRunningTime="2026-02-16 23:06:22.476551984 +0000 UTC m=+1222.800258955" watchObservedRunningTime="2026-02-16 23:06:22.479039295 +0000 UTC m=+1222.802746256"
Feb 16 23:06:22 crc kubenswrapper[4865]: I0216 23:06:22.530622 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.755956891 podStartE2EDuration="5.530597008s" podCreationTimestamp="2026-02-16 23:06:17 +0000 UTC" firstStartedPulling="2026-02-16 23:06:18.724719252 +0000 UTC m=+1219.048426213" lastFinishedPulling="2026-02-16 23:06:21.499359369 +0000 UTC m=+1221.823066330" observedRunningTime="2026-02-16 23:06:22.492047074 +0000 UTC m=+1222.815754035" watchObservedRunningTime="2026-02-16 23:06:22.530597008 +0000 UTC m=+1222.854303969"
Feb 16 23:06:22 crc kubenswrapper[4865]: I0216 23:06:22.544897 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.87386464 podStartE2EDuration="5.544872774s" podCreationTimestamp="2026-02-16 23:06:17 +0000 UTC" firstStartedPulling="2026-02-16 23:06:18.83309488 +0000 UTC m=+1219.156801831" lastFinishedPulling="2026-02-16 23:06:21.504103014 +0000 UTC m=+1221.827809965" observedRunningTime="2026-02-16 23:06:22.530953258 +0000 UTC m=+1222.854660209" watchObservedRunningTime="2026-02-16 23:06:22.544872774 +0000 UTC m=+1222.868579735"
Feb 16 23:06:22 crc kubenswrapper[4865]: I0216 23:06:22.871125 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Feb 16 23:06:23 crc kubenswrapper[4865]: I0216 23:06:23.043686 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 16 23:06:23 crc kubenswrapper[4865]: I0216 23:06:23.090157 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Feb 16 23:06:23 crc kubenswrapper[4865]: I0216 23:06:23.146500 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/982647c1-2356-472e-8e59-5000d74f65ab-logs\") pod \"982647c1-2356-472e-8e59-5000d74f65ab\" (UID: \"982647c1-2356-472e-8e59-5000d74f65ab\") "
Feb 16 23:06:23 crc kubenswrapper[4865]: I0216 23:06:23.146658 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/982647c1-2356-472e-8e59-5000d74f65ab-combined-ca-bundle\") pod \"982647c1-2356-472e-8e59-5000d74f65ab\" (UID: \"982647c1-2356-472e-8e59-5000d74f65ab\") "
Feb 16 23:06:23 crc kubenswrapper[4865]: I0216 23:06:23.146796 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/982647c1-2356-472e-8e59-5000d74f65ab-config-data\") pod \"982647c1-2356-472e-8e59-5000d74f65ab\" (UID: \"982647c1-2356-472e-8e59-5000d74f65ab\") "
Feb 16 23:06:23 crc kubenswrapper[4865]: I0216 23:06:23.146836 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6z2jr\" (UniqueName: \"kubernetes.io/projected/982647c1-2356-472e-8e59-5000d74f65ab-kube-api-access-6z2jr\") pod \"982647c1-2356-472e-8e59-5000d74f65ab\" (UID: \"982647c1-2356-472e-8e59-5000d74f65ab\") "
Feb 16 23:06:23 crc kubenswrapper[4865]: I0216 23:06:23.146951 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/982647c1-2356-472e-8e59-5000d74f65ab-logs" (OuterVolumeSpecName: "logs") pod "982647c1-2356-472e-8e59-5000d74f65ab" (UID: "982647c1-2356-472e-8e59-5000d74f65ab"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 23:06:23 crc kubenswrapper[4865]: I0216 23:06:23.147511 4865 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/982647c1-2356-472e-8e59-5000d74f65ab-logs\") on node \"crc\" DevicePath \"\""
Feb 16 23:06:23 crc kubenswrapper[4865]: I0216 23:06:23.152143 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/982647c1-2356-472e-8e59-5000d74f65ab-kube-api-access-6z2jr" (OuterVolumeSpecName: "kube-api-access-6z2jr") pod "982647c1-2356-472e-8e59-5000d74f65ab" (UID: "982647c1-2356-472e-8e59-5000d74f65ab"). InnerVolumeSpecName "kube-api-access-6z2jr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 23:06:23 crc kubenswrapper[4865]: I0216 23:06:23.178336 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/982647c1-2356-472e-8e59-5000d74f65ab-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "982647c1-2356-472e-8e59-5000d74f65ab" (UID: "982647c1-2356-472e-8e59-5000d74f65ab"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 23:06:23 crc kubenswrapper[4865]: I0216 23:06:23.185824 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/982647c1-2356-472e-8e59-5000d74f65ab-config-data" (OuterVolumeSpecName: "config-data") pod "982647c1-2356-472e-8e59-5000d74f65ab" (UID: "982647c1-2356-472e-8e59-5000d74f65ab"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 23:06:23 crc kubenswrapper[4865]: I0216 23:06:23.250382 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/982647c1-2356-472e-8e59-5000d74f65ab-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 23:06:23 crc kubenswrapper[4865]: I0216 23:06:23.250430 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/982647c1-2356-472e-8e59-5000d74f65ab-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 23:06:23 crc kubenswrapper[4865]: I0216 23:06:23.250453 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6z2jr\" (UniqueName: \"kubernetes.io/projected/982647c1-2356-472e-8e59-5000d74f65ab-kube-api-access-6z2jr\") on node \"crc\" DevicePath \"\""
Feb 16 23:06:23 crc kubenswrapper[4865]: I0216 23:06:23.449598 4865 generic.go:334] "Generic (PLEG): container finished" podID="982647c1-2356-472e-8e59-5000d74f65ab" containerID="da481e232578ddbfd4681a6379d6f7209b31e88c3bd330712dd05b47ceaa2a7a" exitCode=0
Feb 16 23:06:23 crc kubenswrapper[4865]: I0216 23:06:23.449631 4865 generic.go:334] "Generic (PLEG): container finished" podID="982647c1-2356-472e-8e59-5000d74f65ab" containerID="fa75700efa0d1f9c8f15a92997714bf3cd1c9c39a74e1537f0470b7f4391f533" exitCode=143
Feb 16 23:06:23 crc kubenswrapper[4865]: I0216 23:06:23.449919 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 16 23:06:23 crc kubenswrapper[4865]: I0216 23:06:23.450406 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"982647c1-2356-472e-8e59-5000d74f65ab","Type":"ContainerDied","Data":"da481e232578ddbfd4681a6379d6f7209b31e88c3bd330712dd05b47ceaa2a7a"}
Feb 16 23:06:23 crc kubenswrapper[4865]: I0216 23:06:23.450436 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"982647c1-2356-472e-8e59-5000d74f65ab","Type":"ContainerDied","Data":"fa75700efa0d1f9c8f15a92997714bf3cd1c9c39a74e1537f0470b7f4391f533"}
Feb 16 23:06:23 crc kubenswrapper[4865]: I0216 23:06:23.450446 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"982647c1-2356-472e-8e59-5000d74f65ab","Type":"ContainerDied","Data":"8629c457b5e8825e8f1f5599702495c04eed425c6e20b756b90bdd5c5c3e13bd"}
Feb 16 23:06:23 crc kubenswrapper[4865]: I0216 23:06:23.450460 4865 scope.go:117] "RemoveContainer" containerID="da481e232578ddbfd4681a6379d6f7209b31e88c3bd330712dd05b47ceaa2a7a"
Feb 16 23:06:23 crc kubenswrapper[4865]: I0216 23:06:23.477258 4865 scope.go:117] "RemoveContainer" containerID="fa75700efa0d1f9c8f15a92997714bf3cd1c9c39a74e1537f0470b7f4391f533"
Feb 16 23:06:23 crc kubenswrapper[4865]: I0216 23:06:23.503772 4865 scope.go:117] "RemoveContainer" containerID="da481e232578ddbfd4681a6379d6f7209b31e88c3bd330712dd05b47ceaa2a7a"
Feb 16 23:06:23 crc kubenswrapper[4865]: E0216 23:06:23.504717 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da481e232578ddbfd4681a6379d6f7209b31e88c3bd330712dd05b47ceaa2a7a\": container with ID starting with da481e232578ddbfd4681a6379d6f7209b31e88c3bd330712dd05b47ceaa2a7a not found: ID does not exist" containerID="da481e232578ddbfd4681a6379d6f7209b31e88c3bd330712dd05b47ceaa2a7a"
Feb 16 23:06:23 crc kubenswrapper[4865]: I0216 23:06:23.504906 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da481e232578ddbfd4681a6379d6f7209b31e88c3bd330712dd05b47ceaa2a7a"} err="failed to get container status \"da481e232578ddbfd4681a6379d6f7209b31e88c3bd330712dd05b47ceaa2a7a\": rpc error: code = NotFound desc = could not find container \"da481e232578ddbfd4681a6379d6f7209b31e88c3bd330712dd05b47ceaa2a7a\": container with ID starting with da481e232578ddbfd4681a6379d6f7209b31e88c3bd330712dd05b47ceaa2a7a not found: ID does not exist"
Feb 16 23:06:23 crc kubenswrapper[4865]: I0216 23:06:23.505085 4865 scope.go:117] "RemoveContainer" containerID="fa75700efa0d1f9c8f15a92997714bf3cd1c9c39a74e1537f0470b7f4391f533"
Feb 16 23:06:23 crc kubenswrapper[4865]: E0216 23:06:23.505686 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa75700efa0d1f9c8f15a92997714bf3cd1c9c39a74e1537f0470b7f4391f533\": container with ID starting with fa75700efa0d1f9c8f15a92997714bf3cd1c9c39a74e1537f0470b7f4391f533 not found: ID does not exist" containerID="fa75700efa0d1f9c8f15a92997714bf3cd1c9c39a74e1537f0470b7f4391f533"
Feb 16 23:06:23 crc kubenswrapper[4865]: I0216 23:06:23.505784 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa75700efa0d1f9c8f15a92997714bf3cd1c9c39a74e1537f0470b7f4391f533"} err="failed to get container status \"fa75700efa0d1f9c8f15a92997714bf3cd1c9c39a74e1537f0470b7f4391f533\": rpc error: code = NotFound desc = could not find container \"fa75700efa0d1f9c8f15a92997714bf3cd1c9c39a74e1537f0470b7f4391f533\": container with ID starting with fa75700efa0d1f9c8f15a92997714bf3cd1c9c39a74e1537f0470b7f4391f533 not found: ID does not exist"
Feb 16 23:06:23 crc kubenswrapper[4865]: I0216 23:06:23.505818 4865 scope.go:117] "RemoveContainer" containerID="da481e232578ddbfd4681a6379d6f7209b31e88c3bd330712dd05b47ceaa2a7a"
Feb 16 23:06:23 crc kubenswrapper[4865]: I0216 23:06:23.506022 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da481e232578ddbfd4681a6379d6f7209b31e88c3bd330712dd05b47ceaa2a7a"} err="failed to get container status \"da481e232578ddbfd4681a6379d6f7209b31e88c3bd330712dd05b47ceaa2a7a\": rpc error: code = NotFound desc = could not find container \"da481e232578ddbfd4681a6379d6f7209b31e88c3bd330712dd05b47ceaa2a7a\": container with ID starting with da481e232578ddbfd4681a6379d6f7209b31e88c3bd330712dd05b47ceaa2a7a not found: ID does not exist"
Feb 16 23:06:23 crc kubenswrapper[4865]: I0216 23:06:23.506039 4865 scope.go:117] "RemoveContainer" containerID="fa75700efa0d1f9c8f15a92997714bf3cd1c9c39a74e1537f0470b7f4391f533"
Feb 16 23:06:23 crc kubenswrapper[4865]: I0216 23:06:23.506825 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa75700efa0d1f9c8f15a92997714bf3cd1c9c39a74e1537f0470b7f4391f533"} err="failed to get container status \"fa75700efa0d1f9c8f15a92997714bf3cd1c9c39a74e1537f0470b7f4391f533\": rpc error: code = NotFound desc = could not find container \"fa75700efa0d1f9c8f15a92997714bf3cd1c9c39a74e1537f0470b7f4391f533\": container with ID starting with fa75700efa0d1f9c8f15a92997714bf3cd1c9c39a74e1537f0470b7f4391f533 not found: ID does not exist"
Feb 16 23:06:23 crc kubenswrapper[4865]: I0216 23:06:23.512119 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 16 23:06:23 crc kubenswrapper[4865]: I0216 23:06:23.523451 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Feb 16 23:06:23 crc kubenswrapper[4865]: I0216 23:06:23.534570 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Feb 16 23:06:23 crc kubenswrapper[4865]: E0216 23:06:23.535587 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="982647c1-2356-472e-8e59-5000d74f65ab" containerName="nova-metadata-metadata"
Feb 16 23:06:23 crc kubenswrapper[4865]: I0216 23:06:23.535690 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="982647c1-2356-472e-8e59-5000d74f65ab" containerName="nova-metadata-metadata"
Feb 16 23:06:23 crc kubenswrapper[4865]: E0216 23:06:23.535825 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="982647c1-2356-472e-8e59-5000d74f65ab" containerName="nova-metadata-log"
Feb 16 23:06:23 crc kubenswrapper[4865]: I0216 23:06:23.535908 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="982647c1-2356-472e-8e59-5000d74f65ab" containerName="nova-metadata-log"
Feb 16 23:06:23 crc kubenswrapper[4865]: I0216 23:06:23.536215 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="982647c1-2356-472e-8e59-5000d74f65ab" containerName="nova-metadata-log"
Feb 16 23:06:23 crc kubenswrapper[4865]: I0216 23:06:23.536338 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="982647c1-2356-472e-8e59-5000d74f65ab" containerName="nova-metadata-metadata"
Feb 16 23:06:23 crc kubenswrapper[4865]: I0216 23:06:23.537773 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 16 23:06:23 crc kubenswrapper[4865]: I0216 23:06:23.542163 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Feb 16 23:06:23 crc kubenswrapper[4865]: I0216 23:06:23.542653 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Feb 16 23:06:23 crc kubenswrapper[4865]: I0216 23:06:23.543653 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 16 23:06:23 crc kubenswrapper[4865]: I0216 23:06:23.655563 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fdde880-3232-48b3-90f8-f47467f5e07c-config-data\") pod \"nova-metadata-0\" (UID: \"0fdde880-3232-48b3-90f8-f47467f5e07c\") " pod="openstack/nova-metadata-0"
Feb 16 23:06:23 crc kubenswrapper[4865]: I0216 23:06:23.655973 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0fdde880-3232-48b3-90f8-f47467f5e07c-logs\") pod \"nova-metadata-0\" (UID: \"0fdde880-3232-48b3-90f8-f47467f5e07c\") " pod="openstack/nova-metadata-0"
Feb 16 23:06:23 crc kubenswrapper[4865]: I0216 23:06:23.656707 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fdde880-3232-48b3-90f8-f47467f5e07c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0fdde880-3232-48b3-90f8-f47467f5e07c\") " pod="openstack/nova-metadata-0"
Feb 16 23:06:23 crc kubenswrapper[4865]: I0216 23:06:23.658785 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0fdde880-3232-48b3-90f8-f47467f5e07c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0fdde880-3232-48b3-90f8-f47467f5e07c\") " pod="openstack/nova-metadata-0"
Feb 16 23:06:23 crc kubenswrapper[4865]: I0216 23:06:23.658824 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lg5jq\" (UniqueName: \"kubernetes.io/projected/0fdde880-3232-48b3-90f8-f47467f5e07c-kube-api-access-lg5jq\") pod \"nova-metadata-0\" (UID: \"0fdde880-3232-48b3-90f8-f47467f5e07c\") " pod="openstack/nova-metadata-0"
Feb 16 23:06:23 crc kubenswrapper[4865]: I0216 23:06:23.760150 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0fdde880-3232-48b3-90f8-f47467f5e07c-logs\") pod \"nova-metadata-0\" (UID: \"0fdde880-3232-48b3-90f8-f47467f5e07c\") " pod="openstack/nova-metadata-0"
Feb 16 23:06:23 crc kubenswrapper[4865]: I0216 23:06:23.760242 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fdde880-3232-48b3-90f8-f47467f5e07c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0fdde880-3232-48b3-90f8-f47467f5e07c\") " pod="openstack/nova-metadata-0"
Feb 16 23:06:23 crc kubenswrapper[4865]: I0216 23:06:23.760345 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0fdde880-3232-48b3-90f8-f47467f5e07c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0fdde880-3232-48b3-90f8-f47467f5e07c\") " pod="openstack/nova-metadata-0"
Feb 16 23:06:23 crc kubenswrapper[4865]: I0216 23:06:23.760363 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lg5jq\" (UniqueName: \"kubernetes.io/projected/0fdde880-3232-48b3-90f8-f47467f5e07c-kube-api-access-lg5jq\") pod \"nova-metadata-0\" (UID: \"0fdde880-3232-48b3-90f8-f47467f5e07c\") " pod="openstack/nova-metadata-0"
Feb 16 23:06:23 crc kubenswrapper[4865]: I0216 23:06:23.760400 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fdde880-3232-48b3-90f8-f47467f5e07c-config-data\") pod \"nova-metadata-0\" (UID: \"0fdde880-3232-48b3-90f8-f47467f5e07c\") " pod="openstack/nova-metadata-0"
Feb 16 23:06:23 crc kubenswrapper[4865]: I0216 23:06:23.761026 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0fdde880-3232-48b3-90f8-f47467f5e07c-logs\") pod \"nova-metadata-0\" (UID: \"0fdde880-3232-48b3-90f8-f47467f5e07c\") " pod="openstack/nova-metadata-0"
Feb 16 23:06:23 crc kubenswrapper[4865]: I0216 23:06:23.765688 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fdde880-3232-48b3-90f8-f47467f5e07c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0fdde880-3232-48b3-90f8-f47467f5e07c\") " pod="openstack/nova-metadata-0"
Feb 16 23:06:23 crc kubenswrapper[4865]: I0216 23:06:23.766825 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fdde880-3232-48b3-90f8-f47467f5e07c-config-data\") pod \"nova-metadata-0\" (UID: \"0fdde880-3232-48b3-90f8-f47467f5e07c\") " pod="openstack/nova-metadata-0"
Feb 16 23:06:23 crc kubenswrapper[4865]: I0216 23:06:23.770624 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0fdde880-3232-48b3-90f8-f47467f5e07c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0fdde880-3232-48b3-90f8-f47467f5e07c\") " pod="openstack/nova-metadata-0"
Feb 16 23:06:23 crc kubenswrapper[4865]: I0216 23:06:23.780989 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lg5jq\" (UniqueName: \"kubernetes.io/projected/0fdde880-3232-48b3-90f8-f47467f5e07c-kube-api-access-lg5jq\") pod \"nova-metadata-0\" (UID: \"0fdde880-3232-48b3-90f8-f47467f5e07c\") " pod="openstack/nova-metadata-0"
Feb 16 23:06:23 crc kubenswrapper[4865]: I0216 23:06:23.906446 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 16 23:06:24 crc kubenswrapper[4865]: I0216 23:06:24.387688 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 16 23:06:24 crc kubenswrapper[4865]: W0216 23:06:24.391975 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0fdde880_3232_48b3_90f8_f47467f5e07c.slice/crio-19f9eec0cda169804d99c240c877a4c54204b22490f8f7437ddd64934d7399aa WatchSource:0}: Error finding container 19f9eec0cda169804d99c240c877a4c54204b22490f8f7437ddd64934d7399aa: Status 404 returned error can't find the container with id 19f9eec0cda169804d99c240c877a4c54204b22490f8f7437ddd64934d7399aa
Feb 16 23:06:24 crc kubenswrapper[4865]: I0216 23:06:24.456093 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="982647c1-2356-472e-8e59-5000d74f65ab" path="/var/lib/kubelet/pods/982647c1-2356-472e-8e59-5000d74f65ab/volumes"
Feb 16 23:06:24 crc kubenswrapper[4865]: I0216 23:06:24.467014 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0fdde880-3232-48b3-90f8-f47467f5e07c","Type":"ContainerStarted","Data":"19f9eec0cda169804d99c240c877a4c54204b22490f8f7437ddd64934d7399aa"}
Feb 16 23:06:25 crc kubenswrapper[4865]: I0216 23:06:25.481415 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0fdde880-3232-48b3-90f8-f47467f5e07c","Type":"ContainerStarted","Data":"5cb796e0c7ec3b62800e7ab4ae7f8ea69c90cb39ec9f42e7efa30d57876754f7"}
Feb 16 23:06:25 crc kubenswrapper[4865]: I0216 23:06:25.481856 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0fdde880-3232-48b3-90f8-f47467f5e07c","Type":"ContainerStarted","Data":"65d5adddc217eed7c3967bdcbccae420c415043d720f6227baba916dd3835321"}
Feb 16 23:06:25 crc kubenswrapper[4865]: I0216 23:06:25.541727 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.541688872 podStartE2EDuration="2.541688872s" podCreationTimestamp="2026-02-16 23:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 23:06:25.50603599 +0000 UTC m=+1225.829742961" watchObservedRunningTime="2026-02-16 23:06:25.541688872 +0000 UTC m=+1225.865395873"
Feb 16 23:06:27 crc kubenswrapper[4865]: I0216 23:06:27.515602 4865 generic.go:334] "Generic (PLEG): container finished" podID="e5a7bbbd-389f-49cb-b8b4-4a54280d034a" containerID="cc8be8ea2f4e8243c48e16f613b1c44ac6bb1df9db61237c737caeabf79e098c" exitCode=0
Feb 16 23:06:27 crc kubenswrapper[4865]: I0216 23:06:27.515721 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-jzjtf" event={"ID":"e5a7bbbd-389f-49cb-b8b4-4a54280d034a","Type":"ContainerDied","Data":"cc8be8ea2f4e8243c48e16f613b1c44ac6bb1df9db61237c737caeabf79e098c"}
Feb 16 23:06:27 crc kubenswrapper[4865]: I0216 23:06:27.519949 4865 generic.go:334] "Generic (PLEG): container finished" podID="ee0920ef-18c3-4e01-b206-08b31472078a" containerID="a95aea6349f0ab21688e30bd9d52c833a8199feaad2386a6f4b0f78c830877a6" exitCode=0
Feb 16 23:06:27 crc kubenswrapper[4865]: I0216 23:06:27.520032 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-7lw7w" event={"ID":"ee0920ef-18c3-4e01-b206-08b31472078a","Type":"ContainerDied","Data":"a95aea6349f0ab21688e30bd9d52c833a8199feaad2386a6f4b0f78c830877a6"}
Feb 16 23:06:27 crc kubenswrapper[4865]: I0216 23:06:27.870866 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Feb 16 23:06:27 crc kubenswrapper[4865]: I0216 23:06:27.921662 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Feb 16 23:06:28 crc kubenswrapper[4865]: I0216 23:06:28.072607 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 16 23:06:28 crc kubenswrapper[4865]: I0216 23:06:28.072699 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 16 23:06:28 crc kubenswrapper[4865]: I0216 23:06:28.216665 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-845d6d6f59-fztgd"
Feb 16 23:06:28 crc kubenswrapper[4865]: I0216 23:06:28.298215 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-jqvgs"]
Feb 16 23:06:28 crc kubenswrapper[4865]: I0216 23:06:28.298537 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5784cf869f-jqvgs" podUID="89374364-2643-4580-9c44-14e3b944111f" containerName="dnsmasq-dns" containerID="cri-o://7644709298f502212c13dd7d4f322537fee45453d0039691bab02cc33211e571" gracePeriod=10
Feb 16 23:06:28 crc kubenswrapper[4865]: I0216 23:06:28.466976 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Feb 16 23:06:28 crc kubenswrapper[4865]: I0216 23:06:28.543160 4865 generic.go:334] "Generic (PLEG): container finished" podID="89374364-2643-4580-9c44-14e3b944111f" containerID="7644709298f502212c13dd7d4f322537fee45453d0039691bab02cc33211e571" exitCode=0
Feb 16 23:06:28 crc kubenswrapper[4865]: I0216 23:06:28.543391 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-jqvgs" event={"ID":"89374364-2643-4580-9c44-14e3b944111f","Type":"ContainerDied","Data":"7644709298f502212c13dd7d4f322537fee45453d0039691bab02cc33211e571"}
Feb 16 23:06:28 crc kubenswrapper[4865]: I0216 23:06:28.579968 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Feb 16 23:06:28 crc kubenswrapper[4865]: I0216 23:06:28.868873 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-jqvgs"
Feb 16 23:06:28 crc kubenswrapper[4865]: I0216 23:06:28.906875 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 16 23:06:28 crc kubenswrapper[4865]: I0216 23:06:28.906927 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 16 23:06:28 crc kubenswrapper[4865]: I0216 23:06:28.986536 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/89374364-2643-4580-9c44-14e3b944111f-ovsdbserver-nb\") pod \"89374364-2643-4580-9c44-14e3b944111f\" (UID: \"89374364-2643-4580-9c44-14e3b944111f\") "
Feb 16 23:06:28 crc kubenswrapper[4865]: I0216 23:06:28.986585 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89374364-2643-4580-9c44-14e3b944111f-config\") pod \"89374364-2643-4580-9c44-14e3b944111f\" (UID: \"89374364-2643-4580-9c44-14e3b944111f\") "
Feb 16 23:06:28 crc kubenswrapper[4865]: I0216 23:06:28.986663 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/89374364-2643-4580-9c44-14e3b944111f-ovsdbserver-sb\") pod \"89374364-2643-4580-9c44-14e3b944111f\" (UID: \"89374364-2643-4580-9c44-14e3b944111f\") "
Feb 16 23:06:28 crc kubenswrapper[4865]: I0216 23:06:28.986694 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/89374364-2643-4580-9c44-14e3b944111f-dns-swift-storage-0\") pod 
\"89374364-2643-4580-9c44-14e3b944111f\" (UID: \"89374364-2643-4580-9c44-14e3b944111f\") " Feb 16 23:06:28 crc kubenswrapper[4865]: I0216 23:06:28.986849 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/89374364-2643-4580-9c44-14e3b944111f-dns-svc\") pod \"89374364-2643-4580-9c44-14e3b944111f\" (UID: \"89374364-2643-4580-9c44-14e3b944111f\") " Feb 16 23:06:28 crc kubenswrapper[4865]: I0216 23:06:28.986881 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrdz5\" (UniqueName: \"kubernetes.io/projected/89374364-2643-4580-9c44-14e3b944111f-kube-api-access-zrdz5\") pod \"89374364-2643-4580-9c44-14e3b944111f\" (UID: \"89374364-2643-4580-9c44-14e3b944111f\") " Feb 16 23:06:28 crc kubenswrapper[4865]: I0216 23:06:28.994671 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89374364-2643-4580-9c44-14e3b944111f-kube-api-access-zrdz5" (OuterVolumeSpecName: "kube-api-access-zrdz5") pod "89374364-2643-4580-9c44-14e3b944111f" (UID: "89374364-2643-4580-9c44-14e3b944111f"). InnerVolumeSpecName "kube-api-access-zrdz5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:06:29 crc kubenswrapper[4865]: I0216 23:06:29.082080 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89374364-2643-4580-9c44-14e3b944111f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "89374364-2643-4580-9c44-14e3b944111f" (UID: "89374364-2643-4580-9c44-14e3b944111f"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:06:29 crc kubenswrapper[4865]: I0216 23:06:29.085759 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89374364-2643-4580-9c44-14e3b944111f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "89374364-2643-4580-9c44-14e3b944111f" (UID: "89374364-2643-4580-9c44-14e3b944111f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:06:29 crc kubenswrapper[4865]: I0216 23:06:29.090034 4865 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/89374364-2643-4580-9c44-14e3b944111f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 23:06:29 crc kubenswrapper[4865]: I0216 23:06:29.090066 4865 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/89374364-2643-4580-9c44-14e3b944111f-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 23:06:29 crc kubenswrapper[4865]: I0216 23:06:29.090075 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrdz5\" (UniqueName: \"kubernetes.io/projected/89374364-2643-4580-9c44-14e3b944111f-kube-api-access-zrdz5\") on node \"crc\" DevicePath \"\"" Feb 16 23:06:29 crc kubenswrapper[4865]: I0216 23:06:29.099091 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89374364-2643-4580-9c44-14e3b944111f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "89374364-2643-4580-9c44-14e3b944111f" (UID: "89374364-2643-4580-9c44-14e3b944111f"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:06:29 crc kubenswrapper[4865]: I0216 23:06:29.105685 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89374364-2643-4580-9c44-14e3b944111f-config" (OuterVolumeSpecName: "config") pod "89374364-2643-4580-9c44-14e3b944111f" (UID: "89374364-2643-4580-9c44-14e3b944111f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:06:29 crc kubenswrapper[4865]: I0216 23:06:29.120397 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1286f79c-0f07-4d5f-ae60-f54c1d7d725f" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.188:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 23:06:29 crc kubenswrapper[4865]: I0216 23:06:29.121255 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-jzjtf" Feb 16 23:06:29 crc kubenswrapper[4865]: I0216 23:06:29.122863 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-7lw7w" Feb 16 23:06:29 crc kubenswrapper[4865]: I0216 23:06:29.133086 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89374364-2643-4580-9c44-14e3b944111f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "89374364-2643-4580-9c44-14e3b944111f" (UID: "89374364-2643-4580-9c44-14e3b944111f"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:06:29 crc kubenswrapper[4865]: I0216 23:06:29.161614 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1286f79c-0f07-4d5f-ae60-f54c1d7d725f" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.188:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 23:06:29 crc kubenswrapper[4865]: I0216 23:06:29.192229 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee0920ef-18c3-4e01-b206-08b31472078a-config-data\") pod \"ee0920ef-18c3-4e01-b206-08b31472078a\" (UID: \"ee0920ef-18c3-4e01-b206-08b31472078a\") " Feb 16 23:06:29 crc kubenswrapper[4865]: I0216 23:06:29.192310 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee0920ef-18c3-4e01-b206-08b31472078a-combined-ca-bundle\") pod \"ee0920ef-18c3-4e01-b206-08b31472078a\" (UID: \"ee0920ef-18c3-4e01-b206-08b31472078a\") " Feb 16 23:06:29 crc kubenswrapper[4865]: I0216 23:06:29.192351 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5a7bbbd-389f-49cb-b8b4-4a54280d034a-config-data\") pod \"e5a7bbbd-389f-49cb-b8b4-4a54280d034a\" (UID: \"e5a7bbbd-389f-49cb-b8b4-4a54280d034a\") " Feb 16 23:06:29 crc kubenswrapper[4865]: I0216 23:06:29.192376 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdsjq\" (UniqueName: \"kubernetes.io/projected/ee0920ef-18c3-4e01-b206-08b31472078a-kube-api-access-wdsjq\") pod \"ee0920ef-18c3-4e01-b206-08b31472078a\" (UID: \"ee0920ef-18c3-4e01-b206-08b31472078a\") " Feb 16 23:06:29 crc kubenswrapper[4865]: I0216 23:06:29.192442 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/ee0920ef-18c3-4e01-b206-08b31472078a-scripts\") pod \"ee0920ef-18c3-4e01-b206-08b31472078a\" (UID: \"ee0920ef-18c3-4e01-b206-08b31472078a\") " Feb 16 23:06:29 crc kubenswrapper[4865]: I0216 23:06:29.192501 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfhk4\" (UniqueName: \"kubernetes.io/projected/e5a7bbbd-389f-49cb-b8b4-4a54280d034a-kube-api-access-pfhk4\") pod \"e5a7bbbd-389f-49cb-b8b4-4a54280d034a\" (UID: \"e5a7bbbd-389f-49cb-b8b4-4a54280d034a\") " Feb 16 23:06:29 crc kubenswrapper[4865]: I0216 23:06:29.192557 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5a7bbbd-389f-49cb-b8b4-4a54280d034a-scripts\") pod \"e5a7bbbd-389f-49cb-b8b4-4a54280d034a\" (UID: \"e5a7bbbd-389f-49cb-b8b4-4a54280d034a\") " Feb 16 23:06:29 crc kubenswrapper[4865]: I0216 23:06:29.192579 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5a7bbbd-389f-49cb-b8b4-4a54280d034a-combined-ca-bundle\") pod \"e5a7bbbd-389f-49cb-b8b4-4a54280d034a\" (UID: \"e5a7bbbd-389f-49cb-b8b4-4a54280d034a\") " Feb 16 23:06:29 crc kubenswrapper[4865]: I0216 23:06:29.193028 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89374364-2643-4580-9c44-14e3b944111f-config\") on node \"crc\" DevicePath \"\"" Feb 16 23:06:29 crc kubenswrapper[4865]: I0216 23:06:29.193046 4865 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/89374364-2643-4580-9c44-14e3b944111f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 23:06:29 crc kubenswrapper[4865]: I0216 23:06:29.193059 4865 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/89374364-2643-4580-9c44-14e3b944111f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 23:06:29 crc kubenswrapper[4865]: I0216 23:06:29.197385 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5a7bbbd-389f-49cb-b8b4-4a54280d034a-scripts" (OuterVolumeSpecName: "scripts") pod "e5a7bbbd-389f-49cb-b8b4-4a54280d034a" (UID: "e5a7bbbd-389f-49cb-b8b4-4a54280d034a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:06:29 crc kubenswrapper[4865]: I0216 23:06:29.197403 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee0920ef-18c3-4e01-b206-08b31472078a-scripts" (OuterVolumeSpecName: "scripts") pod "ee0920ef-18c3-4e01-b206-08b31472078a" (UID: "ee0920ef-18c3-4e01-b206-08b31472078a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:06:29 crc kubenswrapper[4865]: I0216 23:06:29.203846 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee0920ef-18c3-4e01-b206-08b31472078a-kube-api-access-wdsjq" (OuterVolumeSpecName: "kube-api-access-wdsjq") pod "ee0920ef-18c3-4e01-b206-08b31472078a" (UID: "ee0920ef-18c3-4e01-b206-08b31472078a"). InnerVolumeSpecName "kube-api-access-wdsjq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:06:29 crc kubenswrapper[4865]: I0216 23:06:29.206903 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5a7bbbd-389f-49cb-b8b4-4a54280d034a-kube-api-access-pfhk4" (OuterVolumeSpecName: "kube-api-access-pfhk4") pod "e5a7bbbd-389f-49cb-b8b4-4a54280d034a" (UID: "e5a7bbbd-389f-49cb-b8b4-4a54280d034a"). InnerVolumeSpecName "kube-api-access-pfhk4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:06:29 crc kubenswrapper[4865]: I0216 23:06:29.229196 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee0920ef-18c3-4e01-b206-08b31472078a-config-data" (OuterVolumeSpecName: "config-data") pod "ee0920ef-18c3-4e01-b206-08b31472078a" (UID: "ee0920ef-18c3-4e01-b206-08b31472078a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:06:29 crc kubenswrapper[4865]: I0216 23:06:29.235027 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5a7bbbd-389f-49cb-b8b4-4a54280d034a-config-data" (OuterVolumeSpecName: "config-data") pod "e5a7bbbd-389f-49cb-b8b4-4a54280d034a" (UID: "e5a7bbbd-389f-49cb-b8b4-4a54280d034a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:06:29 crc kubenswrapper[4865]: I0216 23:06:29.247466 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5a7bbbd-389f-49cb-b8b4-4a54280d034a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e5a7bbbd-389f-49cb-b8b4-4a54280d034a" (UID: "e5a7bbbd-389f-49cb-b8b4-4a54280d034a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:06:29 crc kubenswrapper[4865]: I0216 23:06:29.275678 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee0920ef-18c3-4e01-b206-08b31472078a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ee0920ef-18c3-4e01-b206-08b31472078a" (UID: "ee0920ef-18c3-4e01-b206-08b31472078a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:06:29 crc kubenswrapper[4865]: I0216 23:06:29.295633 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5a7bbbd-389f-49cb-b8b4-4a54280d034a-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 23:06:29 crc kubenswrapper[4865]: I0216 23:06:29.295685 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5a7bbbd-389f-49cb-b8b4-4a54280d034a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 23:06:29 crc kubenswrapper[4865]: I0216 23:06:29.295701 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee0920ef-18c3-4e01-b206-08b31472078a-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 23:06:29 crc kubenswrapper[4865]: I0216 23:06:29.295714 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee0920ef-18c3-4e01-b206-08b31472078a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 23:06:29 crc kubenswrapper[4865]: I0216 23:06:29.295728 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5a7bbbd-389f-49cb-b8b4-4a54280d034a-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 23:06:29 crc kubenswrapper[4865]: I0216 23:06:29.295740 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wdsjq\" (UniqueName: \"kubernetes.io/projected/ee0920ef-18c3-4e01-b206-08b31472078a-kube-api-access-wdsjq\") on node \"crc\" DevicePath \"\"" Feb 16 23:06:29 crc kubenswrapper[4865]: I0216 23:06:29.295753 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee0920ef-18c3-4e01-b206-08b31472078a-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 23:06:29 crc kubenswrapper[4865]: I0216 23:06:29.295764 4865 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-pfhk4\" (UniqueName: \"kubernetes.io/projected/e5a7bbbd-389f-49cb-b8b4-4a54280d034a-kube-api-access-pfhk4\") on node \"crc\" DevicePath \"\"" Feb 16 23:06:29 crc kubenswrapper[4865]: I0216 23:06:29.554681 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-jzjtf" Feb 16 23:06:29 crc kubenswrapper[4865]: I0216 23:06:29.555205 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-jzjtf" event={"ID":"e5a7bbbd-389f-49cb-b8b4-4a54280d034a","Type":"ContainerDied","Data":"a2853d6bd6a2d72f04145cff394266f61937dda1e4a4489e621f07944349b9de"} Feb 16 23:06:29 crc kubenswrapper[4865]: I0216 23:06:29.555247 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2853d6bd6a2d72f04145cff394266f61937dda1e4a4489e621f07944349b9de" Feb 16 23:06:29 crc kubenswrapper[4865]: I0216 23:06:29.571476 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-7lw7w" Feb 16 23:06:29 crc kubenswrapper[4865]: I0216 23:06:29.572203 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-7lw7w" event={"ID":"ee0920ef-18c3-4e01-b206-08b31472078a","Type":"ContainerDied","Data":"5708b0fb196e8422ad31fa381cf2c7fa1f8bbea4ef409b74b1a2c9d273c7d18b"} Feb 16 23:06:29 crc kubenswrapper[4865]: I0216 23:06:29.572309 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5708b0fb196e8422ad31fa381cf2c7fa1f8bbea4ef409b74b1a2c9d273c7d18b" Feb 16 23:06:29 crc kubenswrapper[4865]: I0216 23:06:29.577028 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-jqvgs" Feb 16 23:06:29 crc kubenswrapper[4865]: I0216 23:06:29.577338 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-jqvgs" event={"ID":"89374364-2643-4580-9c44-14e3b944111f","Type":"ContainerDied","Data":"d2d362c6de8e545253279d35aa8ed383ff05d10411dcdd59880a7c2430489b1b"} Feb 16 23:06:29 crc kubenswrapper[4865]: I0216 23:06:29.577423 4865 scope.go:117] "RemoveContainer" containerID="7644709298f502212c13dd7d4f322537fee45453d0039691bab02cc33211e571" Feb 16 23:06:29 crc kubenswrapper[4865]: I0216 23:06:29.618980 4865 scope.go:117] "RemoveContainer" containerID="ca6934ff2e0a441de9f11431397499e0a4f499b337ebf3d31a4170a18ab77512" Feb 16 23:06:29 crc kubenswrapper[4865]: I0216 23:06:29.648372 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-jqvgs"] Feb 16 23:06:29 crc kubenswrapper[4865]: I0216 23:06:29.670331 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-jqvgs"] Feb 16 23:06:29 crc kubenswrapper[4865]: I0216 23:06:29.686441 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 16 23:06:29 crc kubenswrapper[4865]: E0216 23:06:29.686902 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee0920ef-18c3-4e01-b206-08b31472078a" containerName="nova-cell1-conductor-db-sync" Feb 16 23:06:29 crc kubenswrapper[4865]: I0216 23:06:29.686920 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee0920ef-18c3-4e01-b206-08b31472078a" containerName="nova-cell1-conductor-db-sync" Feb 16 23:06:29 crc kubenswrapper[4865]: E0216 23:06:29.686936 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5a7bbbd-389f-49cb-b8b4-4a54280d034a" containerName="nova-manage" Feb 16 23:06:29 crc kubenswrapper[4865]: I0216 23:06:29.686942 4865 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e5a7bbbd-389f-49cb-b8b4-4a54280d034a" containerName="nova-manage" Feb 16 23:06:29 crc kubenswrapper[4865]: E0216 23:06:29.686955 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89374364-2643-4580-9c44-14e3b944111f" containerName="init" Feb 16 23:06:29 crc kubenswrapper[4865]: I0216 23:06:29.686961 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="89374364-2643-4580-9c44-14e3b944111f" containerName="init" Feb 16 23:06:29 crc kubenswrapper[4865]: E0216 23:06:29.686995 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89374364-2643-4580-9c44-14e3b944111f" containerName="dnsmasq-dns" Feb 16 23:06:29 crc kubenswrapper[4865]: I0216 23:06:29.687000 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="89374364-2643-4580-9c44-14e3b944111f" containerName="dnsmasq-dns" Feb 16 23:06:29 crc kubenswrapper[4865]: I0216 23:06:29.687181 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5a7bbbd-389f-49cb-b8b4-4a54280d034a" containerName="nova-manage" Feb 16 23:06:29 crc kubenswrapper[4865]: I0216 23:06:29.687201 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee0920ef-18c3-4e01-b206-08b31472078a" containerName="nova-cell1-conductor-db-sync" Feb 16 23:06:29 crc kubenswrapper[4865]: I0216 23:06:29.687222 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="89374364-2643-4580-9c44-14e3b944111f" containerName="dnsmasq-dns" Feb 16 23:06:29 crc kubenswrapper[4865]: I0216 23:06:29.687929 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 16 23:06:29 crc kubenswrapper[4865]: I0216 23:06:29.690528 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 16 23:06:29 crc kubenswrapper[4865]: I0216 23:06:29.694689 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 16 23:06:29 crc kubenswrapper[4865]: I0216 23:06:29.737863 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 23:06:29 crc kubenswrapper[4865]: I0216 23:06:29.738084 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1286f79c-0f07-4d5f-ae60-f54c1d7d725f" containerName="nova-api-log" containerID="cri-o://16ddb39c3c624745bf630fa558a29b89888839e59af9dc3e1c169e83f4ffe78c" gracePeriod=30 Feb 16 23:06:29 crc kubenswrapper[4865]: I0216 23:06:29.738168 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1286f79c-0f07-4d5f-ae60-f54c1d7d725f" containerName="nova-api-api" containerID="cri-o://ca647d0608d240227f52c9a4742b7f9d926ebde3a855c8e6e5e26ab61a09ef3e" gracePeriod=30 Feb 16 23:06:29 crc kubenswrapper[4865]: I0216 23:06:29.806425 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bbtp\" (UniqueName: \"kubernetes.io/projected/e94e6b7d-55e6-4b25-9663-6cdc0440681f-kube-api-access-9bbtp\") pod \"nova-cell1-conductor-0\" (UID: \"e94e6b7d-55e6-4b25-9663-6cdc0440681f\") " pod="openstack/nova-cell1-conductor-0" Feb 16 23:06:29 crc kubenswrapper[4865]: I0216 23:06:29.806499 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e94e6b7d-55e6-4b25-9663-6cdc0440681f-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"e94e6b7d-55e6-4b25-9663-6cdc0440681f\") 
" pod="openstack/nova-cell1-conductor-0" Feb 16 23:06:29 crc kubenswrapper[4865]: I0216 23:06:29.806526 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e94e6b7d-55e6-4b25-9663-6cdc0440681f-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"e94e6b7d-55e6-4b25-9663-6cdc0440681f\") " pod="openstack/nova-cell1-conductor-0" Feb 16 23:06:29 crc kubenswrapper[4865]: I0216 23:06:29.807371 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 23:06:29 crc kubenswrapper[4865]: I0216 23:06:29.830532 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 23:06:29 crc kubenswrapper[4865]: I0216 23:06:29.830765 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0fdde880-3232-48b3-90f8-f47467f5e07c" containerName="nova-metadata-log" containerID="cri-o://65d5adddc217eed7c3967bdcbccae420c415043d720f6227baba916dd3835321" gracePeriod=30 Feb 16 23:06:29 crc kubenswrapper[4865]: I0216 23:06:29.830845 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0fdde880-3232-48b3-90f8-f47467f5e07c" containerName="nova-metadata-metadata" containerID="cri-o://5cb796e0c7ec3b62800e7ab4ae7f8ea69c90cb39ec9f42e7efa30d57876754f7" gracePeriod=30 Feb 16 23:06:29 crc kubenswrapper[4865]: I0216 23:06:29.908370 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bbtp\" (UniqueName: \"kubernetes.io/projected/e94e6b7d-55e6-4b25-9663-6cdc0440681f-kube-api-access-9bbtp\") pod \"nova-cell1-conductor-0\" (UID: \"e94e6b7d-55e6-4b25-9663-6cdc0440681f\") " pod="openstack/nova-cell1-conductor-0" Feb 16 23:06:29 crc kubenswrapper[4865]: I0216 23:06:29.908426 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/e94e6b7d-55e6-4b25-9663-6cdc0440681f-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"e94e6b7d-55e6-4b25-9663-6cdc0440681f\") " pod="openstack/nova-cell1-conductor-0" Feb 16 23:06:29 crc kubenswrapper[4865]: I0216 23:06:29.908447 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e94e6b7d-55e6-4b25-9663-6cdc0440681f-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"e94e6b7d-55e6-4b25-9663-6cdc0440681f\") " pod="openstack/nova-cell1-conductor-0" Feb 16 23:06:29 crc kubenswrapper[4865]: I0216 23:06:29.912932 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e94e6b7d-55e6-4b25-9663-6cdc0440681f-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"e94e6b7d-55e6-4b25-9663-6cdc0440681f\") " pod="openstack/nova-cell1-conductor-0" Feb 16 23:06:29 crc kubenswrapper[4865]: I0216 23:06:29.925054 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bbtp\" (UniqueName: \"kubernetes.io/projected/e94e6b7d-55e6-4b25-9663-6cdc0440681f-kube-api-access-9bbtp\") pod \"nova-cell1-conductor-0\" (UID: \"e94e6b7d-55e6-4b25-9663-6cdc0440681f\") " pod="openstack/nova-cell1-conductor-0" Feb 16 23:06:29 crc kubenswrapper[4865]: I0216 23:06:29.925383 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e94e6b7d-55e6-4b25-9663-6cdc0440681f-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"e94e6b7d-55e6-4b25-9663-6cdc0440681f\") " pod="openstack/nova-cell1-conductor-0" Feb 16 23:06:30 crc kubenswrapper[4865]: I0216 23:06:30.010757 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 16 23:06:30 crc kubenswrapper[4865]: I0216 23:06:30.366950 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 23:06:30 crc kubenswrapper[4865]: I0216 23:06:30.442081 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89374364-2643-4580-9c44-14e3b944111f" path="/var/lib/kubelet/pods/89374364-2643-4580-9c44-14e3b944111f/volumes" Feb 16 23:06:30 crc kubenswrapper[4865]: I0216 23:06:30.505500 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 16 23:06:30 crc kubenswrapper[4865]: I0216 23:06:30.525328 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0fdde880-3232-48b3-90f8-f47467f5e07c-nova-metadata-tls-certs\") pod \"0fdde880-3232-48b3-90f8-f47467f5e07c\" (UID: \"0fdde880-3232-48b3-90f8-f47467f5e07c\") " Feb 16 23:06:30 crc kubenswrapper[4865]: I0216 23:06:30.525393 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fdde880-3232-48b3-90f8-f47467f5e07c-combined-ca-bundle\") pod \"0fdde880-3232-48b3-90f8-f47467f5e07c\" (UID: \"0fdde880-3232-48b3-90f8-f47467f5e07c\") " Feb 16 23:06:30 crc kubenswrapper[4865]: I0216 23:06:30.525562 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0fdde880-3232-48b3-90f8-f47467f5e07c-logs\") pod \"0fdde880-3232-48b3-90f8-f47467f5e07c\" (UID: \"0fdde880-3232-48b3-90f8-f47467f5e07c\") " Feb 16 23:06:30 crc kubenswrapper[4865]: I0216 23:06:30.525611 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lg5jq\" (UniqueName: \"kubernetes.io/projected/0fdde880-3232-48b3-90f8-f47467f5e07c-kube-api-access-lg5jq\") pod \"0fdde880-3232-48b3-90f8-f47467f5e07c\" (UID: \"0fdde880-3232-48b3-90f8-f47467f5e07c\") " Feb 16 23:06:30 crc kubenswrapper[4865]: I0216 23:06:30.525651 4865 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fdde880-3232-48b3-90f8-f47467f5e07c-config-data\") pod \"0fdde880-3232-48b3-90f8-f47467f5e07c\" (UID: \"0fdde880-3232-48b3-90f8-f47467f5e07c\") " Feb 16 23:06:30 crc kubenswrapper[4865]: I0216 23:06:30.526025 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0fdde880-3232-48b3-90f8-f47467f5e07c-logs" (OuterVolumeSpecName: "logs") pod "0fdde880-3232-48b3-90f8-f47467f5e07c" (UID: "0fdde880-3232-48b3-90f8-f47467f5e07c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:06:30 crc kubenswrapper[4865]: I0216 23:06:30.526853 4865 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0fdde880-3232-48b3-90f8-f47467f5e07c-logs\") on node \"crc\" DevicePath \"\"" Feb 16 23:06:30 crc kubenswrapper[4865]: I0216 23:06:30.537429 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fdde880-3232-48b3-90f8-f47467f5e07c-kube-api-access-lg5jq" (OuterVolumeSpecName: "kube-api-access-lg5jq") pod "0fdde880-3232-48b3-90f8-f47467f5e07c" (UID: "0fdde880-3232-48b3-90f8-f47467f5e07c"). InnerVolumeSpecName "kube-api-access-lg5jq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:06:30 crc kubenswrapper[4865]: I0216 23:06:30.566433 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fdde880-3232-48b3-90f8-f47467f5e07c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0fdde880-3232-48b3-90f8-f47467f5e07c" (UID: "0fdde880-3232-48b3-90f8-f47467f5e07c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:06:30 crc kubenswrapper[4865]: I0216 23:06:30.572250 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fdde880-3232-48b3-90f8-f47467f5e07c-config-data" (OuterVolumeSpecName: "config-data") pod "0fdde880-3232-48b3-90f8-f47467f5e07c" (UID: "0fdde880-3232-48b3-90f8-f47467f5e07c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:06:30 crc kubenswrapper[4865]: I0216 23:06:30.589400 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fdde880-3232-48b3-90f8-f47467f5e07c-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "0fdde880-3232-48b3-90f8-f47467f5e07c" (UID: "0fdde880-3232-48b3-90f8-f47467f5e07c"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:06:30 crc kubenswrapper[4865]: I0216 23:06:30.594947 4865 generic.go:334] "Generic (PLEG): container finished" podID="1286f79c-0f07-4d5f-ae60-f54c1d7d725f" containerID="16ddb39c3c624745bf630fa558a29b89888839e59af9dc3e1c169e83f4ffe78c" exitCode=143 Feb 16 23:06:30 crc kubenswrapper[4865]: I0216 23:06:30.595003 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1286f79c-0f07-4d5f-ae60-f54c1d7d725f","Type":"ContainerDied","Data":"16ddb39c3c624745bf630fa558a29b89888839e59af9dc3e1c169e83f4ffe78c"} Feb 16 23:06:30 crc kubenswrapper[4865]: I0216 23:06:30.596104 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"e94e6b7d-55e6-4b25-9663-6cdc0440681f","Type":"ContainerStarted","Data":"6cbc559811ab6b9892aa12da4609429b4d8ce9650c0980b87c514a5c8ddef646"} Feb 16 23:06:30 crc kubenswrapper[4865]: I0216 23:06:30.603553 4865 generic.go:334] "Generic (PLEG): container finished" podID="0fdde880-3232-48b3-90f8-f47467f5e07c" 
containerID="5cb796e0c7ec3b62800e7ab4ae7f8ea69c90cb39ec9f42e7efa30d57876754f7" exitCode=0 Feb 16 23:06:30 crc kubenswrapper[4865]: I0216 23:06:30.603574 4865 generic.go:334] "Generic (PLEG): container finished" podID="0fdde880-3232-48b3-90f8-f47467f5e07c" containerID="65d5adddc217eed7c3967bdcbccae420c415043d720f6227baba916dd3835321" exitCode=143 Feb 16 23:06:30 crc kubenswrapper[4865]: I0216 23:06:30.603604 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0fdde880-3232-48b3-90f8-f47467f5e07c","Type":"ContainerDied","Data":"5cb796e0c7ec3b62800e7ab4ae7f8ea69c90cb39ec9f42e7efa30d57876754f7"} Feb 16 23:06:30 crc kubenswrapper[4865]: I0216 23:06:30.603621 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0fdde880-3232-48b3-90f8-f47467f5e07c","Type":"ContainerDied","Data":"65d5adddc217eed7c3967bdcbccae420c415043d720f6227baba916dd3835321"} Feb 16 23:06:30 crc kubenswrapper[4865]: I0216 23:06:30.603631 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0fdde880-3232-48b3-90f8-f47467f5e07c","Type":"ContainerDied","Data":"19f9eec0cda169804d99c240c877a4c54204b22490f8f7437ddd64934d7399aa"} Feb 16 23:06:30 crc kubenswrapper[4865]: I0216 23:06:30.603649 4865 scope.go:117] "RemoveContainer" containerID="5cb796e0c7ec3b62800e7ab4ae7f8ea69c90cb39ec9f42e7efa30d57876754f7" Feb 16 23:06:30 crc kubenswrapper[4865]: I0216 23:06:30.603744 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 23:06:30 crc kubenswrapper[4865]: I0216 23:06:30.612654 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="4d1b0d02-1292-4b66-a0ae-28273fcf65c8" containerName="nova-scheduler-scheduler" containerID="cri-o://aa798551aa8ca0bb8aaab6ab3dea207fee64a2410f8bd0fe5f7c96b562ad448f" gracePeriod=30 Feb 16 23:06:30 crc kubenswrapper[4865]: I0216 23:06:30.628348 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lg5jq\" (UniqueName: \"kubernetes.io/projected/0fdde880-3232-48b3-90f8-f47467f5e07c-kube-api-access-lg5jq\") on node \"crc\" DevicePath \"\"" Feb 16 23:06:30 crc kubenswrapper[4865]: I0216 23:06:30.628381 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fdde880-3232-48b3-90f8-f47467f5e07c-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 23:06:30 crc kubenswrapper[4865]: I0216 23:06:30.628391 4865 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0fdde880-3232-48b3-90f8-f47467f5e07c-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 23:06:30 crc kubenswrapper[4865]: I0216 23:06:30.628400 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fdde880-3232-48b3-90f8-f47467f5e07c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 23:06:30 crc kubenswrapper[4865]: I0216 23:06:30.655202 4865 scope.go:117] "RemoveContainer" containerID="65d5adddc217eed7c3967bdcbccae420c415043d720f6227baba916dd3835321" Feb 16 23:06:30 crc kubenswrapper[4865]: I0216 23:06:30.680621 4865 scope.go:117] "RemoveContainer" containerID="5cb796e0c7ec3b62800e7ab4ae7f8ea69c90cb39ec9f42e7efa30d57876754f7" Feb 16 23:06:30 crc kubenswrapper[4865]: E0216 23:06:30.681099 4865 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"5cb796e0c7ec3b62800e7ab4ae7f8ea69c90cb39ec9f42e7efa30d57876754f7\": container with ID starting with 5cb796e0c7ec3b62800e7ab4ae7f8ea69c90cb39ec9f42e7efa30d57876754f7 not found: ID does not exist" containerID="5cb796e0c7ec3b62800e7ab4ae7f8ea69c90cb39ec9f42e7efa30d57876754f7" Feb 16 23:06:30 crc kubenswrapper[4865]: I0216 23:06:30.681126 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5cb796e0c7ec3b62800e7ab4ae7f8ea69c90cb39ec9f42e7efa30d57876754f7"} err="failed to get container status \"5cb796e0c7ec3b62800e7ab4ae7f8ea69c90cb39ec9f42e7efa30d57876754f7\": rpc error: code = NotFound desc = could not find container \"5cb796e0c7ec3b62800e7ab4ae7f8ea69c90cb39ec9f42e7efa30d57876754f7\": container with ID starting with 5cb796e0c7ec3b62800e7ab4ae7f8ea69c90cb39ec9f42e7efa30d57876754f7 not found: ID does not exist" Feb 16 23:06:30 crc kubenswrapper[4865]: I0216 23:06:30.681146 4865 scope.go:117] "RemoveContainer" containerID="65d5adddc217eed7c3967bdcbccae420c415043d720f6227baba916dd3835321" Feb 16 23:06:30 crc kubenswrapper[4865]: E0216 23:06:30.681480 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65d5adddc217eed7c3967bdcbccae420c415043d720f6227baba916dd3835321\": container with ID starting with 65d5adddc217eed7c3967bdcbccae420c415043d720f6227baba916dd3835321 not found: ID does not exist" containerID="65d5adddc217eed7c3967bdcbccae420c415043d720f6227baba916dd3835321" Feb 16 23:06:30 crc kubenswrapper[4865]: I0216 23:06:30.681502 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65d5adddc217eed7c3967bdcbccae420c415043d720f6227baba916dd3835321"} err="failed to get container status \"65d5adddc217eed7c3967bdcbccae420c415043d720f6227baba916dd3835321\": rpc error: code = NotFound desc = could not find container 
\"65d5adddc217eed7c3967bdcbccae420c415043d720f6227baba916dd3835321\": container with ID starting with 65d5adddc217eed7c3967bdcbccae420c415043d720f6227baba916dd3835321 not found: ID does not exist" Feb 16 23:06:30 crc kubenswrapper[4865]: I0216 23:06:30.681515 4865 scope.go:117] "RemoveContainer" containerID="5cb796e0c7ec3b62800e7ab4ae7f8ea69c90cb39ec9f42e7efa30d57876754f7" Feb 16 23:06:30 crc kubenswrapper[4865]: I0216 23:06:30.682406 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5cb796e0c7ec3b62800e7ab4ae7f8ea69c90cb39ec9f42e7efa30d57876754f7"} err="failed to get container status \"5cb796e0c7ec3b62800e7ab4ae7f8ea69c90cb39ec9f42e7efa30d57876754f7\": rpc error: code = NotFound desc = could not find container \"5cb796e0c7ec3b62800e7ab4ae7f8ea69c90cb39ec9f42e7efa30d57876754f7\": container with ID starting with 5cb796e0c7ec3b62800e7ab4ae7f8ea69c90cb39ec9f42e7efa30d57876754f7 not found: ID does not exist" Feb 16 23:06:30 crc kubenswrapper[4865]: I0216 23:06:30.682426 4865 scope.go:117] "RemoveContainer" containerID="65d5adddc217eed7c3967bdcbccae420c415043d720f6227baba916dd3835321" Feb 16 23:06:30 crc kubenswrapper[4865]: I0216 23:06:30.683106 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65d5adddc217eed7c3967bdcbccae420c415043d720f6227baba916dd3835321"} err="failed to get container status \"65d5adddc217eed7c3967bdcbccae420c415043d720f6227baba916dd3835321\": rpc error: code = NotFound desc = could not find container \"65d5adddc217eed7c3967bdcbccae420c415043d720f6227baba916dd3835321\": container with ID starting with 65d5adddc217eed7c3967bdcbccae420c415043d720f6227baba916dd3835321 not found: ID does not exist" Feb 16 23:06:30 crc kubenswrapper[4865]: I0216 23:06:30.688573 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 23:06:30 crc kubenswrapper[4865]: I0216 23:06:30.706436 4865 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/nova-metadata-0"] Feb 16 23:06:30 crc kubenswrapper[4865]: I0216 23:06:30.736246 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 16 23:06:30 crc kubenswrapper[4865]: E0216 23:06:30.748834 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fdde880-3232-48b3-90f8-f47467f5e07c" containerName="nova-metadata-log" Feb 16 23:06:30 crc kubenswrapper[4865]: I0216 23:06:30.748877 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fdde880-3232-48b3-90f8-f47467f5e07c" containerName="nova-metadata-log" Feb 16 23:06:30 crc kubenswrapper[4865]: E0216 23:06:30.748930 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fdde880-3232-48b3-90f8-f47467f5e07c" containerName="nova-metadata-metadata" Feb 16 23:06:30 crc kubenswrapper[4865]: I0216 23:06:30.748939 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fdde880-3232-48b3-90f8-f47467f5e07c" containerName="nova-metadata-metadata" Feb 16 23:06:30 crc kubenswrapper[4865]: I0216 23:06:30.750685 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fdde880-3232-48b3-90f8-f47467f5e07c" containerName="nova-metadata-log" Feb 16 23:06:30 crc kubenswrapper[4865]: I0216 23:06:30.750738 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fdde880-3232-48b3-90f8-f47467f5e07c" containerName="nova-metadata-metadata" Feb 16 23:06:30 crc kubenswrapper[4865]: I0216 23:06:30.752245 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 23:06:30 crc kubenswrapper[4865]: I0216 23:06:30.759632 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 16 23:06:30 crc kubenswrapper[4865]: I0216 23:06:30.759887 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 16 23:06:30 crc kubenswrapper[4865]: I0216 23:06:30.771171 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 23:06:30 crc kubenswrapper[4865]: I0216 23:06:30.835866 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/82c0c342-0360-464e-8f9c-f1cfd619ea76-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"82c0c342-0360-464e-8f9c-f1cfd619ea76\") " pod="openstack/nova-metadata-0" Feb 16 23:06:30 crc kubenswrapper[4865]: I0216 23:06:30.835970 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82c0c342-0360-464e-8f9c-f1cfd619ea76-config-data\") pod \"nova-metadata-0\" (UID: \"82c0c342-0360-464e-8f9c-f1cfd619ea76\") " pod="openstack/nova-metadata-0" Feb 16 23:06:30 crc kubenswrapper[4865]: I0216 23:06:30.836003 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnw2g\" (UniqueName: \"kubernetes.io/projected/82c0c342-0360-464e-8f9c-f1cfd619ea76-kube-api-access-nnw2g\") pod \"nova-metadata-0\" (UID: \"82c0c342-0360-464e-8f9c-f1cfd619ea76\") " pod="openstack/nova-metadata-0" Feb 16 23:06:30 crc kubenswrapper[4865]: I0216 23:06:30.836050 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82c0c342-0360-464e-8f9c-f1cfd619ea76-logs\") pod \"nova-metadata-0\" 
(UID: \"82c0c342-0360-464e-8f9c-f1cfd619ea76\") " pod="openstack/nova-metadata-0" Feb 16 23:06:30 crc kubenswrapper[4865]: I0216 23:06:30.836084 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82c0c342-0360-464e-8f9c-f1cfd619ea76-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"82c0c342-0360-464e-8f9c-f1cfd619ea76\") " pod="openstack/nova-metadata-0" Feb 16 23:06:30 crc kubenswrapper[4865]: I0216 23:06:30.938107 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/82c0c342-0360-464e-8f9c-f1cfd619ea76-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"82c0c342-0360-464e-8f9c-f1cfd619ea76\") " pod="openstack/nova-metadata-0" Feb 16 23:06:30 crc kubenswrapper[4865]: I0216 23:06:30.938214 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82c0c342-0360-464e-8f9c-f1cfd619ea76-config-data\") pod \"nova-metadata-0\" (UID: \"82c0c342-0360-464e-8f9c-f1cfd619ea76\") " pod="openstack/nova-metadata-0" Feb 16 23:06:30 crc kubenswrapper[4865]: I0216 23:06:30.938248 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnw2g\" (UniqueName: \"kubernetes.io/projected/82c0c342-0360-464e-8f9c-f1cfd619ea76-kube-api-access-nnw2g\") pod \"nova-metadata-0\" (UID: \"82c0c342-0360-464e-8f9c-f1cfd619ea76\") " pod="openstack/nova-metadata-0" Feb 16 23:06:30 crc kubenswrapper[4865]: I0216 23:06:30.938305 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82c0c342-0360-464e-8f9c-f1cfd619ea76-logs\") pod \"nova-metadata-0\" (UID: \"82c0c342-0360-464e-8f9c-f1cfd619ea76\") " pod="openstack/nova-metadata-0" Feb 16 23:06:30 crc kubenswrapper[4865]: I0216 23:06:30.938339 4865 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82c0c342-0360-464e-8f9c-f1cfd619ea76-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"82c0c342-0360-464e-8f9c-f1cfd619ea76\") " pod="openstack/nova-metadata-0" Feb 16 23:06:30 crc kubenswrapper[4865]: I0216 23:06:30.939073 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82c0c342-0360-464e-8f9c-f1cfd619ea76-logs\") pod \"nova-metadata-0\" (UID: \"82c0c342-0360-464e-8f9c-f1cfd619ea76\") " pod="openstack/nova-metadata-0" Feb 16 23:06:30 crc kubenswrapper[4865]: I0216 23:06:30.943627 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82c0c342-0360-464e-8f9c-f1cfd619ea76-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"82c0c342-0360-464e-8f9c-f1cfd619ea76\") " pod="openstack/nova-metadata-0" Feb 16 23:06:30 crc kubenswrapper[4865]: I0216 23:06:30.943696 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82c0c342-0360-464e-8f9c-f1cfd619ea76-config-data\") pod \"nova-metadata-0\" (UID: \"82c0c342-0360-464e-8f9c-f1cfd619ea76\") " pod="openstack/nova-metadata-0" Feb 16 23:06:30 crc kubenswrapper[4865]: I0216 23:06:30.943757 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/82c0c342-0360-464e-8f9c-f1cfd619ea76-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"82c0c342-0360-464e-8f9c-f1cfd619ea76\") " pod="openstack/nova-metadata-0" Feb 16 23:06:30 crc kubenswrapper[4865]: I0216 23:06:30.955115 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnw2g\" (UniqueName: \"kubernetes.io/projected/82c0c342-0360-464e-8f9c-f1cfd619ea76-kube-api-access-nnw2g\") pod \"nova-metadata-0\" 
(UID: \"82c0c342-0360-464e-8f9c-f1cfd619ea76\") " pod="openstack/nova-metadata-0" Feb 16 23:06:31 crc kubenswrapper[4865]: I0216 23:06:31.092432 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 23:06:31 crc kubenswrapper[4865]: I0216 23:06:31.592947 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 23:06:31 crc kubenswrapper[4865]: W0216 23:06:31.602642 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod82c0c342_0360_464e_8f9c_f1cfd619ea76.slice/crio-d91fea56cc892dae4b3f94777eccb09a9b4d8e07c9ceb394064b0edfaf5db93f WatchSource:0}: Error finding container d91fea56cc892dae4b3f94777eccb09a9b4d8e07c9ceb394064b0edfaf5db93f: Status 404 returned error can't find the container with id d91fea56cc892dae4b3f94777eccb09a9b4d8e07c9ceb394064b0edfaf5db93f Feb 16 23:06:31 crc kubenswrapper[4865]: I0216 23:06:31.621340 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"82c0c342-0360-464e-8f9c-f1cfd619ea76","Type":"ContainerStarted","Data":"d91fea56cc892dae4b3f94777eccb09a9b4d8e07c9ceb394064b0edfaf5db93f"} Feb 16 23:06:31 crc kubenswrapper[4865]: I0216 23:06:31.623072 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"e94e6b7d-55e6-4b25-9663-6cdc0440681f","Type":"ContainerStarted","Data":"08009d78e9cc99809f2cfaa34d87bec2b3d8da927ba67312867531a17cc40054"} Feb 16 23:06:31 crc kubenswrapper[4865]: I0216 23:06:31.624231 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 16 23:06:31 crc kubenswrapper[4865]: I0216 23:06:31.649780 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.6497617890000003 podStartE2EDuration="2.649761789s" 
podCreationTimestamp="2026-02-16 23:06:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 23:06:31.638848008 +0000 UTC m=+1231.962554969" watchObservedRunningTime="2026-02-16 23:06:31.649761789 +0000 UTC m=+1231.973468760" Feb 16 23:06:32 crc kubenswrapper[4865]: I0216 23:06:32.437191 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fdde880-3232-48b3-90f8-f47467f5e07c" path="/var/lib/kubelet/pods/0fdde880-3232-48b3-90f8-f47467f5e07c/volumes" Feb 16 23:06:32 crc kubenswrapper[4865]: I0216 23:06:32.637135 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"82c0c342-0360-464e-8f9c-f1cfd619ea76","Type":"ContainerStarted","Data":"ba774aea146b6cbe19002243011dfb20af41640e9620bb35b6dd76fc0e9352d4"} Feb 16 23:06:32 crc kubenswrapper[4865]: I0216 23:06:32.637198 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"82c0c342-0360-464e-8f9c-f1cfd619ea76","Type":"ContainerStarted","Data":"fc10a95ad9a28eb233723e7d5807826fa44d0c2efce548dcecc71ff870e5b0ca"} Feb 16 23:06:32 crc kubenswrapper[4865]: I0216 23:06:32.667261 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.6672372749999997 podStartE2EDuration="2.667237275s" podCreationTimestamp="2026-02-16 23:06:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 23:06:32.657470467 +0000 UTC m=+1232.981177448" watchObservedRunningTime="2026-02-16 23:06:32.667237275 +0000 UTC m=+1232.990944246" Feb 16 23:06:32 crc kubenswrapper[4865]: E0216 23:06:32.878028 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code 
-1" containerID="aa798551aa8ca0bb8aaab6ab3dea207fee64a2410f8bd0fe5f7c96b562ad448f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 16 23:06:32 crc kubenswrapper[4865]: E0216 23:06:32.879989 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="aa798551aa8ca0bb8aaab6ab3dea207fee64a2410f8bd0fe5f7c96b562ad448f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 16 23:06:32 crc kubenswrapper[4865]: E0216 23:06:32.882311 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="aa798551aa8ca0bb8aaab6ab3dea207fee64a2410f8bd0fe5f7c96b562ad448f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 16 23:06:32 crc kubenswrapper[4865]: E0216 23:06:32.882388 4865 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="4d1b0d02-1292-4b66-a0ae-28273fcf65c8" containerName="nova-scheduler-scheduler" Feb 16 23:06:33 crc kubenswrapper[4865]: I0216 23:06:33.025303 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 23:06:33 crc kubenswrapper[4865]: I0216 23:06:33.025644 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="abde8904-7323-4a9a-bad8-bc4993889ea7" containerName="kube-state-metrics" containerID="cri-o://88657c880965259a811ad50044725a70eca406824fed27b5e6fcdfbbe1ccaf69" gracePeriod=30 Feb 16 23:06:33 crc kubenswrapper[4865]: I0216 23:06:33.658892 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 16 23:06:33 crc kubenswrapper[4865]: I0216 23:06:33.689781 4865 generic.go:334] "Generic (PLEG): container finished" podID="abde8904-7323-4a9a-bad8-bc4993889ea7" containerID="88657c880965259a811ad50044725a70eca406824fed27b5e6fcdfbbe1ccaf69" exitCode=2 Feb 16 23:06:33 crc kubenswrapper[4865]: I0216 23:06:33.689905 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"abde8904-7323-4a9a-bad8-bc4993889ea7","Type":"ContainerDied","Data":"88657c880965259a811ad50044725a70eca406824fed27b5e6fcdfbbe1ccaf69"} Feb 16 23:06:33 crc kubenswrapper[4865]: I0216 23:06:33.689933 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"abde8904-7323-4a9a-bad8-bc4993889ea7","Type":"ContainerDied","Data":"58d5c585402a0530784b2979e9804e849e88768f50a6cea174293592b84b466a"} Feb 16 23:06:33 crc kubenswrapper[4865]: I0216 23:06:33.689967 4865 scope.go:117] "RemoveContainer" containerID="88657c880965259a811ad50044725a70eca406824fed27b5e6fcdfbbe1ccaf69" Feb 16 23:06:33 crc kubenswrapper[4865]: I0216 23:06:33.690142 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 16 23:06:33 crc kubenswrapper[4865]: I0216 23:06:33.700366 4865 generic.go:334] "Generic (PLEG): container finished" podID="4d1b0d02-1292-4b66-a0ae-28273fcf65c8" containerID="aa798551aa8ca0bb8aaab6ab3dea207fee64a2410f8bd0fe5f7c96b562ad448f" exitCode=0 Feb 16 23:06:33 crc kubenswrapper[4865]: I0216 23:06:33.701131 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4d1b0d02-1292-4b66-a0ae-28273fcf65c8","Type":"ContainerDied","Data":"aa798551aa8ca0bb8aaab6ab3dea207fee64a2410f8bd0fe5f7c96b562ad448f"} Feb 16 23:06:33 crc kubenswrapper[4865]: I0216 23:06:33.705368 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzslq\" (UniqueName: \"kubernetes.io/projected/abde8904-7323-4a9a-bad8-bc4993889ea7-kube-api-access-dzslq\") pod \"abde8904-7323-4a9a-bad8-bc4993889ea7\" (UID: \"abde8904-7323-4a9a-bad8-bc4993889ea7\") " Feb 16 23:06:33 crc kubenswrapper[4865]: I0216 23:06:33.723221 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abde8904-7323-4a9a-bad8-bc4993889ea7-kube-api-access-dzslq" (OuterVolumeSpecName: "kube-api-access-dzslq") pod "abde8904-7323-4a9a-bad8-bc4993889ea7" (UID: "abde8904-7323-4a9a-bad8-bc4993889ea7"). InnerVolumeSpecName "kube-api-access-dzslq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:06:33 crc kubenswrapper[4865]: I0216 23:06:33.783756 4865 scope.go:117] "RemoveContainer" containerID="88657c880965259a811ad50044725a70eca406824fed27b5e6fcdfbbe1ccaf69" Feb 16 23:06:33 crc kubenswrapper[4865]: E0216 23:06:33.785978 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88657c880965259a811ad50044725a70eca406824fed27b5e6fcdfbbe1ccaf69\": container with ID starting with 88657c880965259a811ad50044725a70eca406824fed27b5e6fcdfbbe1ccaf69 not found: ID does not exist" containerID="88657c880965259a811ad50044725a70eca406824fed27b5e6fcdfbbe1ccaf69" Feb 16 23:06:33 crc kubenswrapper[4865]: I0216 23:06:33.786047 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88657c880965259a811ad50044725a70eca406824fed27b5e6fcdfbbe1ccaf69"} err="failed to get container status \"88657c880965259a811ad50044725a70eca406824fed27b5e6fcdfbbe1ccaf69\": rpc error: code = NotFound desc = could not find container \"88657c880965259a811ad50044725a70eca406824fed27b5e6fcdfbbe1ccaf69\": container with ID starting with 88657c880965259a811ad50044725a70eca406824fed27b5e6fcdfbbe1ccaf69 not found: ID does not exist" Feb 16 23:06:33 crc kubenswrapper[4865]: I0216 23:06:33.807643 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzslq\" (UniqueName: \"kubernetes.io/projected/abde8904-7323-4a9a-bad8-bc4993889ea7-kube-api-access-dzslq\") on node \"crc\" DevicePath \"\"" Feb 16 23:06:33 crc kubenswrapper[4865]: I0216 23:06:33.996643 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 23:06:34 crc kubenswrapper[4865]: I0216 23:06:34.010454 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d1b0d02-1292-4b66-a0ae-28273fcf65c8-config-data\") pod \"4d1b0d02-1292-4b66-a0ae-28273fcf65c8\" (UID: \"4d1b0d02-1292-4b66-a0ae-28273fcf65c8\") " Feb 16 23:06:34 crc kubenswrapper[4865]: I0216 23:06:34.010653 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d1b0d02-1292-4b66-a0ae-28273fcf65c8-combined-ca-bundle\") pod \"4d1b0d02-1292-4b66-a0ae-28273fcf65c8\" (UID: \"4d1b0d02-1292-4b66-a0ae-28273fcf65c8\") " Feb 16 23:06:34 crc kubenswrapper[4865]: I0216 23:06:34.010701 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrcxc\" (UniqueName: \"kubernetes.io/projected/4d1b0d02-1292-4b66-a0ae-28273fcf65c8-kube-api-access-hrcxc\") pod \"4d1b0d02-1292-4b66-a0ae-28273fcf65c8\" (UID: \"4d1b0d02-1292-4b66-a0ae-28273fcf65c8\") " Feb 16 23:06:34 crc kubenswrapper[4865]: I0216 23:06:34.015217 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d1b0d02-1292-4b66-a0ae-28273fcf65c8-kube-api-access-hrcxc" (OuterVolumeSpecName: "kube-api-access-hrcxc") pod "4d1b0d02-1292-4b66-a0ae-28273fcf65c8" (UID: "4d1b0d02-1292-4b66-a0ae-28273fcf65c8"). InnerVolumeSpecName "kube-api-access-hrcxc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:06:34 crc kubenswrapper[4865]: I0216 23:06:34.056129 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d1b0d02-1292-4b66-a0ae-28273fcf65c8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4d1b0d02-1292-4b66-a0ae-28273fcf65c8" (UID: "4d1b0d02-1292-4b66-a0ae-28273fcf65c8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 23:06:34 crc kubenswrapper[4865]: I0216 23:06:34.056579 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 16 23:06:34 crc kubenswrapper[4865]: I0216 23:06:34.070775 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d1b0d02-1292-4b66-a0ae-28273fcf65c8-config-data" (OuterVolumeSpecName: "config-data") pod "4d1b0d02-1292-4b66-a0ae-28273fcf65c8" (UID: "4d1b0d02-1292-4b66-a0ae-28273fcf65c8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 23:06:34 crc kubenswrapper[4865]: I0216 23:06:34.079351 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 16 23:06:34 crc kubenswrapper[4865]: I0216 23:06:34.090355 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 16 23:06:34 crc kubenswrapper[4865]: E0216 23:06:34.090807 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d1b0d02-1292-4b66-a0ae-28273fcf65c8" containerName="nova-scheduler-scheduler"
Feb 16 23:06:34 crc kubenswrapper[4865]: I0216 23:06:34.090828 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d1b0d02-1292-4b66-a0ae-28273fcf65c8" containerName="nova-scheduler-scheduler"
Feb 16 23:06:34 crc kubenswrapper[4865]: E0216 23:06:34.090858 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abde8904-7323-4a9a-bad8-bc4993889ea7" containerName="kube-state-metrics"
Feb 16 23:06:34 crc kubenswrapper[4865]: I0216 23:06:34.090864 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="abde8904-7323-4a9a-bad8-bc4993889ea7" containerName="kube-state-metrics"
Feb 16 23:06:34 crc kubenswrapper[4865]: I0216 23:06:34.091042 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d1b0d02-1292-4b66-a0ae-28273fcf65c8" containerName="nova-scheduler-scheduler"
Feb 16 23:06:34 crc kubenswrapper[4865]: I0216 23:06:34.091056 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="abde8904-7323-4a9a-bad8-bc4993889ea7" containerName="kube-state-metrics"
Feb 16 23:06:34 crc kubenswrapper[4865]: I0216 23:06:34.091727 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 16 23:06:34 crc kubenswrapper[4865]: I0216 23:06:34.096472 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config"
Feb 16 23:06:34 crc kubenswrapper[4865]: I0216 23:06:34.096594 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc"
Feb 16 23:06:34 crc kubenswrapper[4865]: I0216 23:06:34.107869 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 16 23:06:34 crc kubenswrapper[4865]: I0216 23:06:34.114708 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/381c66d6-4d83-453d-bb97-35888127917f-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"381c66d6-4d83-453d-bb97-35888127917f\") " pod="openstack/kube-state-metrics-0"
Feb 16 23:06:34 crc kubenswrapper[4865]: I0216 23:06:34.114785 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tx9lb\" (UniqueName: \"kubernetes.io/projected/381c66d6-4d83-453d-bb97-35888127917f-kube-api-access-tx9lb\") pod \"kube-state-metrics-0\" (UID: \"381c66d6-4d83-453d-bb97-35888127917f\") " pod="openstack/kube-state-metrics-0"
Feb 16 23:06:34 crc kubenswrapper[4865]: I0216 23:06:34.114831 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/381c66d6-4d83-453d-bb97-35888127917f-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"381c66d6-4d83-453d-bb97-35888127917f\") " pod="openstack/kube-state-metrics-0"
Feb 16 23:06:34 crc kubenswrapper[4865]: I0216 23:06:34.114874 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/381c66d6-4d83-453d-bb97-35888127917f-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"381c66d6-4d83-453d-bb97-35888127917f\") " pod="openstack/kube-state-metrics-0"
Feb 16 23:06:34 crc kubenswrapper[4865]: I0216 23:06:34.114919 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d1b0d02-1292-4b66-a0ae-28273fcf65c8-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 23:06:34 crc kubenswrapper[4865]: I0216 23:06:34.114932 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d1b0d02-1292-4b66-a0ae-28273fcf65c8-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 23:06:34 crc kubenswrapper[4865]: I0216 23:06:34.114942 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrcxc\" (UniqueName: \"kubernetes.io/projected/4d1b0d02-1292-4b66-a0ae-28273fcf65c8-kube-api-access-hrcxc\") on node \"crc\" DevicePath \"\""
Feb 16 23:06:34 crc kubenswrapper[4865]: I0216 23:06:34.217548 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/381c66d6-4d83-453d-bb97-35888127917f-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"381c66d6-4d83-453d-bb97-35888127917f\") " pod="openstack/kube-state-metrics-0"
Feb 16 23:06:34 crc kubenswrapper[4865]: I0216 23:06:34.217656 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tx9lb\" (UniqueName: \"kubernetes.io/projected/381c66d6-4d83-453d-bb97-35888127917f-kube-api-access-tx9lb\") pod \"kube-state-metrics-0\" (UID: \"381c66d6-4d83-453d-bb97-35888127917f\") " pod="openstack/kube-state-metrics-0"
Feb 16 23:06:34 crc kubenswrapper[4865]: I0216 23:06:34.217724 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/381c66d6-4d83-453d-bb97-35888127917f-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"381c66d6-4d83-453d-bb97-35888127917f\") " pod="openstack/kube-state-metrics-0"
Feb 16 23:06:34 crc kubenswrapper[4865]: I0216 23:06:34.217789 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/381c66d6-4d83-453d-bb97-35888127917f-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"381c66d6-4d83-453d-bb97-35888127917f\") " pod="openstack/kube-state-metrics-0"
Feb 16 23:06:34 crc kubenswrapper[4865]: I0216 23:06:34.222230 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/381c66d6-4d83-453d-bb97-35888127917f-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"381c66d6-4d83-453d-bb97-35888127917f\") " pod="openstack/kube-state-metrics-0"
Feb 16 23:06:34 crc kubenswrapper[4865]: I0216 23:06:34.222422 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/381c66d6-4d83-453d-bb97-35888127917f-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"381c66d6-4d83-453d-bb97-35888127917f\") " pod="openstack/kube-state-metrics-0"
Feb 16 23:06:34 crc kubenswrapper[4865]: I0216 23:06:34.222912 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/381c66d6-4d83-453d-bb97-35888127917f-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"381c66d6-4d83-453d-bb97-35888127917f\") " pod="openstack/kube-state-metrics-0"
Feb 16 23:06:34 crc kubenswrapper[4865]: I0216 23:06:34.236205 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tx9lb\" (UniqueName: \"kubernetes.io/projected/381c66d6-4d83-453d-bb97-35888127917f-kube-api-access-tx9lb\") pod \"kube-state-metrics-0\" (UID: \"381c66d6-4d83-453d-bb97-35888127917f\") " pod="openstack/kube-state-metrics-0"
Feb 16 23:06:34 crc kubenswrapper[4865]: I0216 23:06:34.414914 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 16 23:06:34 crc kubenswrapper[4865]: I0216 23:06:34.428159 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abde8904-7323-4a9a-bad8-bc4993889ea7" path="/var/lib/kubelet/pods/abde8904-7323-4a9a-bad8-bc4993889ea7/volumes"
Feb 16 23:06:34 crc kubenswrapper[4865]: I0216 23:06:34.627784 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 16 23:06:34 crc kubenswrapper[4865]: I0216 23:06:34.716556 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 16 23:06:34 crc kubenswrapper[4865]: I0216 23:06:34.716563 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4d1b0d02-1292-4b66-a0ae-28273fcf65c8","Type":"ContainerDied","Data":"b7773c3c14b7344dd21ba3bd6cecadafe5485f58b46aa7fe0cb575c44af14313"}
Feb 16 23:06:34 crc kubenswrapper[4865]: I0216 23:06:34.716691 4865 scope.go:117] "RemoveContainer" containerID="aa798551aa8ca0bb8aaab6ab3dea207fee64a2410f8bd0fe5f7c96b562ad448f"
Feb 16 23:06:34 crc kubenswrapper[4865]: I0216 23:06:34.721829 4865 generic.go:334] "Generic (PLEG): container finished" podID="1286f79c-0f07-4d5f-ae60-f54c1d7d725f" containerID="ca647d0608d240227f52c9a4742b7f9d926ebde3a855c8e6e5e26ab61a09ef3e" exitCode=0
Feb 16 23:06:34 crc kubenswrapper[4865]: I0216 23:06:34.721878 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1286f79c-0f07-4d5f-ae60-f54c1d7d725f","Type":"ContainerDied","Data":"ca647d0608d240227f52c9a4742b7f9d926ebde3a855c8e6e5e26ab61a09ef3e"}
Feb 16 23:06:34 crc kubenswrapper[4865]: I0216 23:06:34.721903 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1286f79c-0f07-4d5f-ae60-f54c1d7d725f","Type":"ContainerDied","Data":"c28050e55929fa2ef807282db72349ec3ccfdfb0a2c5504ff3fbca8311012914"}
Feb 16 23:06:34 crc kubenswrapper[4865]: I0216 23:06:34.721972 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 16 23:06:34 crc kubenswrapper[4865]: I0216 23:06:34.724789 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1286f79c-0f07-4d5f-ae60-f54c1d7d725f-combined-ca-bundle\") pod \"1286f79c-0f07-4d5f-ae60-f54c1d7d725f\" (UID: \"1286f79c-0f07-4d5f-ae60-f54c1d7d725f\") "
Feb 16 23:06:34 crc kubenswrapper[4865]: I0216 23:06:34.724871 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1286f79c-0f07-4d5f-ae60-f54c1d7d725f-config-data\") pod \"1286f79c-0f07-4d5f-ae60-f54c1d7d725f\" (UID: \"1286f79c-0f07-4d5f-ae60-f54c1d7d725f\") "
Feb 16 23:06:34 crc kubenswrapper[4865]: I0216 23:06:34.725009 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lpbg\" (UniqueName: \"kubernetes.io/projected/1286f79c-0f07-4d5f-ae60-f54c1d7d725f-kube-api-access-5lpbg\") pod \"1286f79c-0f07-4d5f-ae60-f54c1d7d725f\" (UID: \"1286f79c-0f07-4d5f-ae60-f54c1d7d725f\") "
Feb 16 23:06:34 crc kubenswrapper[4865]: I0216 23:06:34.725030 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1286f79c-0f07-4d5f-ae60-f54c1d7d725f-logs\") pod \"1286f79c-0f07-4d5f-ae60-f54c1d7d725f\" (UID: \"1286f79c-0f07-4d5f-ae60-f54c1d7d725f\") "
Feb 16 23:06:34 crc kubenswrapper[4865]: I0216 23:06:34.725982 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1286f79c-0f07-4d5f-ae60-f54c1d7d725f-logs" (OuterVolumeSpecName: "logs") pod "1286f79c-0f07-4d5f-ae60-f54c1d7d725f" (UID: "1286f79c-0f07-4d5f-ae60-f54c1d7d725f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 23:06:34 crc kubenswrapper[4865]: I0216 23:06:34.732405 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1286f79c-0f07-4d5f-ae60-f54c1d7d725f-kube-api-access-5lpbg" (OuterVolumeSpecName: "kube-api-access-5lpbg") pod "1286f79c-0f07-4d5f-ae60-f54c1d7d725f" (UID: "1286f79c-0f07-4d5f-ae60-f54c1d7d725f"). InnerVolumeSpecName "kube-api-access-5lpbg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 23:06:34 crc kubenswrapper[4865]: I0216 23:06:34.754905 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 16 23:06:34 crc kubenswrapper[4865]: I0216 23:06:34.773451 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1286f79c-0f07-4d5f-ae60-f54c1d7d725f-config-data" (OuterVolumeSpecName: "config-data") pod "1286f79c-0f07-4d5f-ae60-f54c1d7d725f" (UID: "1286f79c-0f07-4d5f-ae60-f54c1d7d725f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 23:06:34 crc kubenswrapper[4865]: I0216 23:06:34.779411 4865 scope.go:117] "RemoveContainer" containerID="ca647d0608d240227f52c9a4742b7f9d926ebde3a855c8e6e5e26ab61a09ef3e"
Feb 16 23:06:34 crc kubenswrapper[4865]: I0216 23:06:34.779552 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 16 23:06:34 crc kubenswrapper[4865]: I0216 23:06:34.793120 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1286f79c-0f07-4d5f-ae60-f54c1d7d725f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1286f79c-0f07-4d5f-ae60-f54c1d7d725f" (UID: "1286f79c-0f07-4d5f-ae60-f54c1d7d725f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 23:06:34 crc kubenswrapper[4865]: I0216 23:06:34.795364 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Feb 16 23:06:34 crc kubenswrapper[4865]: E0216 23:06:34.796130 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1286f79c-0f07-4d5f-ae60-f54c1d7d725f" containerName="nova-api-log"
Feb 16 23:06:34 crc kubenswrapper[4865]: I0216 23:06:34.796181 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="1286f79c-0f07-4d5f-ae60-f54c1d7d725f" containerName="nova-api-log"
Feb 16 23:06:34 crc kubenswrapper[4865]: E0216 23:06:34.796261 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1286f79c-0f07-4d5f-ae60-f54c1d7d725f" containerName="nova-api-api"
Feb 16 23:06:34 crc kubenswrapper[4865]: I0216 23:06:34.796303 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="1286f79c-0f07-4d5f-ae60-f54c1d7d725f" containerName="nova-api-api"
Feb 16 23:06:34 crc kubenswrapper[4865]: I0216 23:06:34.796647 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="1286f79c-0f07-4d5f-ae60-f54c1d7d725f" containerName="nova-api-api"
Feb 16 23:06:34 crc kubenswrapper[4865]: I0216 23:06:34.796672 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="1286f79c-0f07-4d5f-ae60-f54c1d7d725f" containerName="nova-api-log"
Feb 16 23:06:34 crc kubenswrapper[4865]: I0216 23:06:34.798055 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 16 23:06:34 crc kubenswrapper[4865]: I0216 23:06:34.800265 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Feb 16 23:06:34 crc kubenswrapper[4865]: I0216 23:06:34.805470 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 16 23:06:34 crc kubenswrapper[4865]: I0216 23:06:34.827360 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/259784bc-8bd2-46e1-bd86-53a5583adeac-config-data\") pod \"nova-scheduler-0\" (UID: \"259784bc-8bd2-46e1-bd86-53a5583adeac\") " pod="openstack/nova-scheduler-0"
Feb 16 23:06:34 crc kubenswrapper[4865]: I0216 23:06:34.827473 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/259784bc-8bd2-46e1-bd86-53a5583adeac-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"259784bc-8bd2-46e1-bd86-53a5583adeac\") " pod="openstack/nova-scheduler-0"
Feb 16 23:06:34 crc kubenswrapper[4865]: I0216 23:06:34.827498 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vs6pn\" (UniqueName: \"kubernetes.io/projected/259784bc-8bd2-46e1-bd86-53a5583adeac-kube-api-access-vs6pn\") pod \"nova-scheduler-0\" (UID: \"259784bc-8bd2-46e1-bd86-53a5583adeac\") " pod="openstack/nova-scheduler-0"
Feb 16 23:06:34 crc kubenswrapper[4865]: I0216 23:06:34.827608 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1286f79c-0f07-4d5f-ae60-f54c1d7d725f-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 23:06:34 crc kubenswrapper[4865]: I0216 23:06:34.827630 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5lpbg\" (UniqueName: \"kubernetes.io/projected/1286f79c-0f07-4d5f-ae60-f54c1d7d725f-kube-api-access-5lpbg\") on node \"crc\" DevicePath \"\""
Feb 16 23:06:34 crc kubenswrapper[4865]: I0216 23:06:34.827641 4865 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1286f79c-0f07-4d5f-ae60-f54c1d7d725f-logs\") on node \"crc\" DevicePath \"\""
Feb 16 23:06:34 crc kubenswrapper[4865]: I0216 23:06:34.827651 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1286f79c-0f07-4d5f-ae60-f54c1d7d725f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 23:06:34 crc kubenswrapper[4865]: I0216 23:06:34.865728 4865 scope.go:117] "RemoveContainer" containerID="16ddb39c3c624745bf630fa558a29b89888839e59af9dc3e1c169e83f4ffe78c"
Feb 16 23:06:34 crc kubenswrapper[4865]: I0216 23:06:34.887260 4865 scope.go:117] "RemoveContainer" containerID="ca647d0608d240227f52c9a4742b7f9d926ebde3a855c8e6e5e26ab61a09ef3e"
Feb 16 23:06:34 crc kubenswrapper[4865]: E0216 23:06:34.888797 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca647d0608d240227f52c9a4742b7f9d926ebde3a855c8e6e5e26ab61a09ef3e\": container with ID starting with ca647d0608d240227f52c9a4742b7f9d926ebde3a855c8e6e5e26ab61a09ef3e not found: ID does not exist" containerID="ca647d0608d240227f52c9a4742b7f9d926ebde3a855c8e6e5e26ab61a09ef3e"
Feb 16 23:06:34 crc kubenswrapper[4865]: I0216 23:06:34.888849 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca647d0608d240227f52c9a4742b7f9d926ebde3a855c8e6e5e26ab61a09ef3e"} err="failed to get container status \"ca647d0608d240227f52c9a4742b7f9d926ebde3a855c8e6e5e26ab61a09ef3e\": rpc error: code = NotFound desc = could not find container \"ca647d0608d240227f52c9a4742b7f9d926ebde3a855c8e6e5e26ab61a09ef3e\": container with ID starting with ca647d0608d240227f52c9a4742b7f9d926ebde3a855c8e6e5e26ab61a09ef3e not found: ID does not exist"
Feb 16 23:06:34 crc kubenswrapper[4865]: I0216 23:06:34.888871 4865 scope.go:117] "RemoveContainer" containerID="16ddb39c3c624745bf630fa558a29b89888839e59af9dc3e1c169e83f4ffe78c"
Feb 16 23:06:34 crc kubenswrapper[4865]: E0216 23:06:34.889671 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16ddb39c3c624745bf630fa558a29b89888839e59af9dc3e1c169e83f4ffe78c\": container with ID starting with 16ddb39c3c624745bf630fa558a29b89888839e59af9dc3e1c169e83f4ffe78c not found: ID does not exist" containerID="16ddb39c3c624745bf630fa558a29b89888839e59af9dc3e1c169e83f4ffe78c"
Feb 16 23:06:34 crc kubenswrapper[4865]: I0216 23:06:34.889939 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16ddb39c3c624745bf630fa558a29b89888839e59af9dc3e1c169e83f4ffe78c"} err="failed to get container status \"16ddb39c3c624745bf630fa558a29b89888839e59af9dc3e1c169e83f4ffe78c\": rpc error: code = NotFound desc = could not find container \"16ddb39c3c624745bf630fa558a29b89888839e59af9dc3e1c169e83f4ffe78c\": container with ID starting with 16ddb39c3c624745bf630fa558a29b89888839e59af9dc3e1c169e83f4ffe78c not found: ID does not exist"
Feb 16 23:06:34 crc kubenswrapper[4865]: I0216 23:06:34.913537 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 16 23:06:34 crc kubenswrapper[4865]: W0216 23:06:34.915569 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod381c66d6_4d83_453d_bb97_35888127917f.slice/crio-0e23a6d487749d9c663c991c6043e76b219cfbf526551faf34718596045c272a WatchSource:0}: Error finding container 0e23a6d487749d9c663c991c6043e76b219cfbf526551faf34718596045c272a: Status 404 returned error can't find the container with id 0e23a6d487749d9c663c991c6043e76b219cfbf526551faf34718596045c272a
Feb 16 23:06:34 crc kubenswrapper[4865]: I0216 23:06:34.929054 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vs6pn\" (UniqueName: \"kubernetes.io/projected/259784bc-8bd2-46e1-bd86-53a5583adeac-kube-api-access-vs6pn\") pod \"nova-scheduler-0\" (UID: \"259784bc-8bd2-46e1-bd86-53a5583adeac\") " pod="openstack/nova-scheduler-0"
Feb 16 23:06:34 crc kubenswrapper[4865]: I0216 23:06:34.929206 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/259784bc-8bd2-46e1-bd86-53a5583adeac-config-data\") pod \"nova-scheduler-0\" (UID: \"259784bc-8bd2-46e1-bd86-53a5583adeac\") " pod="openstack/nova-scheduler-0"
Feb 16 23:06:34 crc kubenswrapper[4865]: I0216 23:06:34.929337 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/259784bc-8bd2-46e1-bd86-53a5583adeac-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"259784bc-8bd2-46e1-bd86-53a5583adeac\") " pod="openstack/nova-scheduler-0"
Feb 16 23:06:34 crc kubenswrapper[4865]: I0216 23:06:34.933191 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/259784bc-8bd2-46e1-bd86-53a5583adeac-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"259784bc-8bd2-46e1-bd86-53a5583adeac\") " pod="openstack/nova-scheduler-0"
Feb 16 23:06:34 crc kubenswrapper[4865]: I0216 23:06:34.933428 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/259784bc-8bd2-46e1-bd86-53a5583adeac-config-data\") pod \"nova-scheduler-0\" (UID: \"259784bc-8bd2-46e1-bd86-53a5583adeac\") " pod="openstack/nova-scheduler-0"
Feb 16 23:06:34 crc kubenswrapper[4865]: I0216 23:06:34.945731 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vs6pn\" (UniqueName: \"kubernetes.io/projected/259784bc-8bd2-46e1-bd86-53a5583adeac-kube-api-access-vs6pn\") pod \"nova-scheduler-0\" (UID: \"259784bc-8bd2-46e1-bd86-53a5583adeac\") " pod="openstack/nova-scheduler-0"
Feb 16 23:06:35 crc kubenswrapper[4865]: I0216 23:06:35.054703 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0"
Feb 16 23:06:35 crc kubenswrapper[4865]: I0216 23:06:35.063669 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 16 23:06:35 crc kubenswrapper[4865]: I0216 23:06:35.073218 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Feb 16 23:06:35 crc kubenswrapper[4865]: I0216 23:06:35.083799 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 16 23:06:35 crc kubenswrapper[4865]: I0216 23:06:35.106098 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 16 23:06:35 crc kubenswrapper[4865]: I0216 23:06:35.116869 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 16 23:06:35 crc kubenswrapper[4865]: I0216 23:06:35.136328 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af7a71ed-a637-434a-bbb3-10b4f90391eb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"af7a71ed-a637-434a-bbb3-10b4f90391eb\") " pod="openstack/nova-api-0"
Feb 16 23:06:35 crc kubenswrapper[4865]: I0216 23:06:35.136766 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af7a71ed-a637-434a-bbb3-10b4f90391eb-config-data\") pod \"nova-api-0\" (UID: \"af7a71ed-a637-434a-bbb3-10b4f90391eb\") " pod="openstack/nova-api-0"
Feb 16 23:06:35 crc kubenswrapper[4865]: I0216 23:06:35.136901 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af7a71ed-a637-434a-bbb3-10b4f90391eb-logs\") pod \"nova-api-0\" (UID: \"af7a71ed-a637-434a-bbb3-10b4f90391eb\") " pod="openstack/nova-api-0"
Feb 16 23:06:35 crc kubenswrapper[4865]: I0216 23:06:35.137077 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvdcw\" (UniqueName: \"kubernetes.io/projected/af7a71ed-a637-434a-bbb3-10b4f90391eb-kube-api-access-pvdcw\") pod \"nova-api-0\" (UID: \"af7a71ed-a637-434a-bbb3-10b4f90391eb\") " pod="openstack/nova-api-0"
Feb 16 23:06:35 crc kubenswrapper[4865]: I0216 23:06:35.153990 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 16 23:06:35 crc kubenswrapper[4865]: I0216 23:06:35.166669 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 16 23:06:35 crc kubenswrapper[4865]: I0216 23:06:35.192950 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 23:06:35 crc kubenswrapper[4865]: I0216 23:06:35.193206 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="99f27655-74d7-4d94-98d3-c9cceb3bdea0" containerName="ceilometer-central-agent" containerID="cri-o://ace1c431e6301c0d1070928fe2dc00e57dcf6db5df206f3a5e53f7e24ffcf650" gracePeriod=30
Feb 16 23:06:35 crc kubenswrapper[4865]: I0216 23:06:35.193887 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="99f27655-74d7-4d94-98d3-c9cceb3bdea0" containerName="proxy-httpd" containerID="cri-o://09d268779daead4c1ec4486b7cd6a97e34a49eaed7afac86a6e0ae7d0edc5658" gracePeriod=30
Feb 16 23:06:35 crc kubenswrapper[4865]: I0216 23:06:35.194214 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="99f27655-74d7-4d94-98d3-c9cceb3bdea0" containerName="ceilometer-notification-agent" containerID="cri-o://97f262c7992a498c778745f65e224a21244c5f2450082e3622c4011fe2e61870" gracePeriod=30
Feb 16 23:06:35 crc kubenswrapper[4865]: I0216 23:06:35.194491 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="99f27655-74d7-4d94-98d3-c9cceb3bdea0" containerName="sg-core" containerID="cri-o://03fc8fa199c2e9f8e5c8d57e0a3c698580a0e81a044174b479e9efac2ae7315e" gracePeriod=30
Feb 16 23:06:35 crc kubenswrapper[4865]: I0216 23:06:35.239357 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af7a71ed-a637-434a-bbb3-10b4f90391eb-config-data\") pod \"nova-api-0\" (UID: \"af7a71ed-a637-434a-bbb3-10b4f90391eb\") " pod="openstack/nova-api-0"
Feb 16 23:06:35 crc kubenswrapper[4865]: I0216 23:06:35.239409 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af7a71ed-a637-434a-bbb3-10b4f90391eb-logs\") pod \"nova-api-0\" (UID: \"af7a71ed-a637-434a-bbb3-10b4f90391eb\") " pod="openstack/nova-api-0"
Feb 16 23:06:35 crc kubenswrapper[4865]: I0216 23:06:35.239448 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvdcw\" (UniqueName: \"kubernetes.io/projected/af7a71ed-a637-434a-bbb3-10b4f90391eb-kube-api-access-pvdcw\") pod \"nova-api-0\" (UID: \"af7a71ed-a637-434a-bbb3-10b4f90391eb\") " pod="openstack/nova-api-0"
Feb 16 23:06:35 crc kubenswrapper[4865]: I0216 23:06:35.239555 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af7a71ed-a637-434a-bbb3-10b4f90391eb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"af7a71ed-a637-434a-bbb3-10b4f90391eb\") " pod="openstack/nova-api-0"
Feb 16 23:06:35 crc kubenswrapper[4865]: I0216 23:06:35.239848 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af7a71ed-a637-434a-bbb3-10b4f90391eb-logs\") pod \"nova-api-0\" (UID: \"af7a71ed-a637-434a-bbb3-10b4f90391eb\") " pod="openstack/nova-api-0"
Feb 16 23:06:35 crc kubenswrapper[4865]: I0216 23:06:35.243853 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af7a71ed-a637-434a-bbb3-10b4f90391eb-config-data\") pod \"nova-api-0\" (UID: \"af7a71ed-a637-434a-bbb3-10b4f90391eb\") " pod="openstack/nova-api-0"
Feb 16 23:06:35 crc kubenswrapper[4865]: I0216 23:06:35.249124 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af7a71ed-a637-434a-bbb3-10b4f90391eb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"af7a71ed-a637-434a-bbb3-10b4f90391eb\") " pod="openstack/nova-api-0"
Feb 16 23:06:35 crc kubenswrapper[4865]: I0216 23:06:35.255444 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvdcw\" (UniqueName: \"kubernetes.io/projected/af7a71ed-a637-434a-bbb3-10b4f90391eb-kube-api-access-pvdcw\") pod \"nova-api-0\" (UID: \"af7a71ed-a637-434a-bbb3-10b4f90391eb\") " pod="openstack/nova-api-0"
Feb 16 23:06:35 crc kubenswrapper[4865]: I0216 23:06:35.440882 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 16 23:06:35 crc kubenswrapper[4865]: I0216 23:06:35.653079 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 16 23:06:35 crc kubenswrapper[4865]: I0216 23:06:35.758735 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 16 23:06:35 crc kubenswrapper[4865]: I0216 23:06:35.761661 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"259784bc-8bd2-46e1-bd86-53a5583adeac","Type":"ContainerStarted","Data":"1d51b3f9c8baf5619ee28ae5864eb3670318057ded65ca01ee3b481d7b380bdc"}
Feb 16 23:06:35 crc kubenswrapper[4865]: I0216 23:06:35.770338 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"381c66d6-4d83-453d-bb97-35888127917f","Type":"ContainerStarted","Data":"559f70f8e5aa097e1174894134163a5f7e98b1ea039cc3f223e0eee9436befc9"}
Feb 16 23:06:35 crc kubenswrapper[4865]: I0216 23:06:35.770488 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"381c66d6-4d83-453d-bb97-35888127917f","Type":"ContainerStarted","Data":"0e23a6d487749d9c663c991c6043e76b219cfbf526551faf34718596045c272a"}
Feb 16 23:06:35 crc kubenswrapper[4865]: I0216 23:06:35.770580 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Feb 16 23:06:35 crc kubenswrapper[4865]: I0216 23:06:35.782813 4865 generic.go:334] "Generic (PLEG): container finished" podID="99f27655-74d7-4d94-98d3-c9cceb3bdea0" containerID="09d268779daead4c1ec4486b7cd6a97e34a49eaed7afac86a6e0ae7d0edc5658" exitCode=0
Feb 16 23:06:35 crc kubenswrapper[4865]: I0216 23:06:35.782866 4865 generic.go:334] "Generic (PLEG): container finished" podID="99f27655-74d7-4d94-98d3-c9cceb3bdea0" containerID="03fc8fa199c2e9f8e5c8d57e0a3c698580a0e81a044174b479e9efac2ae7315e" exitCode=2
Feb 16 23:06:35 crc kubenswrapper[4865]: I0216 23:06:35.782876 4865 generic.go:334] "Generic (PLEG): container finished" podID="99f27655-74d7-4d94-98d3-c9cceb3bdea0" containerID="ace1c431e6301c0d1070928fe2dc00e57dcf6db5df206f3a5e53f7e24ffcf650" exitCode=0
Feb 16 23:06:35 crc kubenswrapper[4865]: I0216 23:06:35.782874 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"99f27655-74d7-4d94-98d3-c9cceb3bdea0","Type":"ContainerDied","Data":"09d268779daead4c1ec4486b7cd6a97e34a49eaed7afac86a6e0ae7d0edc5658"}
Feb 16 23:06:35 crc kubenswrapper[4865]: I0216 23:06:35.782940 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"99f27655-74d7-4d94-98d3-c9cceb3bdea0","Type":"ContainerDied","Data":"03fc8fa199c2e9f8e5c8d57e0a3c698580a0e81a044174b479e9efac2ae7315e"}
Feb 16 23:06:35 crc kubenswrapper[4865]: I0216 23:06:35.782955 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"99f27655-74d7-4d94-98d3-c9cceb3bdea0","Type":"ContainerDied","Data":"ace1c431e6301c0d1070928fe2dc00e57dcf6db5df206f3a5e53f7e24ffcf650"}
Feb 16 23:06:35 crc kubenswrapper[4865]: I0216 23:06:35.793481 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.412057218 podStartE2EDuration="1.793459681s" podCreationTimestamp="2026-02-16 23:06:34 +0000 UTC" firstStartedPulling="2026-02-16 23:06:34.918153797 +0000 UTC m=+1235.241860758" lastFinishedPulling="2026-02-16 23:06:35.29955626 +0000 UTC m=+1235.623263221" observedRunningTime="2026-02-16 23:06:35.789363805 +0000 UTC m=+1236.113070766" watchObservedRunningTime="2026-02-16 23:06:35.793459681 +0000 UTC m=+1236.117166642"
Feb 16 23:06:36 crc kubenswrapper[4865]: I0216 23:06:36.093166 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 16 23:06:36 crc kubenswrapper[4865]: I0216 23:06:36.093571 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 16 23:06:36 crc kubenswrapper[4865]: I0216 23:06:36.426207 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1286f79c-0f07-4d5f-ae60-f54c1d7d725f" path="/var/lib/kubelet/pods/1286f79c-0f07-4d5f-ae60-f54c1d7d725f/volumes"
Feb 16 23:06:36 crc kubenswrapper[4865]: I0216 23:06:36.427030 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d1b0d02-1292-4b66-a0ae-28273fcf65c8" path="/var/lib/kubelet/pods/4d1b0d02-1292-4b66-a0ae-28273fcf65c8/volumes"
Feb 16 23:06:36 crc kubenswrapper[4865]: I0216 23:06:36.805579 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"259784bc-8bd2-46e1-bd86-53a5583adeac","Type":"ContainerStarted","Data":"048d29ab491ba70fcb2c73b3269d198a067ee48fc4ee658a36a0cd66a7b77c86"}
Feb 16 23:06:36 crc kubenswrapper[4865]: I0216 23:06:36.810785 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"af7a71ed-a637-434a-bbb3-10b4f90391eb","Type":"ContainerStarted","Data":"e7fcdb1b6d568713adef0b4997e106d9d2c3f98552aa246f14c12f53ada98a86"}
Feb 16 23:06:36 crc kubenswrapper[4865]: I0216 23:06:36.810839 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"af7a71ed-a637-434a-bbb3-10b4f90391eb","Type":"ContainerStarted","Data":"a152bb7dbad2265361f987c683059e62b534cc71ae8d9062ec8391a02fb37ff2"}
Feb 16 23:06:36 crc kubenswrapper[4865]: I0216 23:06:36.810859 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"af7a71ed-a637-434a-bbb3-10b4f90391eb","Type":"ContainerStarted","Data":"44423ac93cbbd86198644cfa2e7811db812fb21e78781bf94f2a593afb80fc4f"}
Feb 16 23:06:36 crc kubenswrapper[4865]: I0216 23:06:36.833713 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.8336870039999997 podStartE2EDuration="2.833687004s" podCreationTimestamp="2026-02-16 23:06:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 23:06:36.824900515 +0000 UTC m=+1237.148607486" watchObservedRunningTime="2026-02-16 23:06:36.833687004 +0000 UTC m=+1237.157393965"
Feb 16 23:06:36 crc kubenswrapper[4865]: I0216 23:06:36.858952 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=1.858929151 podStartE2EDuration="1.858929151s" podCreationTimestamp="2026-02-16 23:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 23:06:36.852014054 +0000 UTC m=+1237.175721025" watchObservedRunningTime="2026-02-16 23:06:36.858929151 +0000 UTC m=+1237.182636122"
Feb 16 23:06:37 crc kubenswrapper[4865]: I0216 23:06:37.462662 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 23:06:37 crc kubenswrapper[4865]: I0216 23:06:37.493846 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/99f27655-74d7-4d94-98d3-c9cceb3bdea0-run-httpd\") pod \"99f27655-74d7-4d94-98d3-c9cceb3bdea0\" (UID: \"99f27655-74d7-4d94-98d3-c9cceb3bdea0\") "
Feb 16 23:06:37 crc kubenswrapper[4865]: I0216 23:06:37.493958 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/99f27655-74d7-4d94-98d3-c9cceb3bdea0-scripts\") pod \"99f27655-74d7-4d94-98d3-c9cceb3bdea0\" (UID: \"99f27655-74d7-4d94-98d3-c9cceb3bdea0\") "
Feb 16 23:06:37 crc kubenswrapper[4865]: I0216 23:06:37.494050 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/99f27655-74d7-4d94-98d3-c9cceb3bdea0-log-httpd\") pod \"99f27655-74d7-4d94-98d3-c9cceb3bdea0\" (UID: \"99f27655-74d7-4d94-98d3-c9cceb3bdea0\") "
Feb 16 23:06:37 crc kubenswrapper[4865]: I0216 23:06:37.494176 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/99f27655-74d7-4d94-98d3-c9cceb3bdea0-sg-core-conf-yaml\") pod \"99f27655-74d7-4d94-98d3-c9cceb3bdea0\" (UID: \"99f27655-74d7-4d94-98d3-c9cceb3bdea0\") "
Feb 16 23:06:37 crc kubenswrapper[4865]: I0216 23:06:37.494218 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99f27655-74d7-4d94-98d3-c9cceb3bdea0-config-data\") pod \"99f27655-74d7-4d94-98d3-c9cceb3bdea0\" (UID: \"99f27655-74d7-4d94-98d3-c9cceb3bdea0\") "
Feb 16 23:06:37 crc kubenswrapper[4865]: I0216 23:06:37.494269 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99f27655-74d7-4d94-98d3-c9cceb3bdea0-combined-ca-bundle\") pod \"99f27655-74d7-4d94-98d3-c9cceb3bdea0\" (UID: \"99f27655-74d7-4d94-98d3-c9cceb3bdea0\") "
Feb 16 23:06:37 crc kubenswrapper[4865]: I0216 23:06:37.494376 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lftcr\" (UniqueName: \"kubernetes.io/projected/99f27655-74d7-4d94-98d3-c9cceb3bdea0-kube-api-access-lftcr\") pod \"99f27655-74d7-4d94-98d3-c9cceb3bdea0\" (UID: \"99f27655-74d7-4d94-98d3-c9cceb3bdea0\") "
Feb 16 23:06:37 crc kubenswrapper[4865]: I0216 23:06:37.495602 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99f27655-74d7-4d94-98d3-c9cceb3bdea0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "99f27655-74d7-4d94-98d3-c9cceb3bdea0" (UID: "99f27655-74d7-4d94-98d3-c9cceb3bdea0"). InnerVolumeSpecName "log-httpd".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:06:37 crc kubenswrapper[4865]: I0216 23:06:37.496330 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99f27655-74d7-4d94-98d3-c9cceb3bdea0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "99f27655-74d7-4d94-98d3-c9cceb3bdea0" (UID: "99f27655-74d7-4d94-98d3-c9cceb3bdea0"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:06:37 crc kubenswrapper[4865]: I0216 23:06:37.519334 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99f27655-74d7-4d94-98d3-c9cceb3bdea0-scripts" (OuterVolumeSpecName: "scripts") pod "99f27655-74d7-4d94-98d3-c9cceb3bdea0" (UID: "99f27655-74d7-4d94-98d3-c9cceb3bdea0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:06:37 crc kubenswrapper[4865]: I0216 23:06:37.535597 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99f27655-74d7-4d94-98d3-c9cceb3bdea0-kube-api-access-lftcr" (OuterVolumeSpecName: "kube-api-access-lftcr") pod "99f27655-74d7-4d94-98d3-c9cceb3bdea0" (UID: "99f27655-74d7-4d94-98d3-c9cceb3bdea0"). InnerVolumeSpecName "kube-api-access-lftcr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:06:37 crc kubenswrapper[4865]: I0216 23:06:37.573421 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99f27655-74d7-4d94-98d3-c9cceb3bdea0-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "99f27655-74d7-4d94-98d3-c9cceb3bdea0" (UID: "99f27655-74d7-4d94-98d3-c9cceb3bdea0"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:06:37 crc kubenswrapper[4865]: I0216 23:06:37.596454 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lftcr\" (UniqueName: \"kubernetes.io/projected/99f27655-74d7-4d94-98d3-c9cceb3bdea0-kube-api-access-lftcr\") on node \"crc\" DevicePath \"\"" Feb 16 23:06:37 crc kubenswrapper[4865]: I0216 23:06:37.596485 4865 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/99f27655-74d7-4d94-98d3-c9cceb3bdea0-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 23:06:37 crc kubenswrapper[4865]: I0216 23:06:37.596496 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/99f27655-74d7-4d94-98d3-c9cceb3bdea0-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 23:06:37 crc kubenswrapper[4865]: I0216 23:06:37.596505 4865 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/99f27655-74d7-4d94-98d3-c9cceb3bdea0-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 23:06:37 crc kubenswrapper[4865]: I0216 23:06:37.596513 4865 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/99f27655-74d7-4d94-98d3-c9cceb3bdea0-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 23:06:37 crc kubenswrapper[4865]: I0216 23:06:37.671866 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99f27655-74d7-4d94-98d3-c9cceb3bdea0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "99f27655-74d7-4d94-98d3-c9cceb3bdea0" (UID: "99f27655-74d7-4d94-98d3-c9cceb3bdea0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:06:37 crc kubenswrapper[4865]: I0216 23:06:37.692524 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99f27655-74d7-4d94-98d3-c9cceb3bdea0-config-data" (OuterVolumeSpecName: "config-data") pod "99f27655-74d7-4d94-98d3-c9cceb3bdea0" (UID: "99f27655-74d7-4d94-98d3-c9cceb3bdea0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:06:37 crc kubenswrapper[4865]: I0216 23:06:37.698792 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99f27655-74d7-4d94-98d3-c9cceb3bdea0-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 23:06:37 crc kubenswrapper[4865]: I0216 23:06:37.698833 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99f27655-74d7-4d94-98d3-c9cceb3bdea0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 23:06:37 crc kubenswrapper[4865]: I0216 23:06:37.829139 4865 generic.go:334] "Generic (PLEG): container finished" podID="99f27655-74d7-4d94-98d3-c9cceb3bdea0" containerID="97f262c7992a498c778745f65e224a21244c5f2450082e3622c4011fe2e61870" exitCode=0 Feb 16 23:06:37 crc kubenswrapper[4865]: I0216 23:06:37.829203 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"99f27655-74d7-4d94-98d3-c9cceb3bdea0","Type":"ContainerDied","Data":"97f262c7992a498c778745f65e224a21244c5f2450082e3622c4011fe2e61870"} Feb 16 23:06:37 crc kubenswrapper[4865]: I0216 23:06:37.829293 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"99f27655-74d7-4d94-98d3-c9cceb3bdea0","Type":"ContainerDied","Data":"223c216fcaea908c2c50657c6bd851d46c4f0a645e5e325fa72da2ad200ecaef"} Feb 16 23:06:37 crc kubenswrapper[4865]: I0216 23:06:37.829269 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 23:06:37 crc kubenswrapper[4865]: I0216 23:06:37.829320 4865 scope.go:117] "RemoveContainer" containerID="09d268779daead4c1ec4486b7cd6a97e34a49eaed7afac86a6e0ae7d0edc5658" Feb 16 23:06:37 crc kubenswrapper[4865]: I0216 23:06:37.869103 4865 scope.go:117] "RemoveContainer" containerID="03fc8fa199c2e9f8e5c8d57e0a3c698580a0e81a044174b479e9efac2ae7315e" Feb 16 23:06:37 crc kubenswrapper[4865]: I0216 23:06:37.891887 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 23:06:37 crc kubenswrapper[4865]: I0216 23:06:37.921328 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 23:06:37 crc kubenswrapper[4865]: I0216 23:06:37.930274 4865 scope.go:117] "RemoveContainer" containerID="97f262c7992a498c778745f65e224a21244c5f2450082e3622c4011fe2e61870" Feb 16 23:06:37 crc kubenswrapper[4865]: I0216 23:06:37.935896 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 23:06:37 crc kubenswrapper[4865]: E0216 23:06:37.936801 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99f27655-74d7-4d94-98d3-c9cceb3bdea0" containerName="ceilometer-central-agent" Feb 16 23:06:37 crc kubenswrapper[4865]: I0216 23:06:37.936822 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="99f27655-74d7-4d94-98d3-c9cceb3bdea0" containerName="ceilometer-central-agent" Feb 16 23:06:37 crc kubenswrapper[4865]: E0216 23:06:37.936861 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99f27655-74d7-4d94-98d3-c9cceb3bdea0" containerName="proxy-httpd" Feb 16 23:06:37 crc kubenswrapper[4865]: I0216 23:06:37.936872 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="99f27655-74d7-4d94-98d3-c9cceb3bdea0" containerName="proxy-httpd" Feb 16 23:06:37 crc kubenswrapper[4865]: E0216 23:06:37.936918 4865 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="99f27655-74d7-4d94-98d3-c9cceb3bdea0" containerName="ceilometer-notification-agent" Feb 16 23:06:37 crc kubenswrapper[4865]: I0216 23:06:37.936929 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="99f27655-74d7-4d94-98d3-c9cceb3bdea0" containerName="ceilometer-notification-agent" Feb 16 23:06:37 crc kubenswrapper[4865]: E0216 23:06:37.936955 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99f27655-74d7-4d94-98d3-c9cceb3bdea0" containerName="sg-core" Feb 16 23:06:37 crc kubenswrapper[4865]: I0216 23:06:37.936965 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="99f27655-74d7-4d94-98d3-c9cceb3bdea0" containerName="sg-core" Feb 16 23:06:37 crc kubenswrapper[4865]: I0216 23:06:37.937318 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="99f27655-74d7-4d94-98d3-c9cceb3bdea0" containerName="sg-core" Feb 16 23:06:37 crc kubenswrapper[4865]: I0216 23:06:37.937349 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="99f27655-74d7-4d94-98d3-c9cceb3bdea0" containerName="ceilometer-central-agent" Feb 16 23:06:37 crc kubenswrapper[4865]: I0216 23:06:37.937367 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="99f27655-74d7-4d94-98d3-c9cceb3bdea0" containerName="ceilometer-notification-agent" Feb 16 23:06:37 crc kubenswrapper[4865]: I0216 23:06:37.937386 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="99f27655-74d7-4d94-98d3-c9cceb3bdea0" containerName="proxy-httpd" Feb 16 23:06:37 crc kubenswrapper[4865]: I0216 23:06:37.940584 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 23:06:37 crc kubenswrapper[4865]: I0216 23:06:37.967103 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 16 23:06:37 crc kubenswrapper[4865]: I0216 23:06:37.967115 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 23:06:37 crc kubenswrapper[4865]: I0216 23:06:37.967244 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 23:06:37 crc kubenswrapper[4865]: I0216 23:06:37.995337 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 23:06:38 crc kubenswrapper[4865]: I0216 23:06:38.019388 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd5a8c90-3ad6-4b9a-a891-469023162a28-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fd5a8c90-3ad6-4b9a-a891-469023162a28\") " pod="openstack/ceilometer-0" Feb 16 23:06:38 crc kubenswrapper[4865]: I0216 23:06:38.019478 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd5a8c90-3ad6-4b9a-a891-469023162a28-scripts\") pod \"ceilometer-0\" (UID: \"fd5a8c90-3ad6-4b9a-a891-469023162a28\") " pod="openstack/ceilometer-0" Feb 16 23:06:38 crc kubenswrapper[4865]: I0216 23:06:38.019530 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fd5a8c90-3ad6-4b9a-a891-469023162a28-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fd5a8c90-3ad6-4b9a-a891-469023162a28\") " pod="openstack/ceilometer-0" Feb 16 23:06:38 crc kubenswrapper[4865]: I0216 23:06:38.019551 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd5a8c90-3ad6-4b9a-a891-469023162a28-log-httpd\") pod \"ceilometer-0\" (UID: \"fd5a8c90-3ad6-4b9a-a891-469023162a28\") " pod="openstack/ceilometer-0" Feb 16 23:06:38 crc kubenswrapper[4865]: I0216 23:06:38.019805 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd5a8c90-3ad6-4b9a-a891-469023162a28-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"fd5a8c90-3ad6-4b9a-a891-469023162a28\") " pod="openstack/ceilometer-0" Feb 16 23:06:38 crc kubenswrapper[4865]: I0216 23:06:38.019929 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd5a8c90-3ad6-4b9a-a891-469023162a28-run-httpd\") pod \"ceilometer-0\" (UID: \"fd5a8c90-3ad6-4b9a-a891-469023162a28\") " pod="openstack/ceilometer-0" Feb 16 23:06:38 crc kubenswrapper[4865]: I0216 23:06:38.019954 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgmsz\" (UniqueName: \"kubernetes.io/projected/fd5a8c90-3ad6-4b9a-a891-469023162a28-kube-api-access-zgmsz\") pod \"ceilometer-0\" (UID: \"fd5a8c90-3ad6-4b9a-a891-469023162a28\") " pod="openstack/ceilometer-0" Feb 16 23:06:38 crc kubenswrapper[4865]: I0216 23:06:38.020059 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd5a8c90-3ad6-4b9a-a891-469023162a28-config-data\") pod \"ceilometer-0\" (UID: \"fd5a8c90-3ad6-4b9a-a891-469023162a28\") " pod="openstack/ceilometer-0" Feb 16 23:06:38 crc kubenswrapper[4865]: I0216 23:06:38.020595 4865 scope.go:117] "RemoveContainer" containerID="ace1c431e6301c0d1070928fe2dc00e57dcf6db5df206f3a5e53f7e24ffcf650" Feb 16 23:06:38 crc kubenswrapper[4865]: I0216 23:06:38.041817 4865 scope.go:117] "RemoveContainer" 
containerID="09d268779daead4c1ec4486b7cd6a97e34a49eaed7afac86a6e0ae7d0edc5658" Feb 16 23:06:38 crc kubenswrapper[4865]: E0216 23:06:38.042584 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09d268779daead4c1ec4486b7cd6a97e34a49eaed7afac86a6e0ae7d0edc5658\": container with ID starting with 09d268779daead4c1ec4486b7cd6a97e34a49eaed7afac86a6e0ae7d0edc5658 not found: ID does not exist" containerID="09d268779daead4c1ec4486b7cd6a97e34a49eaed7afac86a6e0ae7d0edc5658" Feb 16 23:06:38 crc kubenswrapper[4865]: I0216 23:06:38.042639 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09d268779daead4c1ec4486b7cd6a97e34a49eaed7afac86a6e0ae7d0edc5658"} err="failed to get container status \"09d268779daead4c1ec4486b7cd6a97e34a49eaed7afac86a6e0ae7d0edc5658\": rpc error: code = NotFound desc = could not find container \"09d268779daead4c1ec4486b7cd6a97e34a49eaed7afac86a6e0ae7d0edc5658\": container with ID starting with 09d268779daead4c1ec4486b7cd6a97e34a49eaed7afac86a6e0ae7d0edc5658 not found: ID does not exist" Feb 16 23:06:38 crc kubenswrapper[4865]: I0216 23:06:38.042670 4865 scope.go:117] "RemoveContainer" containerID="03fc8fa199c2e9f8e5c8d57e0a3c698580a0e81a044174b479e9efac2ae7315e" Feb 16 23:06:38 crc kubenswrapper[4865]: E0216 23:06:38.043084 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03fc8fa199c2e9f8e5c8d57e0a3c698580a0e81a044174b479e9efac2ae7315e\": container with ID starting with 03fc8fa199c2e9f8e5c8d57e0a3c698580a0e81a044174b479e9efac2ae7315e not found: ID does not exist" containerID="03fc8fa199c2e9f8e5c8d57e0a3c698580a0e81a044174b479e9efac2ae7315e" Feb 16 23:06:38 crc kubenswrapper[4865]: I0216 23:06:38.043106 4865 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"03fc8fa199c2e9f8e5c8d57e0a3c698580a0e81a044174b479e9efac2ae7315e"} err="failed to get container status \"03fc8fa199c2e9f8e5c8d57e0a3c698580a0e81a044174b479e9efac2ae7315e\": rpc error: code = NotFound desc = could not find container \"03fc8fa199c2e9f8e5c8d57e0a3c698580a0e81a044174b479e9efac2ae7315e\": container with ID starting with 03fc8fa199c2e9f8e5c8d57e0a3c698580a0e81a044174b479e9efac2ae7315e not found: ID does not exist" Feb 16 23:06:38 crc kubenswrapper[4865]: I0216 23:06:38.043118 4865 scope.go:117] "RemoveContainer" containerID="97f262c7992a498c778745f65e224a21244c5f2450082e3622c4011fe2e61870" Feb 16 23:06:38 crc kubenswrapper[4865]: E0216 23:06:38.043326 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97f262c7992a498c778745f65e224a21244c5f2450082e3622c4011fe2e61870\": container with ID starting with 97f262c7992a498c778745f65e224a21244c5f2450082e3622c4011fe2e61870 not found: ID does not exist" containerID="97f262c7992a498c778745f65e224a21244c5f2450082e3622c4011fe2e61870" Feb 16 23:06:38 crc kubenswrapper[4865]: I0216 23:06:38.043345 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97f262c7992a498c778745f65e224a21244c5f2450082e3622c4011fe2e61870"} err="failed to get container status \"97f262c7992a498c778745f65e224a21244c5f2450082e3622c4011fe2e61870\": rpc error: code = NotFound desc = could not find container \"97f262c7992a498c778745f65e224a21244c5f2450082e3622c4011fe2e61870\": container with ID starting with 97f262c7992a498c778745f65e224a21244c5f2450082e3622c4011fe2e61870 not found: ID does not exist" Feb 16 23:06:38 crc kubenswrapper[4865]: I0216 23:06:38.043356 4865 scope.go:117] "RemoveContainer" containerID="ace1c431e6301c0d1070928fe2dc00e57dcf6db5df206f3a5e53f7e24ffcf650" Feb 16 23:06:38 crc kubenswrapper[4865]: E0216 23:06:38.043616 4865 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"ace1c431e6301c0d1070928fe2dc00e57dcf6db5df206f3a5e53f7e24ffcf650\": container with ID starting with ace1c431e6301c0d1070928fe2dc00e57dcf6db5df206f3a5e53f7e24ffcf650 not found: ID does not exist" containerID="ace1c431e6301c0d1070928fe2dc00e57dcf6db5df206f3a5e53f7e24ffcf650" Feb 16 23:06:38 crc kubenswrapper[4865]: I0216 23:06:38.043635 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ace1c431e6301c0d1070928fe2dc00e57dcf6db5df206f3a5e53f7e24ffcf650"} err="failed to get container status \"ace1c431e6301c0d1070928fe2dc00e57dcf6db5df206f3a5e53f7e24ffcf650\": rpc error: code = NotFound desc = could not find container \"ace1c431e6301c0d1070928fe2dc00e57dcf6db5df206f3a5e53f7e24ffcf650\": container with ID starting with ace1c431e6301c0d1070928fe2dc00e57dcf6db5df206f3a5e53f7e24ffcf650 not found: ID does not exist" Feb 16 23:06:38 crc kubenswrapper[4865]: I0216 23:06:38.120970 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd5a8c90-3ad6-4b9a-a891-469023162a28-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fd5a8c90-3ad6-4b9a-a891-469023162a28\") " pod="openstack/ceilometer-0" Feb 16 23:06:38 crc kubenswrapper[4865]: I0216 23:06:38.121056 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd5a8c90-3ad6-4b9a-a891-469023162a28-scripts\") pod \"ceilometer-0\" (UID: \"fd5a8c90-3ad6-4b9a-a891-469023162a28\") " pod="openstack/ceilometer-0" Feb 16 23:06:38 crc kubenswrapper[4865]: I0216 23:06:38.121096 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fd5a8c90-3ad6-4b9a-a891-469023162a28-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fd5a8c90-3ad6-4b9a-a891-469023162a28\") " 
pod="openstack/ceilometer-0" Feb 16 23:06:38 crc kubenswrapper[4865]: I0216 23:06:38.121125 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd5a8c90-3ad6-4b9a-a891-469023162a28-log-httpd\") pod \"ceilometer-0\" (UID: \"fd5a8c90-3ad6-4b9a-a891-469023162a28\") " pod="openstack/ceilometer-0" Feb 16 23:06:38 crc kubenswrapper[4865]: I0216 23:06:38.121182 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd5a8c90-3ad6-4b9a-a891-469023162a28-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"fd5a8c90-3ad6-4b9a-a891-469023162a28\") " pod="openstack/ceilometer-0" Feb 16 23:06:38 crc kubenswrapper[4865]: I0216 23:06:38.121212 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd5a8c90-3ad6-4b9a-a891-469023162a28-run-httpd\") pod \"ceilometer-0\" (UID: \"fd5a8c90-3ad6-4b9a-a891-469023162a28\") " pod="openstack/ceilometer-0" Feb 16 23:06:38 crc kubenswrapper[4865]: I0216 23:06:38.121229 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgmsz\" (UniqueName: \"kubernetes.io/projected/fd5a8c90-3ad6-4b9a-a891-469023162a28-kube-api-access-zgmsz\") pod \"ceilometer-0\" (UID: \"fd5a8c90-3ad6-4b9a-a891-469023162a28\") " pod="openstack/ceilometer-0" Feb 16 23:06:38 crc kubenswrapper[4865]: I0216 23:06:38.121258 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd5a8c90-3ad6-4b9a-a891-469023162a28-config-data\") pod \"ceilometer-0\" (UID: \"fd5a8c90-3ad6-4b9a-a891-469023162a28\") " pod="openstack/ceilometer-0" Feb 16 23:06:38 crc kubenswrapper[4865]: I0216 23:06:38.122054 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/fd5a8c90-3ad6-4b9a-a891-469023162a28-log-httpd\") pod \"ceilometer-0\" (UID: \"fd5a8c90-3ad6-4b9a-a891-469023162a28\") " pod="openstack/ceilometer-0" Feb 16 23:06:38 crc kubenswrapper[4865]: I0216 23:06:38.122450 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd5a8c90-3ad6-4b9a-a891-469023162a28-run-httpd\") pod \"ceilometer-0\" (UID: \"fd5a8c90-3ad6-4b9a-a891-469023162a28\") " pod="openstack/ceilometer-0" Feb 16 23:06:38 crc kubenswrapper[4865]: I0216 23:06:38.125295 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd5a8c90-3ad6-4b9a-a891-469023162a28-config-data\") pod \"ceilometer-0\" (UID: \"fd5a8c90-3ad6-4b9a-a891-469023162a28\") " pod="openstack/ceilometer-0" Feb 16 23:06:38 crc kubenswrapper[4865]: I0216 23:06:38.127107 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd5a8c90-3ad6-4b9a-a891-469023162a28-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"fd5a8c90-3ad6-4b9a-a891-469023162a28\") " pod="openstack/ceilometer-0" Feb 16 23:06:38 crc kubenswrapper[4865]: I0216 23:06:38.127195 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd5a8c90-3ad6-4b9a-a891-469023162a28-scripts\") pod \"ceilometer-0\" (UID: \"fd5a8c90-3ad6-4b9a-a891-469023162a28\") " pod="openstack/ceilometer-0" Feb 16 23:06:38 crc kubenswrapper[4865]: I0216 23:06:38.138308 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fd5a8c90-3ad6-4b9a-a891-469023162a28-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fd5a8c90-3ad6-4b9a-a891-469023162a28\") " pod="openstack/ceilometer-0" Feb 16 23:06:38 crc kubenswrapper[4865]: I0216 23:06:38.138728 4865 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-zgmsz\" (UniqueName: \"kubernetes.io/projected/fd5a8c90-3ad6-4b9a-a891-469023162a28-kube-api-access-zgmsz\") pod \"ceilometer-0\" (UID: \"fd5a8c90-3ad6-4b9a-a891-469023162a28\") " pod="openstack/ceilometer-0" Feb 16 23:06:38 crc kubenswrapper[4865]: I0216 23:06:38.145878 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd5a8c90-3ad6-4b9a-a891-469023162a28-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fd5a8c90-3ad6-4b9a-a891-469023162a28\") " pod="openstack/ceilometer-0" Feb 16 23:06:38 crc kubenswrapper[4865]: I0216 23:06:38.299560 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 23:06:38 crc kubenswrapper[4865]: I0216 23:06:38.430266 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99f27655-74d7-4d94-98d3-c9cceb3bdea0" path="/var/lib/kubelet/pods/99f27655-74d7-4d94-98d3-c9cceb3bdea0/volumes" Feb 16 23:06:38 crc kubenswrapper[4865]: W0216 23:06:38.770432 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd5a8c90_3ad6_4b9a_a891_469023162a28.slice/crio-882cca687f04a5acc787da7eb6c4f2eda6bf5041e52e31cf3d8cb44e94738479 WatchSource:0}: Error finding container 882cca687f04a5acc787da7eb6c4f2eda6bf5041e52e31cf3d8cb44e94738479: Status 404 returned error can't find the container with id 882cca687f04a5acc787da7eb6c4f2eda6bf5041e52e31cf3d8cb44e94738479 Feb 16 23:06:38 crc kubenswrapper[4865]: I0216 23:06:38.782567 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 23:06:38 crc kubenswrapper[4865]: I0216 23:06:38.845395 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"fd5a8c90-3ad6-4b9a-a891-469023162a28","Type":"ContainerStarted","Data":"882cca687f04a5acc787da7eb6c4f2eda6bf5041e52e31cf3d8cb44e94738479"} Feb 16 23:06:39 crc kubenswrapper[4865]: I0216 23:06:39.857946 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd5a8c90-3ad6-4b9a-a891-469023162a28","Type":"ContainerStarted","Data":"589ab08bfbf2d0844d89e34e857a05ac98eb51eb2c5357ffffb7988d388323cd"} Feb 16 23:06:40 crc kubenswrapper[4865]: I0216 23:06:40.167650 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 16 23:06:40 crc kubenswrapper[4865]: I0216 23:06:40.881075 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd5a8c90-3ad6-4b9a-a891-469023162a28","Type":"ContainerStarted","Data":"cd33a0f31ddda67dd373a67d82c02f832c6e05632540a5e9c67dca27c3818156"} Feb 16 23:06:41 crc kubenswrapper[4865]: I0216 23:06:41.093523 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 16 23:06:41 crc kubenswrapper[4865]: I0216 23:06:41.093566 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 16 23:06:41 crc kubenswrapper[4865]: I0216 23:06:41.891154 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd5a8c90-3ad6-4b9a-a891-469023162a28","Type":"ContainerStarted","Data":"fb14653699cc60300f68d210cce2a8d5940ce5a06ecafbd966599b8000717eec"} Feb 16 23:06:42 crc kubenswrapper[4865]: I0216 23:06:42.109478 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="82c0c342-0360-464e-8f9c-f1cfd619ea76" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.195:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 23:06:42 crc kubenswrapper[4865]: I0216 23:06:42.109487 4865 
prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="82c0c342-0360-464e-8f9c-f1cfd619ea76" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.195:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 23:06:43 crc kubenswrapper[4865]: I0216 23:06:43.917798 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd5a8c90-3ad6-4b9a-a891-469023162a28","Type":"ContainerStarted","Data":"6950f17a24e6b5e2dfec7bf437f127e41c438ebe455c5d8f0f579a52219e2947"} Feb 16 23:06:43 crc kubenswrapper[4865]: I0216 23:06:43.918595 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 23:06:43 crc kubenswrapper[4865]: I0216 23:06:43.952850 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.904089603 podStartE2EDuration="6.952826766s" podCreationTimestamp="2026-02-16 23:06:37 +0000 UTC" firstStartedPulling="2026-02-16 23:06:38.774576292 +0000 UTC m=+1239.098283263" lastFinishedPulling="2026-02-16 23:06:42.823313455 +0000 UTC m=+1243.147020426" observedRunningTime="2026-02-16 23:06:43.951538289 +0000 UTC m=+1244.275245290" watchObservedRunningTime="2026-02-16 23:06:43.952826766 +0000 UTC m=+1244.276533757" Feb 16 23:06:44 crc kubenswrapper[4865]: I0216 23:06:44.460312 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 16 23:06:45 crc kubenswrapper[4865]: I0216 23:06:45.168454 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 16 23:06:45 crc kubenswrapper[4865]: I0216 23:06:45.202420 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 16 23:06:45 crc kubenswrapper[4865]: I0216 23:06:45.442463 4865 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 23:06:45 crc kubenswrapper[4865]: I0216 23:06:45.443020 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 23:06:45 crc kubenswrapper[4865]: I0216 23:06:45.998444 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 16 23:06:46 crc kubenswrapper[4865]: I0216 23:06:46.525499 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="af7a71ed-a637-434a-bbb3-10b4f90391eb" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.198:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 23:06:46 crc kubenswrapper[4865]: I0216 23:06:46.525507 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="af7a71ed-a637-434a-bbb3-10b4f90391eb" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.198:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 23:06:51 crc kubenswrapper[4865]: I0216 23:06:51.101634 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 16 23:06:51 crc kubenswrapper[4865]: I0216 23:06:51.113055 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 16 23:06:51 crc kubenswrapper[4865]: I0216 23:06:51.115047 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 16 23:06:52 crc kubenswrapper[4865]: I0216 23:06:52.046251 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 16 23:06:52 crc kubenswrapper[4865]: I0216 23:06:52.950665 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 23:06:52 crc kubenswrapper[4865]: I0216 23:06:52.995470 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1aec611-46b0-46be-b951-fba1e8d2f282-combined-ca-bundle\") pod \"b1aec611-46b0-46be-b951-fba1e8d2f282\" (UID: \"b1aec611-46b0-46be-b951-fba1e8d2f282\") " Feb 16 23:06:52 crc kubenswrapper[4865]: I0216 23:06:52.995657 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1aec611-46b0-46be-b951-fba1e8d2f282-config-data\") pod \"b1aec611-46b0-46be-b951-fba1e8d2f282\" (UID: \"b1aec611-46b0-46be-b951-fba1e8d2f282\") " Feb 16 23:06:52 crc kubenswrapper[4865]: I0216 23:06:52.995778 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6lzs\" (UniqueName: \"kubernetes.io/projected/b1aec611-46b0-46be-b951-fba1e8d2f282-kube-api-access-k6lzs\") pod \"b1aec611-46b0-46be-b951-fba1e8d2f282\" (UID: \"b1aec611-46b0-46be-b951-fba1e8d2f282\") " Feb 16 23:06:53 crc kubenswrapper[4865]: I0216 23:06:53.002653 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1aec611-46b0-46be-b951-fba1e8d2f282-kube-api-access-k6lzs" (OuterVolumeSpecName: "kube-api-access-k6lzs") pod "b1aec611-46b0-46be-b951-fba1e8d2f282" (UID: "b1aec611-46b0-46be-b951-fba1e8d2f282"). InnerVolumeSpecName "kube-api-access-k6lzs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:06:53 crc kubenswrapper[4865]: I0216 23:06:53.036179 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1aec611-46b0-46be-b951-fba1e8d2f282-config-data" (OuterVolumeSpecName: "config-data") pod "b1aec611-46b0-46be-b951-fba1e8d2f282" (UID: "b1aec611-46b0-46be-b951-fba1e8d2f282"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:06:53 crc kubenswrapper[4865]: I0216 23:06:53.058096 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1aec611-46b0-46be-b951-fba1e8d2f282-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b1aec611-46b0-46be-b951-fba1e8d2f282" (UID: "b1aec611-46b0-46be-b951-fba1e8d2f282"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:06:53 crc kubenswrapper[4865]: I0216 23:06:53.061262 4865 generic.go:334] "Generic (PLEG): container finished" podID="b1aec611-46b0-46be-b951-fba1e8d2f282" containerID="bdecadcffe4ba7fb7c9f6dc96631226b5e2f90f8fa02df2b6d81ce7821b8d9e1" exitCode=137 Feb 16 23:06:53 crc kubenswrapper[4865]: I0216 23:06:53.061349 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 23:06:53 crc kubenswrapper[4865]: I0216 23:06:53.061440 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b1aec611-46b0-46be-b951-fba1e8d2f282","Type":"ContainerDied","Data":"bdecadcffe4ba7fb7c9f6dc96631226b5e2f90f8fa02df2b6d81ce7821b8d9e1"} Feb 16 23:06:53 crc kubenswrapper[4865]: I0216 23:06:53.061478 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b1aec611-46b0-46be-b951-fba1e8d2f282","Type":"ContainerDied","Data":"b447c91230d164371381b42b92f9967786503db19f510fb7a5582fb27fc23a9d"} Feb 16 23:06:53 crc kubenswrapper[4865]: I0216 23:06:53.061500 4865 scope.go:117] "RemoveContainer" containerID="bdecadcffe4ba7fb7c9f6dc96631226b5e2f90f8fa02df2b6d81ce7821b8d9e1" Feb 16 23:06:53 crc kubenswrapper[4865]: I0216 23:06:53.098712 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1aec611-46b0-46be-b951-fba1e8d2f282-config-data\") on node \"crc\" DevicePath \"\"" Feb 
16 23:06:53 crc kubenswrapper[4865]: I0216 23:06:53.098746 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6lzs\" (UniqueName: \"kubernetes.io/projected/b1aec611-46b0-46be-b951-fba1e8d2f282-kube-api-access-k6lzs\") on node \"crc\" DevicePath \"\"" Feb 16 23:06:53 crc kubenswrapper[4865]: I0216 23:06:53.098757 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1aec611-46b0-46be-b951-fba1e8d2f282-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 23:06:53 crc kubenswrapper[4865]: I0216 23:06:53.124472 4865 scope.go:117] "RemoveContainer" containerID="bdecadcffe4ba7fb7c9f6dc96631226b5e2f90f8fa02df2b6d81ce7821b8d9e1" Feb 16 23:06:53 crc kubenswrapper[4865]: I0216 23:06:53.125017 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 23:06:53 crc kubenswrapper[4865]: E0216 23:06:53.125330 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bdecadcffe4ba7fb7c9f6dc96631226b5e2f90f8fa02df2b6d81ce7821b8d9e1\": container with ID starting with bdecadcffe4ba7fb7c9f6dc96631226b5e2f90f8fa02df2b6d81ce7821b8d9e1 not found: ID does not exist" containerID="bdecadcffe4ba7fb7c9f6dc96631226b5e2f90f8fa02df2b6d81ce7821b8d9e1" Feb 16 23:06:53 crc kubenswrapper[4865]: I0216 23:06:53.125366 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdecadcffe4ba7fb7c9f6dc96631226b5e2f90f8fa02df2b6d81ce7821b8d9e1"} err="failed to get container status \"bdecadcffe4ba7fb7c9f6dc96631226b5e2f90f8fa02df2b6d81ce7821b8d9e1\": rpc error: code = NotFound desc = could not find container \"bdecadcffe4ba7fb7c9f6dc96631226b5e2f90f8fa02df2b6d81ce7821b8d9e1\": container with ID starting with bdecadcffe4ba7fb7c9f6dc96631226b5e2f90f8fa02df2b6d81ce7821b8d9e1 not found: ID does not exist" Feb 16 23:06:53 crc kubenswrapper[4865]: I0216 
23:06:53.144187 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 23:06:53 crc kubenswrapper[4865]: I0216 23:06:53.156663 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 23:06:53 crc kubenswrapper[4865]: E0216 23:06:53.163536 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1aec611-46b0-46be-b951-fba1e8d2f282" containerName="nova-cell1-novncproxy-novncproxy" Feb 16 23:06:53 crc kubenswrapper[4865]: I0216 23:06:53.163584 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1aec611-46b0-46be-b951-fba1e8d2f282" containerName="nova-cell1-novncproxy-novncproxy" Feb 16 23:06:53 crc kubenswrapper[4865]: I0216 23:06:53.164199 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1aec611-46b0-46be-b951-fba1e8d2f282" containerName="nova-cell1-novncproxy-novncproxy" Feb 16 23:06:53 crc kubenswrapper[4865]: I0216 23:06:53.165722 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 23:06:53 crc kubenswrapper[4865]: I0216 23:06:53.168755 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 16 23:06:53 crc kubenswrapper[4865]: I0216 23:06:53.169016 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 16 23:06:53 crc kubenswrapper[4865]: I0216 23:06:53.169694 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 23:06:53 crc kubenswrapper[4865]: I0216 23:06:53.170289 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 16 23:06:53 crc kubenswrapper[4865]: I0216 23:06:53.200686 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae0a9df0-5b8e-4ea5-a4db-25ae964f56f6-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ae0a9df0-5b8e-4ea5-a4db-25ae964f56f6\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 23:06:53 crc kubenswrapper[4865]: I0216 23:06:53.200892 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae0a9df0-5b8e-4ea5-a4db-25ae964f56f6-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ae0a9df0-5b8e-4ea5-a4db-25ae964f56f6\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 23:06:53 crc kubenswrapper[4865]: I0216 23:06:53.201101 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae0a9df0-5b8e-4ea5-a4db-25ae964f56f6-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ae0a9df0-5b8e-4ea5-a4db-25ae964f56f6\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 23:06:53 crc 
kubenswrapper[4865]: I0216 23:06:53.201178 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbm6j\" (UniqueName: \"kubernetes.io/projected/ae0a9df0-5b8e-4ea5-a4db-25ae964f56f6-kube-api-access-xbm6j\") pod \"nova-cell1-novncproxy-0\" (UID: \"ae0a9df0-5b8e-4ea5-a4db-25ae964f56f6\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 23:06:53 crc kubenswrapper[4865]: I0216 23:06:53.201245 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae0a9df0-5b8e-4ea5-a4db-25ae964f56f6-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ae0a9df0-5b8e-4ea5-a4db-25ae964f56f6\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 23:06:53 crc kubenswrapper[4865]: I0216 23:06:53.302895 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae0a9df0-5b8e-4ea5-a4db-25ae964f56f6-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ae0a9df0-5b8e-4ea5-a4db-25ae964f56f6\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 23:06:53 crc kubenswrapper[4865]: I0216 23:06:53.303428 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae0a9df0-5b8e-4ea5-a4db-25ae964f56f6-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ae0a9df0-5b8e-4ea5-a4db-25ae964f56f6\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 23:06:53 crc kubenswrapper[4865]: I0216 23:06:53.303543 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbm6j\" (UniqueName: \"kubernetes.io/projected/ae0a9df0-5b8e-4ea5-a4db-25ae964f56f6-kube-api-access-xbm6j\") pod \"nova-cell1-novncproxy-0\" (UID: \"ae0a9df0-5b8e-4ea5-a4db-25ae964f56f6\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 23:06:53 crc kubenswrapper[4865]: 
I0216 23:06:53.303981 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae0a9df0-5b8e-4ea5-a4db-25ae964f56f6-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ae0a9df0-5b8e-4ea5-a4db-25ae964f56f6\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 23:06:53 crc kubenswrapper[4865]: I0216 23:06:53.304200 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae0a9df0-5b8e-4ea5-a4db-25ae964f56f6-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ae0a9df0-5b8e-4ea5-a4db-25ae964f56f6\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 23:06:53 crc kubenswrapper[4865]: I0216 23:06:53.310061 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae0a9df0-5b8e-4ea5-a4db-25ae964f56f6-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ae0a9df0-5b8e-4ea5-a4db-25ae964f56f6\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 23:06:53 crc kubenswrapper[4865]: I0216 23:06:53.310891 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae0a9df0-5b8e-4ea5-a4db-25ae964f56f6-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ae0a9df0-5b8e-4ea5-a4db-25ae964f56f6\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 23:06:53 crc kubenswrapper[4865]: I0216 23:06:53.311466 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae0a9df0-5b8e-4ea5-a4db-25ae964f56f6-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ae0a9df0-5b8e-4ea5-a4db-25ae964f56f6\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 23:06:53 crc kubenswrapper[4865]: I0216 23:06:53.311854 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae0a9df0-5b8e-4ea5-a4db-25ae964f56f6-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ae0a9df0-5b8e-4ea5-a4db-25ae964f56f6\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 23:06:53 crc kubenswrapper[4865]: I0216 23:06:53.320369 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbm6j\" (UniqueName: \"kubernetes.io/projected/ae0a9df0-5b8e-4ea5-a4db-25ae964f56f6-kube-api-access-xbm6j\") pod \"nova-cell1-novncproxy-0\" (UID: \"ae0a9df0-5b8e-4ea5-a4db-25ae964f56f6\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 23:06:53 crc kubenswrapper[4865]: I0216 23:06:53.486383 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 23:06:53 crc kubenswrapper[4865]: W0216 23:06:53.803679 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae0a9df0_5b8e_4ea5_a4db_25ae964f56f6.slice/crio-b926d48414049afe0a7a9bb840bb0f4fd2abb8301bd30d791e5793201dd10dbe WatchSource:0}: Error finding container b926d48414049afe0a7a9bb840bb0f4fd2abb8301bd30d791e5793201dd10dbe: Status 404 returned error can't find the container with id b926d48414049afe0a7a9bb840bb0f4fd2abb8301bd30d791e5793201dd10dbe Feb 16 23:06:53 crc kubenswrapper[4865]: I0216 23:06:53.806141 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 23:06:54 crc kubenswrapper[4865]: I0216 23:06:54.093294 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ae0a9df0-5b8e-4ea5-a4db-25ae964f56f6","Type":"ContainerStarted","Data":"b926d48414049afe0a7a9bb840bb0f4fd2abb8301bd30d791e5793201dd10dbe"} Feb 16 23:06:54 crc kubenswrapper[4865]: I0216 23:06:54.430784 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1aec611-46b0-46be-b951-fba1e8d2f282" 
path="/var/lib/kubelet/pods/b1aec611-46b0-46be-b951-fba1e8d2f282/volumes" Feb 16 23:06:55 crc kubenswrapper[4865]: I0216 23:06:55.106774 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ae0a9df0-5b8e-4ea5-a4db-25ae964f56f6","Type":"ContainerStarted","Data":"683a1548dad0105354e32a9160ada9fbc5ac4e16f4e94e135c4197e643044d70"} Feb 16 23:06:55 crc kubenswrapper[4865]: I0216 23:06:55.132212 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.132179466 podStartE2EDuration="2.132179466s" podCreationTimestamp="2026-02-16 23:06:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 23:06:55.126911146 +0000 UTC m=+1255.450618117" watchObservedRunningTime="2026-02-16 23:06:55.132179466 +0000 UTC m=+1255.455886437" Feb 16 23:06:55 crc kubenswrapper[4865]: I0216 23:06:55.449436 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 16 23:06:55 crc kubenswrapper[4865]: I0216 23:06:55.449987 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 16 23:06:55 crc kubenswrapper[4865]: I0216 23:06:55.452086 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 16 23:06:55 crc kubenswrapper[4865]: I0216 23:06:55.474917 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 16 23:06:56 crc kubenswrapper[4865]: I0216 23:06:56.117723 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 16 23:06:56 crc kubenswrapper[4865]: I0216 23:06:56.123769 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 16 23:06:56 crc kubenswrapper[4865]: I0216 23:06:56.381800 4865 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-82l9f"] Feb 16 23:06:56 crc kubenswrapper[4865]: I0216 23:06:56.383762 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-82l9f" Feb 16 23:06:56 crc kubenswrapper[4865]: I0216 23:06:56.407314 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-82l9f"] Feb 16 23:06:56 crc kubenswrapper[4865]: I0216 23:06:56.490482 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c2f70611-ae1f-45d0-9688-7120ee736268-ovsdbserver-nb\") pod \"dnsmasq-dns-59cf4bdb65-82l9f\" (UID: \"c2f70611-ae1f-45d0-9688-7120ee736268\") " pod="openstack/dnsmasq-dns-59cf4bdb65-82l9f" Feb 16 23:06:56 crc kubenswrapper[4865]: I0216 23:06:56.490533 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2f70611-ae1f-45d0-9688-7120ee736268-config\") pod \"dnsmasq-dns-59cf4bdb65-82l9f\" (UID: \"c2f70611-ae1f-45d0-9688-7120ee736268\") " pod="openstack/dnsmasq-dns-59cf4bdb65-82l9f" Feb 16 23:06:56 crc kubenswrapper[4865]: I0216 23:06:56.490588 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c2f70611-ae1f-45d0-9688-7120ee736268-dns-swift-storage-0\") pod \"dnsmasq-dns-59cf4bdb65-82l9f\" (UID: \"c2f70611-ae1f-45d0-9688-7120ee736268\") " pod="openstack/dnsmasq-dns-59cf4bdb65-82l9f" Feb 16 23:06:56 crc kubenswrapper[4865]: I0216 23:06:56.490997 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2f70611-ae1f-45d0-9688-7120ee736268-dns-svc\") pod \"dnsmasq-dns-59cf4bdb65-82l9f\" (UID: 
\"c2f70611-ae1f-45d0-9688-7120ee736268\") " pod="openstack/dnsmasq-dns-59cf4bdb65-82l9f" Feb 16 23:06:56 crc kubenswrapper[4865]: I0216 23:06:56.491401 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9n7f\" (UniqueName: \"kubernetes.io/projected/c2f70611-ae1f-45d0-9688-7120ee736268-kube-api-access-p9n7f\") pod \"dnsmasq-dns-59cf4bdb65-82l9f\" (UID: \"c2f70611-ae1f-45d0-9688-7120ee736268\") " pod="openstack/dnsmasq-dns-59cf4bdb65-82l9f" Feb 16 23:06:56 crc kubenswrapper[4865]: I0216 23:06:56.491632 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c2f70611-ae1f-45d0-9688-7120ee736268-ovsdbserver-sb\") pod \"dnsmasq-dns-59cf4bdb65-82l9f\" (UID: \"c2f70611-ae1f-45d0-9688-7120ee736268\") " pod="openstack/dnsmasq-dns-59cf4bdb65-82l9f" Feb 16 23:06:56 crc kubenswrapper[4865]: I0216 23:06:56.593682 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9n7f\" (UniqueName: \"kubernetes.io/projected/c2f70611-ae1f-45d0-9688-7120ee736268-kube-api-access-p9n7f\") pod \"dnsmasq-dns-59cf4bdb65-82l9f\" (UID: \"c2f70611-ae1f-45d0-9688-7120ee736268\") " pod="openstack/dnsmasq-dns-59cf4bdb65-82l9f" Feb 16 23:06:56 crc kubenswrapper[4865]: I0216 23:06:56.593805 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c2f70611-ae1f-45d0-9688-7120ee736268-ovsdbserver-sb\") pod \"dnsmasq-dns-59cf4bdb65-82l9f\" (UID: \"c2f70611-ae1f-45d0-9688-7120ee736268\") " pod="openstack/dnsmasq-dns-59cf4bdb65-82l9f" Feb 16 23:06:56 crc kubenswrapper[4865]: I0216 23:06:56.593882 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c2f70611-ae1f-45d0-9688-7120ee736268-ovsdbserver-nb\") pod 
\"dnsmasq-dns-59cf4bdb65-82l9f\" (UID: \"c2f70611-ae1f-45d0-9688-7120ee736268\") " pod="openstack/dnsmasq-dns-59cf4bdb65-82l9f" Feb 16 23:06:56 crc kubenswrapper[4865]: I0216 23:06:56.593926 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2f70611-ae1f-45d0-9688-7120ee736268-config\") pod \"dnsmasq-dns-59cf4bdb65-82l9f\" (UID: \"c2f70611-ae1f-45d0-9688-7120ee736268\") " pod="openstack/dnsmasq-dns-59cf4bdb65-82l9f" Feb 16 23:06:56 crc kubenswrapper[4865]: I0216 23:06:56.593985 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c2f70611-ae1f-45d0-9688-7120ee736268-dns-swift-storage-0\") pod \"dnsmasq-dns-59cf4bdb65-82l9f\" (UID: \"c2f70611-ae1f-45d0-9688-7120ee736268\") " pod="openstack/dnsmasq-dns-59cf4bdb65-82l9f" Feb 16 23:06:56 crc kubenswrapper[4865]: I0216 23:06:56.594045 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2f70611-ae1f-45d0-9688-7120ee736268-dns-svc\") pod \"dnsmasq-dns-59cf4bdb65-82l9f\" (UID: \"c2f70611-ae1f-45d0-9688-7120ee736268\") " pod="openstack/dnsmasq-dns-59cf4bdb65-82l9f" Feb 16 23:06:56 crc kubenswrapper[4865]: I0216 23:06:56.594992 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2f70611-ae1f-45d0-9688-7120ee736268-config\") pod \"dnsmasq-dns-59cf4bdb65-82l9f\" (UID: \"c2f70611-ae1f-45d0-9688-7120ee736268\") " pod="openstack/dnsmasq-dns-59cf4bdb65-82l9f" Feb 16 23:06:56 crc kubenswrapper[4865]: I0216 23:06:56.595018 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c2f70611-ae1f-45d0-9688-7120ee736268-ovsdbserver-sb\") pod \"dnsmasq-dns-59cf4bdb65-82l9f\" (UID: \"c2f70611-ae1f-45d0-9688-7120ee736268\") " 
pod="openstack/dnsmasq-dns-59cf4bdb65-82l9f" Feb 16 23:06:56 crc kubenswrapper[4865]: I0216 23:06:56.595023 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c2f70611-ae1f-45d0-9688-7120ee736268-ovsdbserver-nb\") pod \"dnsmasq-dns-59cf4bdb65-82l9f\" (UID: \"c2f70611-ae1f-45d0-9688-7120ee736268\") " pod="openstack/dnsmasq-dns-59cf4bdb65-82l9f" Feb 16 23:06:56 crc kubenswrapper[4865]: I0216 23:06:56.595145 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c2f70611-ae1f-45d0-9688-7120ee736268-dns-swift-storage-0\") pod \"dnsmasq-dns-59cf4bdb65-82l9f\" (UID: \"c2f70611-ae1f-45d0-9688-7120ee736268\") " pod="openstack/dnsmasq-dns-59cf4bdb65-82l9f" Feb 16 23:06:56 crc kubenswrapper[4865]: I0216 23:06:56.595240 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2f70611-ae1f-45d0-9688-7120ee736268-dns-svc\") pod \"dnsmasq-dns-59cf4bdb65-82l9f\" (UID: \"c2f70611-ae1f-45d0-9688-7120ee736268\") " pod="openstack/dnsmasq-dns-59cf4bdb65-82l9f" Feb 16 23:06:56 crc kubenswrapper[4865]: I0216 23:06:56.616767 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9n7f\" (UniqueName: \"kubernetes.io/projected/c2f70611-ae1f-45d0-9688-7120ee736268-kube-api-access-p9n7f\") pod \"dnsmasq-dns-59cf4bdb65-82l9f\" (UID: \"c2f70611-ae1f-45d0-9688-7120ee736268\") " pod="openstack/dnsmasq-dns-59cf4bdb65-82l9f" Feb 16 23:06:56 crc kubenswrapper[4865]: I0216 23:06:56.721380 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-82l9f" Feb 16 23:06:57 crc kubenswrapper[4865]: I0216 23:06:57.374485 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-82l9f"] Feb 16 23:06:57 crc kubenswrapper[4865]: W0216 23:06:57.382541 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc2f70611_ae1f_45d0_9688_7120ee736268.slice/crio-6b88b62914d488154a5042af390131fd22bbafe28ae552ec529e0865d0e22d11 WatchSource:0}: Error finding container 6b88b62914d488154a5042af390131fd22bbafe28ae552ec529e0865d0e22d11: Status 404 returned error can't find the container with id 6b88b62914d488154a5042af390131fd22bbafe28ae552ec529e0865d0e22d11 Feb 16 23:06:58 crc kubenswrapper[4865]: I0216 23:06:58.138811 4865 generic.go:334] "Generic (PLEG): container finished" podID="c2f70611-ae1f-45d0-9688-7120ee736268" containerID="36e8e60f392e9ff3acf5bf5c0721187c95170fd4ff07e84bf3a07e1475dca9a0" exitCode=0 Feb 16 23:06:58 crc kubenswrapper[4865]: I0216 23:06:58.138914 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-82l9f" event={"ID":"c2f70611-ae1f-45d0-9688-7120ee736268","Type":"ContainerDied","Data":"36e8e60f392e9ff3acf5bf5c0721187c95170fd4ff07e84bf3a07e1475dca9a0"} Feb 16 23:06:58 crc kubenswrapper[4865]: I0216 23:06:58.139462 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-82l9f" event={"ID":"c2f70611-ae1f-45d0-9688-7120ee736268","Type":"ContainerStarted","Data":"6b88b62914d488154a5042af390131fd22bbafe28ae552ec529e0865d0e22d11"} Feb 16 23:06:58 crc kubenswrapper[4865]: I0216 23:06:58.486994 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 16 23:06:58 crc kubenswrapper[4865]: I0216 23:06:58.675161 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 23:06:58 crc 
kubenswrapper[4865]: I0216 23:06:58.675921 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fd5a8c90-3ad6-4b9a-a891-469023162a28" containerName="sg-core" containerID="cri-o://fb14653699cc60300f68d210cce2a8d5940ce5a06ecafbd966599b8000717eec" gracePeriod=30 Feb 16 23:06:58 crc kubenswrapper[4865]: I0216 23:06:58.676070 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fd5a8c90-3ad6-4b9a-a891-469023162a28" containerName="proxy-httpd" containerID="cri-o://6950f17a24e6b5e2dfec7bf437f127e41c438ebe455c5d8f0f579a52219e2947" gracePeriod=30 Feb 16 23:06:58 crc kubenswrapper[4865]: I0216 23:06:58.676150 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fd5a8c90-3ad6-4b9a-a891-469023162a28" containerName="ceilometer-notification-agent" containerID="cri-o://cd33a0f31ddda67dd373a67d82c02f832c6e05632540a5e9c67dca27c3818156" gracePeriod=30 Feb 16 23:06:58 crc kubenswrapper[4865]: I0216 23:06:58.677107 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fd5a8c90-3ad6-4b9a-a891-469023162a28" containerName="ceilometer-central-agent" containerID="cri-o://589ab08bfbf2d0844d89e34e857a05ac98eb51eb2c5357ffffb7988d388323cd" gracePeriod=30 Feb 16 23:06:58 crc kubenswrapper[4865]: I0216 23:06:58.699810 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="fd5a8c90-3ad6-4b9a-a891-469023162a28" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 16 23:06:58 crc kubenswrapper[4865]: I0216 23:06:58.967030 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 23:06:59 crc kubenswrapper[4865]: I0216 23:06:59.149316 4865 generic.go:334] "Generic (PLEG): container finished" podID="fd5a8c90-3ad6-4b9a-a891-469023162a28" 
containerID="6950f17a24e6b5e2dfec7bf437f127e41c438ebe455c5d8f0f579a52219e2947" exitCode=0 Feb 16 23:06:59 crc kubenswrapper[4865]: I0216 23:06:59.149343 4865 generic.go:334] "Generic (PLEG): container finished" podID="fd5a8c90-3ad6-4b9a-a891-469023162a28" containerID="fb14653699cc60300f68d210cce2a8d5940ce5a06ecafbd966599b8000717eec" exitCode=2 Feb 16 23:06:59 crc kubenswrapper[4865]: I0216 23:06:59.149383 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd5a8c90-3ad6-4b9a-a891-469023162a28","Type":"ContainerDied","Data":"6950f17a24e6b5e2dfec7bf437f127e41c438ebe455c5d8f0f579a52219e2947"} Feb 16 23:06:59 crc kubenswrapper[4865]: I0216 23:06:59.149411 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd5a8c90-3ad6-4b9a-a891-469023162a28","Type":"ContainerDied","Data":"fb14653699cc60300f68d210cce2a8d5940ce5a06ecafbd966599b8000717eec"} Feb 16 23:06:59 crc kubenswrapper[4865]: I0216 23:06:59.150815 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="af7a71ed-a637-434a-bbb3-10b4f90391eb" containerName="nova-api-log" containerID="cri-o://a152bb7dbad2265361f987c683059e62b534cc71ae8d9062ec8391a02fb37ff2" gracePeriod=30 Feb 16 23:06:59 crc kubenswrapper[4865]: I0216 23:06:59.152044 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-82l9f" event={"ID":"c2f70611-ae1f-45d0-9688-7120ee736268","Type":"ContainerStarted","Data":"fcd67f8a8266b188ad8a72e2419b0ede349217a3aed181d6a40e7d511dba5f66"} Feb 16 23:06:59 crc kubenswrapper[4865]: I0216 23:06:59.152078 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-59cf4bdb65-82l9f" Feb 16 23:06:59 crc kubenswrapper[4865]: I0216 23:06:59.152430 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="af7a71ed-a637-434a-bbb3-10b4f90391eb" 
containerName="nova-api-api" containerID="cri-o://e7fcdb1b6d568713adef0b4997e106d9d2c3f98552aa246f14c12f53ada98a86" gracePeriod=30 Feb 16 23:06:59 crc kubenswrapper[4865]: I0216 23:06:59.183846 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-59cf4bdb65-82l9f" podStartSLOduration=3.183822841 podStartE2EDuration="3.183822841s" podCreationTimestamp="2026-02-16 23:06:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 23:06:59.180889928 +0000 UTC m=+1259.504596889" watchObservedRunningTime="2026-02-16 23:06:59.183822841 +0000 UTC m=+1259.507529792" Feb 16 23:07:00 crc kubenswrapper[4865]: I0216 23:07:00.163190 4865 generic.go:334] "Generic (PLEG): container finished" podID="fd5a8c90-3ad6-4b9a-a891-469023162a28" containerID="589ab08bfbf2d0844d89e34e857a05ac98eb51eb2c5357ffffb7988d388323cd" exitCode=0 Feb 16 23:07:00 crc kubenswrapper[4865]: I0216 23:07:00.163258 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd5a8c90-3ad6-4b9a-a891-469023162a28","Type":"ContainerDied","Data":"589ab08bfbf2d0844d89e34e857a05ac98eb51eb2c5357ffffb7988d388323cd"} Feb 16 23:07:00 crc kubenswrapper[4865]: I0216 23:07:00.165857 4865 generic.go:334] "Generic (PLEG): container finished" podID="af7a71ed-a637-434a-bbb3-10b4f90391eb" containerID="a152bb7dbad2265361f987c683059e62b534cc71ae8d9062ec8391a02fb37ff2" exitCode=143 Feb 16 23:07:00 crc kubenswrapper[4865]: I0216 23:07:00.166839 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"af7a71ed-a637-434a-bbb3-10b4f90391eb","Type":"ContainerDied","Data":"a152bb7dbad2265361f987c683059e62b534cc71ae8d9062ec8391a02fb37ff2"} Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.153295 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.188024 4865 generic.go:334] "Generic (PLEG): container finished" podID="fd5a8c90-3ad6-4b9a-a891-469023162a28" containerID="cd33a0f31ddda67dd373a67d82c02f832c6e05632540a5e9c67dca27c3818156" exitCode=0 Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.188088 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd5a8c90-3ad6-4b9a-a891-469023162a28","Type":"ContainerDied","Data":"cd33a0f31ddda67dd373a67d82c02f832c6e05632540a5e9c67dca27c3818156"} Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.188121 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd5a8c90-3ad6-4b9a-a891-469023162a28","Type":"ContainerDied","Data":"882cca687f04a5acc787da7eb6c4f2eda6bf5041e52e31cf3d8cb44e94738479"} Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.188163 4865 scope.go:117] "RemoveContainer" containerID="6950f17a24e6b5e2dfec7bf437f127e41c438ebe455c5d8f0f579a52219e2947" Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.188161 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.230601 4865 scope.go:117] "RemoveContainer" containerID="fb14653699cc60300f68d210cce2a8d5940ce5a06ecafbd966599b8000717eec" Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.243795 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd5a8c90-3ad6-4b9a-a891-469023162a28-ceilometer-tls-certs\") pod \"fd5a8c90-3ad6-4b9a-a891-469023162a28\" (UID: \"fd5a8c90-3ad6-4b9a-a891-469023162a28\") " Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.243952 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgmsz\" (UniqueName: \"kubernetes.io/projected/fd5a8c90-3ad6-4b9a-a891-469023162a28-kube-api-access-zgmsz\") pod \"fd5a8c90-3ad6-4b9a-a891-469023162a28\" (UID: \"fd5a8c90-3ad6-4b9a-a891-469023162a28\") " Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.244069 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd5a8c90-3ad6-4b9a-a891-469023162a28-config-data\") pod \"fd5a8c90-3ad6-4b9a-a891-469023162a28\" (UID: \"fd5a8c90-3ad6-4b9a-a891-469023162a28\") " Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.244106 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd5a8c90-3ad6-4b9a-a891-469023162a28-run-httpd\") pod \"fd5a8c90-3ad6-4b9a-a891-469023162a28\" (UID: \"fd5a8c90-3ad6-4b9a-a891-469023162a28\") " Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.244173 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd5a8c90-3ad6-4b9a-a891-469023162a28-combined-ca-bundle\") pod \"fd5a8c90-3ad6-4b9a-a891-469023162a28\" (UID: 
\"fd5a8c90-3ad6-4b9a-a891-469023162a28\") " Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.244200 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd5a8c90-3ad6-4b9a-a891-469023162a28-log-httpd\") pod \"fd5a8c90-3ad6-4b9a-a891-469023162a28\" (UID: \"fd5a8c90-3ad6-4b9a-a891-469023162a28\") " Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.244270 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fd5a8c90-3ad6-4b9a-a891-469023162a28-sg-core-conf-yaml\") pod \"fd5a8c90-3ad6-4b9a-a891-469023162a28\" (UID: \"fd5a8c90-3ad6-4b9a-a891-469023162a28\") " Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.244361 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd5a8c90-3ad6-4b9a-a891-469023162a28-scripts\") pod \"fd5a8c90-3ad6-4b9a-a891-469023162a28\" (UID: \"fd5a8c90-3ad6-4b9a-a891-469023162a28\") " Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.245119 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd5a8c90-3ad6-4b9a-a891-469023162a28-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "fd5a8c90-3ad6-4b9a-a891-469023162a28" (UID: "fd5a8c90-3ad6-4b9a-a891-469023162a28"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.245755 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd5a8c90-3ad6-4b9a-a891-469023162a28-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "fd5a8c90-3ad6-4b9a-a891-469023162a28" (UID: "fd5a8c90-3ad6-4b9a-a891-469023162a28"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.254294 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd5a8c90-3ad6-4b9a-a891-469023162a28-scripts" (OuterVolumeSpecName: "scripts") pod "fd5a8c90-3ad6-4b9a-a891-469023162a28" (UID: "fd5a8c90-3ad6-4b9a-a891-469023162a28"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.258886 4865 scope.go:117] "RemoveContainer" containerID="cd33a0f31ddda67dd373a67d82c02f832c6e05632540a5e9c67dca27c3818156" Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.259554 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd5a8c90-3ad6-4b9a-a891-469023162a28-kube-api-access-zgmsz" (OuterVolumeSpecName: "kube-api-access-zgmsz") pod "fd5a8c90-3ad6-4b9a-a891-469023162a28" (UID: "fd5a8c90-3ad6-4b9a-a891-469023162a28"). InnerVolumeSpecName "kube-api-access-zgmsz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.289170 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd5a8c90-3ad6-4b9a-a891-469023162a28-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "fd5a8c90-3ad6-4b9a-a891-469023162a28" (UID: "fd5a8c90-3ad6-4b9a-a891-469023162a28"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.319506 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd5a8c90-3ad6-4b9a-a891-469023162a28-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "fd5a8c90-3ad6-4b9a-a891-469023162a28" (UID: "fd5a8c90-3ad6-4b9a-a891-469023162a28"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.346777 4865 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd5a8c90-3ad6-4b9a-a891-469023162a28-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.347032 4865 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd5a8c90-3ad6-4b9a-a891-469023162a28-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.347043 4865 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fd5a8c90-3ad6-4b9a-a891-469023162a28-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.347055 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd5a8c90-3ad6-4b9a-a891-469023162a28-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.347064 4865 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd5a8c90-3ad6-4b9a-a891-469023162a28-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.347073 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgmsz\" (UniqueName: \"kubernetes.io/projected/fd5a8c90-3ad6-4b9a-a891-469023162a28-kube-api-access-zgmsz\") on node \"crc\" DevicePath \"\"" Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.368354 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd5a8c90-3ad6-4b9a-a891-469023162a28-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fd5a8c90-3ad6-4b9a-a891-469023162a28" (UID: 
"fd5a8c90-3ad6-4b9a-a891-469023162a28"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.372362 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd5a8c90-3ad6-4b9a-a891-469023162a28-config-data" (OuterVolumeSpecName: "config-data") pod "fd5a8c90-3ad6-4b9a-a891-469023162a28" (UID: "fd5a8c90-3ad6-4b9a-a891-469023162a28"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.446658 4865 scope.go:117] "RemoveContainer" containerID="589ab08bfbf2d0844d89e34e857a05ac98eb51eb2c5357ffffb7988d388323cd" Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.448887 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd5a8c90-3ad6-4b9a-a891-469023162a28-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.448920 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd5a8c90-3ad6-4b9a-a891-469023162a28-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.488600 4865 scope.go:117] "RemoveContainer" containerID="6950f17a24e6b5e2dfec7bf437f127e41c438ebe455c5d8f0f579a52219e2947" Feb 16 23:07:02 crc kubenswrapper[4865]: E0216 23:07:02.517260 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6950f17a24e6b5e2dfec7bf437f127e41c438ebe455c5d8f0f579a52219e2947\": container with ID starting with 6950f17a24e6b5e2dfec7bf437f127e41c438ebe455c5d8f0f579a52219e2947 not found: ID does not exist" containerID="6950f17a24e6b5e2dfec7bf437f127e41c438ebe455c5d8f0f579a52219e2947" Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.517336 4865 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6950f17a24e6b5e2dfec7bf437f127e41c438ebe455c5d8f0f579a52219e2947"} err="failed to get container status \"6950f17a24e6b5e2dfec7bf437f127e41c438ebe455c5d8f0f579a52219e2947\": rpc error: code = NotFound desc = could not find container \"6950f17a24e6b5e2dfec7bf437f127e41c438ebe455c5d8f0f579a52219e2947\": container with ID starting with 6950f17a24e6b5e2dfec7bf437f127e41c438ebe455c5d8f0f579a52219e2947 not found: ID does not exist" Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.517370 4865 scope.go:117] "RemoveContainer" containerID="fb14653699cc60300f68d210cce2a8d5940ce5a06ecafbd966599b8000717eec" Feb 16 23:07:02 crc kubenswrapper[4865]: E0216 23:07:02.518750 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb14653699cc60300f68d210cce2a8d5940ce5a06ecafbd966599b8000717eec\": container with ID starting with fb14653699cc60300f68d210cce2a8d5940ce5a06ecafbd966599b8000717eec not found: ID does not exist" containerID="fb14653699cc60300f68d210cce2a8d5940ce5a06ecafbd966599b8000717eec" Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.518810 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb14653699cc60300f68d210cce2a8d5940ce5a06ecafbd966599b8000717eec"} err="failed to get container status \"fb14653699cc60300f68d210cce2a8d5940ce5a06ecafbd966599b8000717eec\": rpc error: code = NotFound desc = could not find container \"fb14653699cc60300f68d210cce2a8d5940ce5a06ecafbd966599b8000717eec\": container with ID starting with fb14653699cc60300f68d210cce2a8d5940ce5a06ecafbd966599b8000717eec not found: ID does not exist" Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.518837 4865 scope.go:117] "RemoveContainer" containerID="cd33a0f31ddda67dd373a67d82c02f832c6e05632540a5e9c67dca27c3818156" Feb 16 23:07:02 crc kubenswrapper[4865]: E0216 
23:07:02.520162 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd33a0f31ddda67dd373a67d82c02f832c6e05632540a5e9c67dca27c3818156\": container with ID starting with cd33a0f31ddda67dd373a67d82c02f832c6e05632540a5e9c67dca27c3818156 not found: ID does not exist" containerID="cd33a0f31ddda67dd373a67d82c02f832c6e05632540a5e9c67dca27c3818156" Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.520197 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd33a0f31ddda67dd373a67d82c02f832c6e05632540a5e9c67dca27c3818156"} err="failed to get container status \"cd33a0f31ddda67dd373a67d82c02f832c6e05632540a5e9c67dca27c3818156\": rpc error: code = NotFound desc = could not find container \"cd33a0f31ddda67dd373a67d82c02f832c6e05632540a5e9c67dca27c3818156\": container with ID starting with cd33a0f31ddda67dd373a67d82c02f832c6e05632540a5e9c67dca27c3818156 not found: ID does not exist" Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.520221 4865 scope.go:117] "RemoveContainer" containerID="589ab08bfbf2d0844d89e34e857a05ac98eb51eb2c5357ffffb7988d388323cd" Feb 16 23:07:02 crc kubenswrapper[4865]: E0216 23:07:02.522202 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"589ab08bfbf2d0844d89e34e857a05ac98eb51eb2c5357ffffb7988d388323cd\": container with ID starting with 589ab08bfbf2d0844d89e34e857a05ac98eb51eb2c5357ffffb7988d388323cd not found: ID does not exist" containerID="589ab08bfbf2d0844d89e34e857a05ac98eb51eb2c5357ffffb7988d388323cd" Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.522230 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"589ab08bfbf2d0844d89e34e857a05ac98eb51eb2c5357ffffb7988d388323cd"} err="failed to get container status \"589ab08bfbf2d0844d89e34e857a05ac98eb51eb2c5357ffffb7988d388323cd\": rpc 
error: code = NotFound desc = could not find container \"589ab08bfbf2d0844d89e34e857a05ac98eb51eb2c5357ffffb7988d388323cd\": container with ID starting with 589ab08bfbf2d0844d89e34e857a05ac98eb51eb2c5357ffffb7988d388323cd not found: ID does not exist" Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.526554 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.545375 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.562715 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 23:07:02 crc kubenswrapper[4865]: E0216 23:07:02.563130 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd5a8c90-3ad6-4b9a-a891-469023162a28" containerName="ceilometer-central-agent" Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.563149 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd5a8c90-3ad6-4b9a-a891-469023162a28" containerName="ceilometer-central-agent" Feb 16 23:07:02 crc kubenswrapper[4865]: E0216 23:07:02.563162 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd5a8c90-3ad6-4b9a-a891-469023162a28" containerName="proxy-httpd" Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.563171 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd5a8c90-3ad6-4b9a-a891-469023162a28" containerName="proxy-httpd" Feb 16 23:07:02 crc kubenswrapper[4865]: E0216 23:07:02.563194 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd5a8c90-3ad6-4b9a-a891-469023162a28" containerName="sg-core" Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.563203 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd5a8c90-3ad6-4b9a-a891-469023162a28" containerName="sg-core" Feb 16 23:07:02 crc kubenswrapper[4865]: E0216 23:07:02.563230 4865 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="fd5a8c90-3ad6-4b9a-a891-469023162a28" containerName="ceilometer-notification-agent" Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.563238 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd5a8c90-3ad6-4b9a-a891-469023162a28" containerName="ceilometer-notification-agent" Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.563472 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd5a8c90-3ad6-4b9a-a891-469023162a28" containerName="sg-core" Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.563494 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd5a8c90-3ad6-4b9a-a891-469023162a28" containerName="ceilometer-central-agent" Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.563511 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd5a8c90-3ad6-4b9a-a891-469023162a28" containerName="proxy-httpd" Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.563527 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd5a8c90-3ad6-4b9a-a891-469023162a28" containerName="ceilometer-notification-agent" Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.566043 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.569065 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.569563 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.569645 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.574964 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.653087 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e6fcbf6-3f21-4134-9ace-bbbe418e9599-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8e6fcbf6-3f21-4134-9ace-bbbe418e9599\") " pod="openstack/ceilometer-0" Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.653236 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8e6fcbf6-3f21-4134-9ace-bbbe418e9599-log-httpd\") pod \"ceilometer-0\" (UID: \"8e6fcbf6-3f21-4134-9ace-bbbe418e9599\") " pod="openstack/ceilometer-0" Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.653333 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8e6fcbf6-3f21-4134-9ace-bbbe418e9599-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8e6fcbf6-3f21-4134-9ace-bbbe418e9599\") " pod="openstack/ceilometer-0" Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.653378 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e6fcbf6-3f21-4134-9ace-bbbe418e9599-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8e6fcbf6-3f21-4134-9ace-bbbe418e9599\") " pod="openstack/ceilometer-0" Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.653443 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e6fcbf6-3f21-4134-9ace-bbbe418e9599-scripts\") pod \"ceilometer-0\" (UID: \"8e6fcbf6-3f21-4134-9ace-bbbe418e9599\") " pod="openstack/ceilometer-0" Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.653476 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8e6fcbf6-3f21-4134-9ace-bbbe418e9599-run-httpd\") pod \"ceilometer-0\" (UID: \"8e6fcbf6-3f21-4134-9ace-bbbe418e9599\") " pod="openstack/ceilometer-0" Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.653548 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzb45\" (UniqueName: \"kubernetes.io/projected/8e6fcbf6-3f21-4134-9ace-bbbe418e9599-kube-api-access-zzb45\") pod \"ceilometer-0\" (UID: \"8e6fcbf6-3f21-4134-9ace-bbbe418e9599\") " pod="openstack/ceilometer-0" Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.653613 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e6fcbf6-3f21-4134-9ace-bbbe418e9599-config-data\") pod \"ceilometer-0\" (UID: \"8e6fcbf6-3f21-4134-9ace-bbbe418e9599\") " pod="openstack/ceilometer-0" Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.755554 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e6fcbf6-3f21-4134-9ace-bbbe418e9599-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"8e6fcbf6-3f21-4134-9ace-bbbe418e9599\") " pod="openstack/ceilometer-0" Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.755614 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8e6fcbf6-3f21-4134-9ace-bbbe418e9599-log-httpd\") pod \"ceilometer-0\" (UID: \"8e6fcbf6-3f21-4134-9ace-bbbe418e9599\") " pod="openstack/ceilometer-0" Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.755696 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8e6fcbf6-3f21-4134-9ace-bbbe418e9599-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8e6fcbf6-3f21-4134-9ace-bbbe418e9599\") " pod="openstack/ceilometer-0" Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.755740 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e6fcbf6-3f21-4134-9ace-bbbe418e9599-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8e6fcbf6-3f21-4134-9ace-bbbe418e9599\") " pod="openstack/ceilometer-0" Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.755793 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e6fcbf6-3f21-4134-9ace-bbbe418e9599-scripts\") pod \"ceilometer-0\" (UID: \"8e6fcbf6-3f21-4134-9ace-bbbe418e9599\") " pod="openstack/ceilometer-0" Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.755826 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8e6fcbf6-3f21-4134-9ace-bbbe418e9599-run-httpd\") pod \"ceilometer-0\" (UID: \"8e6fcbf6-3f21-4134-9ace-bbbe418e9599\") " pod="openstack/ceilometer-0" Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.755882 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-zzb45\" (UniqueName: \"kubernetes.io/projected/8e6fcbf6-3f21-4134-9ace-bbbe418e9599-kube-api-access-zzb45\") pod \"ceilometer-0\" (UID: \"8e6fcbf6-3f21-4134-9ace-bbbe418e9599\") " pod="openstack/ceilometer-0" Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.755935 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e6fcbf6-3f21-4134-9ace-bbbe418e9599-config-data\") pod \"ceilometer-0\" (UID: \"8e6fcbf6-3f21-4134-9ace-bbbe418e9599\") " pod="openstack/ceilometer-0" Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.756341 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8e6fcbf6-3f21-4134-9ace-bbbe418e9599-log-httpd\") pod \"ceilometer-0\" (UID: \"8e6fcbf6-3f21-4134-9ace-bbbe418e9599\") " pod="openstack/ceilometer-0" Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.756950 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8e6fcbf6-3f21-4134-9ace-bbbe418e9599-run-httpd\") pod \"ceilometer-0\" (UID: \"8e6fcbf6-3f21-4134-9ace-bbbe418e9599\") " pod="openstack/ceilometer-0" Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.761210 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e6fcbf6-3f21-4134-9ace-bbbe418e9599-scripts\") pod \"ceilometer-0\" (UID: \"8e6fcbf6-3f21-4134-9ace-bbbe418e9599\") " pod="openstack/ceilometer-0" Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.761877 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8e6fcbf6-3f21-4134-9ace-bbbe418e9599-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8e6fcbf6-3f21-4134-9ace-bbbe418e9599\") " pod="openstack/ceilometer-0" Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.762374 
4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e6fcbf6-3f21-4134-9ace-bbbe418e9599-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8e6fcbf6-3f21-4134-9ace-bbbe418e9599\") " pod="openstack/ceilometer-0" Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.764017 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e6fcbf6-3f21-4134-9ace-bbbe418e9599-config-data\") pod \"ceilometer-0\" (UID: \"8e6fcbf6-3f21-4134-9ace-bbbe418e9599\") " pod="openstack/ceilometer-0" Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.766375 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e6fcbf6-3f21-4134-9ace-bbbe418e9599-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8e6fcbf6-3f21-4134-9ace-bbbe418e9599\") " pod="openstack/ceilometer-0" Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.786384 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzb45\" (UniqueName: \"kubernetes.io/projected/8e6fcbf6-3f21-4134-9ace-bbbe418e9599-kube-api-access-zzb45\") pod \"ceilometer-0\" (UID: \"8e6fcbf6-3f21-4134-9ace-bbbe418e9599\") " pod="openstack/ceilometer-0" Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.873991 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.884312 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.959390 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af7a71ed-a637-434a-bbb3-10b4f90391eb-combined-ca-bundle\") pod \"af7a71ed-a637-434a-bbb3-10b4f90391eb\" (UID: \"af7a71ed-a637-434a-bbb3-10b4f90391eb\") " Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.959746 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af7a71ed-a637-434a-bbb3-10b4f90391eb-logs\") pod \"af7a71ed-a637-434a-bbb3-10b4f90391eb\" (UID: \"af7a71ed-a637-434a-bbb3-10b4f90391eb\") " Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.959823 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af7a71ed-a637-434a-bbb3-10b4f90391eb-config-data\") pod \"af7a71ed-a637-434a-bbb3-10b4f90391eb\" (UID: \"af7a71ed-a637-434a-bbb3-10b4f90391eb\") " Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.960235 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvdcw\" (UniqueName: \"kubernetes.io/projected/af7a71ed-a637-434a-bbb3-10b4f90391eb-kube-api-access-pvdcw\") pod \"af7a71ed-a637-434a-bbb3-10b4f90391eb\" (UID: \"af7a71ed-a637-434a-bbb3-10b4f90391eb\") " Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.960358 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af7a71ed-a637-434a-bbb3-10b4f90391eb-logs" (OuterVolumeSpecName: "logs") pod "af7a71ed-a637-434a-bbb3-10b4f90391eb" (UID: "af7a71ed-a637-434a-bbb3-10b4f90391eb"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.960873 4865 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af7a71ed-a637-434a-bbb3-10b4f90391eb-logs\") on node \"crc\" DevicePath \"\"" Feb 16 23:07:02 crc kubenswrapper[4865]: I0216 23:07:02.965365 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af7a71ed-a637-434a-bbb3-10b4f90391eb-kube-api-access-pvdcw" (OuterVolumeSpecName: "kube-api-access-pvdcw") pod "af7a71ed-a637-434a-bbb3-10b4f90391eb" (UID: "af7a71ed-a637-434a-bbb3-10b4f90391eb"). InnerVolumeSpecName "kube-api-access-pvdcw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:07:03 crc kubenswrapper[4865]: I0216 23:07:03.014083 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af7a71ed-a637-434a-bbb3-10b4f90391eb-config-data" (OuterVolumeSpecName: "config-data") pod "af7a71ed-a637-434a-bbb3-10b4f90391eb" (UID: "af7a71ed-a637-434a-bbb3-10b4f90391eb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:07:03 crc kubenswrapper[4865]: I0216 23:07:03.018267 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af7a71ed-a637-434a-bbb3-10b4f90391eb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "af7a71ed-a637-434a-bbb3-10b4f90391eb" (UID: "af7a71ed-a637-434a-bbb3-10b4f90391eb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:07:03 crc kubenswrapper[4865]: I0216 23:07:03.062733 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvdcw\" (UniqueName: \"kubernetes.io/projected/af7a71ed-a637-434a-bbb3-10b4f90391eb-kube-api-access-pvdcw\") on node \"crc\" DevicePath \"\"" Feb 16 23:07:03 crc kubenswrapper[4865]: I0216 23:07:03.062768 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af7a71ed-a637-434a-bbb3-10b4f90391eb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 23:07:03 crc kubenswrapper[4865]: I0216 23:07:03.062778 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af7a71ed-a637-434a-bbb3-10b4f90391eb-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 23:07:03 crc kubenswrapper[4865]: I0216 23:07:03.198966 4865 generic.go:334] "Generic (PLEG): container finished" podID="af7a71ed-a637-434a-bbb3-10b4f90391eb" containerID="e7fcdb1b6d568713adef0b4997e106d9d2c3f98552aa246f14c12f53ada98a86" exitCode=0 Feb 16 23:07:03 crc kubenswrapper[4865]: I0216 23:07:03.199026 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"af7a71ed-a637-434a-bbb3-10b4f90391eb","Type":"ContainerDied","Data":"e7fcdb1b6d568713adef0b4997e106d9d2c3f98552aa246f14c12f53ada98a86"} Feb 16 23:07:03 crc kubenswrapper[4865]: I0216 23:07:03.199054 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"af7a71ed-a637-434a-bbb3-10b4f90391eb","Type":"ContainerDied","Data":"44423ac93cbbd86198644cfa2e7811db812fb21e78781bf94f2a593afb80fc4f"} Feb 16 23:07:03 crc kubenswrapper[4865]: I0216 23:07:03.199077 4865 scope.go:117] "RemoveContainer" containerID="e7fcdb1b6d568713adef0b4997e106d9d2c3f98552aa246f14c12f53ada98a86" Feb 16 23:07:03 crc kubenswrapper[4865]: I0216 23:07:03.199200 4865 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 23:07:03 crc kubenswrapper[4865]: I0216 23:07:03.222237 4865 scope.go:117] "RemoveContainer" containerID="a152bb7dbad2265361f987c683059e62b534cc71ae8d9062ec8391a02fb37ff2" Feb 16 23:07:03 crc kubenswrapper[4865]: I0216 23:07:03.241559 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 23:07:03 crc kubenswrapper[4865]: I0216 23:07:03.255805 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 16 23:07:03 crc kubenswrapper[4865]: I0216 23:07:03.268107 4865 scope.go:117] "RemoveContainer" containerID="e7fcdb1b6d568713adef0b4997e106d9d2c3f98552aa246f14c12f53ada98a86" Feb 16 23:07:03 crc kubenswrapper[4865]: I0216 23:07:03.270684 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 16 23:07:03 crc kubenswrapper[4865]: E0216 23:07:03.271714 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af7a71ed-a637-434a-bbb3-10b4f90391eb" containerName="nova-api-log" Feb 16 23:07:03 crc kubenswrapper[4865]: I0216 23:07:03.272019 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="af7a71ed-a637-434a-bbb3-10b4f90391eb" containerName="nova-api-log" Feb 16 23:07:03 crc kubenswrapper[4865]: E0216 23:07:03.272213 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af7a71ed-a637-434a-bbb3-10b4f90391eb" containerName="nova-api-api" Feb 16 23:07:03 crc kubenswrapper[4865]: I0216 23:07:03.272230 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="af7a71ed-a637-434a-bbb3-10b4f90391eb" containerName="nova-api-api" Feb 16 23:07:03 crc kubenswrapper[4865]: E0216 23:07:03.272001 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7fcdb1b6d568713adef0b4997e106d9d2c3f98552aa246f14c12f53ada98a86\": container with ID starting with e7fcdb1b6d568713adef0b4997e106d9d2c3f98552aa246f14c12f53ada98a86 not 
found: ID does not exist" containerID="e7fcdb1b6d568713adef0b4997e106d9d2c3f98552aa246f14c12f53ada98a86" Feb 16 23:07:03 crc kubenswrapper[4865]: I0216 23:07:03.272295 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7fcdb1b6d568713adef0b4997e106d9d2c3f98552aa246f14c12f53ada98a86"} err="failed to get container status \"e7fcdb1b6d568713adef0b4997e106d9d2c3f98552aa246f14c12f53ada98a86\": rpc error: code = NotFound desc = could not find container \"e7fcdb1b6d568713adef0b4997e106d9d2c3f98552aa246f14c12f53ada98a86\": container with ID starting with e7fcdb1b6d568713adef0b4997e106d9d2c3f98552aa246f14c12f53ada98a86 not found: ID does not exist" Feb 16 23:07:03 crc kubenswrapper[4865]: I0216 23:07:03.272327 4865 scope.go:117] "RemoveContainer" containerID="a152bb7dbad2265361f987c683059e62b534cc71ae8d9062ec8391a02fb37ff2" Feb 16 23:07:03 crc kubenswrapper[4865]: E0216 23:07:03.274143 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a152bb7dbad2265361f987c683059e62b534cc71ae8d9062ec8391a02fb37ff2\": container with ID starting with a152bb7dbad2265361f987c683059e62b534cc71ae8d9062ec8391a02fb37ff2 not found: ID does not exist" containerID="a152bb7dbad2265361f987c683059e62b534cc71ae8d9062ec8391a02fb37ff2" Feb 16 23:07:03 crc kubenswrapper[4865]: I0216 23:07:03.274240 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a152bb7dbad2265361f987c683059e62b534cc71ae8d9062ec8391a02fb37ff2"} err="failed to get container status \"a152bb7dbad2265361f987c683059e62b534cc71ae8d9062ec8391a02fb37ff2\": rpc error: code = NotFound desc = could not find container \"a152bb7dbad2265361f987c683059e62b534cc71ae8d9062ec8391a02fb37ff2\": container with ID starting with a152bb7dbad2265361f987c683059e62b534cc71ae8d9062ec8391a02fb37ff2 not found: ID does not exist" Feb 16 23:07:03 crc kubenswrapper[4865]: I0216 23:07:03.276370 
4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="af7a71ed-a637-434a-bbb3-10b4f90391eb" containerName="nova-api-log" Feb 16 23:07:03 crc kubenswrapper[4865]: I0216 23:07:03.276413 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="af7a71ed-a637-434a-bbb3-10b4f90391eb" containerName="nova-api-api" Feb 16 23:07:03 crc kubenswrapper[4865]: I0216 23:07:03.303532 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 23:07:03 crc kubenswrapper[4865]: I0216 23:07:03.305841 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 16 23:07:03 crc kubenswrapper[4865]: I0216 23:07:03.306521 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 16 23:07:03 crc kubenswrapper[4865]: I0216 23:07:03.306577 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 16 23:07:03 crc kubenswrapper[4865]: I0216 23:07:03.314769 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 23:07:03 crc kubenswrapper[4865]: I0216 23:07:03.367552 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0f30e6f-579c-4dab-81eb-a02b46a4a463-logs\") pod \"nova-api-0\" (UID: \"f0f30e6f-579c-4dab-81eb-a02b46a4a463\") " pod="openstack/nova-api-0" Feb 16 23:07:03 crc kubenswrapper[4865]: I0216 23:07:03.367616 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cz6bs\" (UniqueName: \"kubernetes.io/projected/f0f30e6f-579c-4dab-81eb-a02b46a4a463-kube-api-access-cz6bs\") pod \"nova-api-0\" (UID: \"f0f30e6f-579c-4dab-81eb-a02b46a4a463\") " pod="openstack/nova-api-0" Feb 16 23:07:03 crc kubenswrapper[4865]: I0216 23:07:03.367661 4865 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0f30e6f-579c-4dab-81eb-a02b46a4a463-public-tls-certs\") pod \"nova-api-0\" (UID: \"f0f30e6f-579c-4dab-81eb-a02b46a4a463\") " pod="openstack/nova-api-0" Feb 16 23:07:03 crc kubenswrapper[4865]: I0216 23:07:03.367680 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0f30e6f-579c-4dab-81eb-a02b46a4a463-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f0f30e6f-579c-4dab-81eb-a02b46a4a463\") " pod="openstack/nova-api-0" Feb 16 23:07:03 crc kubenswrapper[4865]: I0216 23:07:03.367708 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0f30e6f-579c-4dab-81eb-a02b46a4a463-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f0f30e6f-579c-4dab-81eb-a02b46a4a463\") " pod="openstack/nova-api-0" Feb 16 23:07:03 crc kubenswrapper[4865]: I0216 23:07:03.367830 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0f30e6f-579c-4dab-81eb-a02b46a4a463-config-data\") pod \"nova-api-0\" (UID: \"f0f30e6f-579c-4dab-81eb-a02b46a4a463\") " pod="openstack/nova-api-0" Feb 16 23:07:03 crc kubenswrapper[4865]: I0216 23:07:03.441934 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 23:07:03 crc kubenswrapper[4865]: W0216 23:07:03.442700 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e6fcbf6_3f21_4134_9ace_bbbe418e9599.slice/crio-95517d018178350c6724a95bf4aac70f6dc3d00052eb537896e1973048fdc1e0 WatchSource:0}: Error finding container 95517d018178350c6724a95bf4aac70f6dc3d00052eb537896e1973048fdc1e0: Status 404 returned error can't find 
the container with id 95517d018178350c6724a95bf4aac70f6dc3d00052eb537896e1973048fdc1e0 Feb 16 23:07:03 crc kubenswrapper[4865]: I0216 23:07:03.444833 4865 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 23:07:03 crc kubenswrapper[4865]: I0216 23:07:03.469353 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cz6bs\" (UniqueName: \"kubernetes.io/projected/f0f30e6f-579c-4dab-81eb-a02b46a4a463-kube-api-access-cz6bs\") pod \"nova-api-0\" (UID: \"f0f30e6f-579c-4dab-81eb-a02b46a4a463\") " pod="openstack/nova-api-0" Feb 16 23:07:03 crc kubenswrapper[4865]: I0216 23:07:03.469487 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0f30e6f-579c-4dab-81eb-a02b46a4a463-public-tls-certs\") pod \"nova-api-0\" (UID: \"f0f30e6f-579c-4dab-81eb-a02b46a4a463\") " pod="openstack/nova-api-0" Feb 16 23:07:03 crc kubenswrapper[4865]: I0216 23:07:03.469524 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0f30e6f-579c-4dab-81eb-a02b46a4a463-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f0f30e6f-579c-4dab-81eb-a02b46a4a463\") " pod="openstack/nova-api-0" Feb 16 23:07:03 crc kubenswrapper[4865]: I0216 23:07:03.469577 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0f30e6f-579c-4dab-81eb-a02b46a4a463-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f0f30e6f-579c-4dab-81eb-a02b46a4a463\") " pod="openstack/nova-api-0" Feb 16 23:07:03 crc kubenswrapper[4865]: I0216 23:07:03.470559 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0f30e6f-579c-4dab-81eb-a02b46a4a463-config-data\") pod \"nova-api-0\" (UID: 
\"f0f30e6f-579c-4dab-81eb-a02b46a4a463\") " pod="openstack/nova-api-0" Feb 16 23:07:03 crc kubenswrapper[4865]: I0216 23:07:03.470980 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0f30e6f-579c-4dab-81eb-a02b46a4a463-logs\") pod \"nova-api-0\" (UID: \"f0f30e6f-579c-4dab-81eb-a02b46a4a463\") " pod="openstack/nova-api-0" Feb 16 23:07:03 crc kubenswrapper[4865]: I0216 23:07:03.471890 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0f30e6f-579c-4dab-81eb-a02b46a4a463-logs\") pod \"nova-api-0\" (UID: \"f0f30e6f-579c-4dab-81eb-a02b46a4a463\") " pod="openstack/nova-api-0" Feb 16 23:07:03 crc kubenswrapper[4865]: I0216 23:07:03.475137 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0f30e6f-579c-4dab-81eb-a02b46a4a463-config-data\") pod \"nova-api-0\" (UID: \"f0f30e6f-579c-4dab-81eb-a02b46a4a463\") " pod="openstack/nova-api-0" Feb 16 23:07:03 crc kubenswrapper[4865]: I0216 23:07:03.475732 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0f30e6f-579c-4dab-81eb-a02b46a4a463-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f0f30e6f-579c-4dab-81eb-a02b46a4a463\") " pod="openstack/nova-api-0" Feb 16 23:07:03 crc kubenswrapper[4865]: I0216 23:07:03.475857 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0f30e6f-579c-4dab-81eb-a02b46a4a463-public-tls-certs\") pod \"nova-api-0\" (UID: \"f0f30e6f-579c-4dab-81eb-a02b46a4a463\") " pod="openstack/nova-api-0" Feb 16 23:07:03 crc kubenswrapper[4865]: I0216 23:07:03.477222 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f0f30e6f-579c-4dab-81eb-a02b46a4a463-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f0f30e6f-579c-4dab-81eb-a02b46a4a463\") " pod="openstack/nova-api-0" Feb 16 23:07:03 crc kubenswrapper[4865]: I0216 23:07:03.485107 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cz6bs\" (UniqueName: \"kubernetes.io/projected/f0f30e6f-579c-4dab-81eb-a02b46a4a463-kube-api-access-cz6bs\") pod \"nova-api-0\" (UID: \"f0f30e6f-579c-4dab-81eb-a02b46a4a463\") " pod="openstack/nova-api-0" Feb 16 23:07:03 crc kubenswrapper[4865]: I0216 23:07:03.486586 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 16 23:07:03 crc kubenswrapper[4865]: I0216 23:07:03.510741 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 16 23:07:03 crc kubenswrapper[4865]: I0216 23:07:03.643375 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 23:07:04 crc kubenswrapper[4865]: I0216 23:07:04.155368 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 23:07:04 crc kubenswrapper[4865]: W0216 23:07:04.163055 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0f30e6f_579c_4dab_81eb_a02b46a4a463.slice/crio-8c63ca70e17309902ddb32c9261a58cf28ab35acc0610203b9bd73bee4dbc2ce WatchSource:0}: Error finding container 8c63ca70e17309902ddb32c9261a58cf28ab35acc0610203b9bd73bee4dbc2ce: Status 404 returned error can't find the container with id 8c63ca70e17309902ddb32c9261a58cf28ab35acc0610203b9bd73bee4dbc2ce Feb 16 23:07:04 crc kubenswrapper[4865]: I0216 23:07:04.214079 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f0f30e6f-579c-4dab-81eb-a02b46a4a463","Type":"ContainerStarted","Data":"8c63ca70e17309902ddb32c9261a58cf28ab35acc0610203b9bd73bee4dbc2ce"} Feb 16 23:07:04 crc kubenswrapper[4865]: I0216 23:07:04.219169 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8e6fcbf6-3f21-4134-9ace-bbbe418e9599","Type":"ContainerStarted","Data":"36d8f2a515979f06e0df3258f0572e4567923f8e82a811b2638ca40b159c58bc"} Feb 16 23:07:04 crc kubenswrapper[4865]: I0216 23:07:04.219228 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8e6fcbf6-3f21-4134-9ace-bbbe418e9599","Type":"ContainerStarted","Data":"95517d018178350c6724a95bf4aac70f6dc3d00052eb537896e1973048fdc1e0"} Feb 16 23:07:04 crc kubenswrapper[4865]: I0216 23:07:04.245636 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 16 23:07:04 crc kubenswrapper[4865]: I0216 23:07:04.442697 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af7a71ed-a637-434a-bbb3-10b4f90391eb" 
path="/var/lib/kubelet/pods/af7a71ed-a637-434a-bbb3-10b4f90391eb/volumes" Feb 16 23:07:04 crc kubenswrapper[4865]: I0216 23:07:04.443873 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd5a8c90-3ad6-4b9a-a891-469023162a28" path="/var/lib/kubelet/pods/fd5a8c90-3ad6-4b9a-a891-469023162a28/volumes" Feb 16 23:07:04 crc kubenswrapper[4865]: I0216 23:07:04.498049 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-779v4"] Feb 16 23:07:04 crc kubenswrapper[4865]: I0216 23:07:04.499691 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-779v4" Feb 16 23:07:04 crc kubenswrapper[4865]: I0216 23:07:04.507739 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 16 23:07:04 crc kubenswrapper[4865]: I0216 23:07:04.508065 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 16 23:07:04 crc kubenswrapper[4865]: I0216 23:07:04.510345 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-779v4"] Feb 16 23:07:04 crc kubenswrapper[4865]: I0216 23:07:04.596767 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a147d5a-f8ee-4f4b-aebd-14f86e3547d0-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-779v4\" (UID: \"6a147d5a-f8ee-4f4b-aebd-14f86e3547d0\") " pod="openstack/nova-cell1-cell-mapping-779v4" Feb 16 23:07:04 crc kubenswrapper[4865]: I0216 23:07:04.596848 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a147d5a-f8ee-4f4b-aebd-14f86e3547d0-scripts\") pod \"nova-cell1-cell-mapping-779v4\" (UID: \"6a147d5a-f8ee-4f4b-aebd-14f86e3547d0\") " pod="openstack/nova-cell1-cell-mapping-779v4" Feb 16 
23:07:04 crc kubenswrapper[4865]: I0216 23:07:04.596881 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d64hw\" (UniqueName: \"kubernetes.io/projected/6a147d5a-f8ee-4f4b-aebd-14f86e3547d0-kube-api-access-d64hw\") pod \"nova-cell1-cell-mapping-779v4\" (UID: \"6a147d5a-f8ee-4f4b-aebd-14f86e3547d0\") " pod="openstack/nova-cell1-cell-mapping-779v4" Feb 16 23:07:04 crc kubenswrapper[4865]: I0216 23:07:04.597026 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a147d5a-f8ee-4f4b-aebd-14f86e3547d0-config-data\") pod \"nova-cell1-cell-mapping-779v4\" (UID: \"6a147d5a-f8ee-4f4b-aebd-14f86e3547d0\") " pod="openstack/nova-cell1-cell-mapping-779v4" Feb 16 23:07:04 crc kubenswrapper[4865]: I0216 23:07:04.698346 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a147d5a-f8ee-4f4b-aebd-14f86e3547d0-config-data\") pod \"nova-cell1-cell-mapping-779v4\" (UID: \"6a147d5a-f8ee-4f4b-aebd-14f86e3547d0\") " pod="openstack/nova-cell1-cell-mapping-779v4" Feb 16 23:07:04 crc kubenswrapper[4865]: I0216 23:07:04.698434 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a147d5a-f8ee-4f4b-aebd-14f86e3547d0-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-779v4\" (UID: \"6a147d5a-f8ee-4f4b-aebd-14f86e3547d0\") " pod="openstack/nova-cell1-cell-mapping-779v4" Feb 16 23:07:04 crc kubenswrapper[4865]: I0216 23:07:04.698463 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a147d5a-f8ee-4f4b-aebd-14f86e3547d0-scripts\") pod \"nova-cell1-cell-mapping-779v4\" (UID: \"6a147d5a-f8ee-4f4b-aebd-14f86e3547d0\") " pod="openstack/nova-cell1-cell-mapping-779v4" Feb 16 23:07:04 crc 
kubenswrapper[4865]: I0216 23:07:04.698488 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d64hw\" (UniqueName: \"kubernetes.io/projected/6a147d5a-f8ee-4f4b-aebd-14f86e3547d0-kube-api-access-d64hw\") pod \"nova-cell1-cell-mapping-779v4\" (UID: \"6a147d5a-f8ee-4f4b-aebd-14f86e3547d0\") " pod="openstack/nova-cell1-cell-mapping-779v4" Feb 16 23:07:04 crc kubenswrapper[4865]: I0216 23:07:04.703874 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a147d5a-f8ee-4f4b-aebd-14f86e3547d0-scripts\") pod \"nova-cell1-cell-mapping-779v4\" (UID: \"6a147d5a-f8ee-4f4b-aebd-14f86e3547d0\") " pod="openstack/nova-cell1-cell-mapping-779v4" Feb 16 23:07:04 crc kubenswrapper[4865]: I0216 23:07:04.705123 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a147d5a-f8ee-4f4b-aebd-14f86e3547d0-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-779v4\" (UID: \"6a147d5a-f8ee-4f4b-aebd-14f86e3547d0\") " pod="openstack/nova-cell1-cell-mapping-779v4" Feb 16 23:07:04 crc kubenswrapper[4865]: I0216 23:07:04.716485 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a147d5a-f8ee-4f4b-aebd-14f86e3547d0-config-data\") pod \"nova-cell1-cell-mapping-779v4\" (UID: \"6a147d5a-f8ee-4f4b-aebd-14f86e3547d0\") " pod="openstack/nova-cell1-cell-mapping-779v4" Feb 16 23:07:04 crc kubenswrapper[4865]: I0216 23:07:04.728096 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d64hw\" (UniqueName: \"kubernetes.io/projected/6a147d5a-f8ee-4f4b-aebd-14f86e3547d0-kube-api-access-d64hw\") pod \"nova-cell1-cell-mapping-779v4\" (UID: \"6a147d5a-f8ee-4f4b-aebd-14f86e3547d0\") " pod="openstack/nova-cell1-cell-mapping-779v4" Feb 16 23:07:04 crc kubenswrapper[4865]: I0216 23:07:04.831266 4865 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-779v4" Feb 16 23:07:05 crc kubenswrapper[4865]: I0216 23:07:05.245715 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f0f30e6f-579c-4dab-81eb-a02b46a4a463","Type":"ContainerStarted","Data":"efadf9e54cf9f7bfaadb939a61abc786b123b047c6976bc01887dba0c178e70d"} Feb 16 23:07:05 crc kubenswrapper[4865]: I0216 23:07:05.246162 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f0f30e6f-579c-4dab-81eb-a02b46a4a463","Type":"ContainerStarted","Data":"90d24925473eb58fabbe33484c3fd806a3f8281ee3109c60bec3dbf4f5fbf738"} Feb 16 23:07:05 crc kubenswrapper[4865]: I0216 23:07:05.248843 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8e6fcbf6-3f21-4134-9ace-bbbe418e9599","Type":"ContainerStarted","Data":"0c511aaae2f894733395ffa2551b1b844138b38e97d0536be5fba4d772e8416d"} Feb 16 23:07:05 crc kubenswrapper[4865]: I0216 23:07:05.268787 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.268762562 podStartE2EDuration="2.268762562s" podCreationTimestamp="2026-02-16 23:07:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 23:07:05.262725541 +0000 UTC m=+1265.586432512" watchObservedRunningTime="2026-02-16 23:07:05.268762562 +0000 UTC m=+1265.592469533" Feb 16 23:07:05 crc kubenswrapper[4865]: I0216 23:07:05.326654 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-779v4"] Feb 16 23:07:05 crc kubenswrapper[4865]: W0216 23:07:05.331535 4865 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a147d5a_f8ee_4f4b_aebd_14f86e3547d0.slice/crio-b8aa57fae4bfa7c9a6b2329b55d433bd06dfe6ffa72a4f22d3ed820aca869b5e WatchSource:0}: Error finding container b8aa57fae4bfa7c9a6b2329b55d433bd06dfe6ffa72a4f22d3ed820aca869b5e: Status 404 returned error can't find the container with id b8aa57fae4bfa7c9a6b2329b55d433bd06dfe6ffa72a4f22d3ed820aca869b5e Feb 16 23:07:06 crc kubenswrapper[4865]: I0216 23:07:06.263050 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8e6fcbf6-3f21-4134-9ace-bbbe418e9599","Type":"ContainerStarted","Data":"11957cf3d5857a31e74316d1d36803ef1435d740d936f1fe5c96dc919b788dd6"} Feb 16 23:07:06 crc kubenswrapper[4865]: I0216 23:07:06.272491 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-779v4" event={"ID":"6a147d5a-f8ee-4f4b-aebd-14f86e3547d0","Type":"ContainerStarted","Data":"689c26d0fa6d575a8f46a41f235dc5173c43a77f88315618d872a3de9fdf7b36"} Feb 16 23:07:06 crc kubenswrapper[4865]: I0216 23:07:06.272560 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-779v4" event={"ID":"6a147d5a-f8ee-4f4b-aebd-14f86e3547d0","Type":"ContainerStarted","Data":"b8aa57fae4bfa7c9a6b2329b55d433bd06dfe6ffa72a4f22d3ed820aca869b5e"} Feb 16 23:07:06 crc kubenswrapper[4865]: I0216 23:07:06.291442 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-779v4" podStartSLOduration=2.291425615 podStartE2EDuration="2.291425615s" podCreationTimestamp="2026-02-16 23:07:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 23:07:06.288260365 +0000 UTC m=+1266.611967356" watchObservedRunningTime="2026-02-16 23:07:06.291425615 +0000 UTC m=+1266.615132576" Feb 16 23:07:06 crc kubenswrapper[4865]: I0216 23:07:06.722981 4865 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-59cf4bdb65-82l9f" Feb 16 23:07:06 crc kubenswrapper[4865]: I0216 23:07:06.818043 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-fztgd"] Feb 16 23:07:06 crc kubenswrapper[4865]: I0216 23:07:06.820580 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-845d6d6f59-fztgd" podUID="20c47fac-7072-47a1-a396-fdcc07153dc1" containerName="dnsmasq-dns" containerID="cri-o://645283c2e8e429e86a4407eb0e60cb4f27c931fe3a0a64d79266d0c4d79d74d3" gracePeriod=10 Feb 16 23:07:07 crc kubenswrapper[4865]: I0216 23:07:07.291569 4865 generic.go:334] "Generic (PLEG): container finished" podID="20c47fac-7072-47a1-a396-fdcc07153dc1" containerID="645283c2e8e429e86a4407eb0e60cb4f27c931fe3a0a64d79266d0c4d79d74d3" exitCode=0 Feb 16 23:07:07 crc kubenswrapper[4865]: I0216 23:07:07.291572 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-fztgd" event={"ID":"20c47fac-7072-47a1-a396-fdcc07153dc1","Type":"ContainerDied","Data":"645283c2e8e429e86a4407eb0e60cb4f27c931fe3a0a64d79266d0c4d79d74d3"} Feb 16 23:07:07 crc kubenswrapper[4865]: I0216 23:07:07.304022 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8e6fcbf6-3f21-4134-9ace-bbbe418e9599","Type":"ContainerStarted","Data":"ce457183a70f2528666e19af0b25c9c4f272ce0746acb212fc0a7d4c78242a78"} Feb 16 23:07:07 crc kubenswrapper[4865]: I0216 23:07:07.331555 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.296507471 podStartE2EDuration="5.331534365s" podCreationTimestamp="2026-02-16 23:07:02 +0000 UTC" firstStartedPulling="2026-02-16 23:07:03.444646663 +0000 UTC m=+1263.768353614" lastFinishedPulling="2026-02-16 23:07:06.479673517 +0000 UTC m=+1266.803380508" observedRunningTime="2026-02-16 23:07:07.325899825 +0000 UTC 
m=+1267.649606786" watchObservedRunningTime="2026-02-16 23:07:07.331534365 +0000 UTC m=+1267.655241316" Feb 16 23:07:07 crc kubenswrapper[4865]: I0216 23:07:07.413193 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-fztgd" Feb 16 23:07:07 crc kubenswrapper[4865]: I0216 23:07:07.575717 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9mft\" (UniqueName: \"kubernetes.io/projected/20c47fac-7072-47a1-a396-fdcc07153dc1-kube-api-access-v9mft\") pod \"20c47fac-7072-47a1-a396-fdcc07153dc1\" (UID: \"20c47fac-7072-47a1-a396-fdcc07153dc1\") " Feb 16 23:07:07 crc kubenswrapper[4865]: I0216 23:07:07.575832 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/20c47fac-7072-47a1-a396-fdcc07153dc1-dns-svc\") pod \"20c47fac-7072-47a1-a396-fdcc07153dc1\" (UID: \"20c47fac-7072-47a1-a396-fdcc07153dc1\") " Feb 16 23:07:07 crc kubenswrapper[4865]: I0216 23:07:07.575925 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/20c47fac-7072-47a1-a396-fdcc07153dc1-ovsdbserver-nb\") pod \"20c47fac-7072-47a1-a396-fdcc07153dc1\" (UID: \"20c47fac-7072-47a1-a396-fdcc07153dc1\") " Feb 16 23:07:07 crc kubenswrapper[4865]: I0216 23:07:07.576043 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20c47fac-7072-47a1-a396-fdcc07153dc1-config\") pod \"20c47fac-7072-47a1-a396-fdcc07153dc1\" (UID: \"20c47fac-7072-47a1-a396-fdcc07153dc1\") " Feb 16 23:07:07 crc kubenswrapper[4865]: I0216 23:07:07.576133 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/20c47fac-7072-47a1-a396-fdcc07153dc1-ovsdbserver-sb\") pod \"20c47fac-7072-47a1-a396-fdcc07153dc1\" 
(UID: \"20c47fac-7072-47a1-a396-fdcc07153dc1\") " Feb 16 23:07:07 crc kubenswrapper[4865]: I0216 23:07:07.576163 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/20c47fac-7072-47a1-a396-fdcc07153dc1-dns-swift-storage-0\") pod \"20c47fac-7072-47a1-a396-fdcc07153dc1\" (UID: \"20c47fac-7072-47a1-a396-fdcc07153dc1\") " Feb 16 23:07:07 crc kubenswrapper[4865]: I0216 23:07:07.590455 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20c47fac-7072-47a1-a396-fdcc07153dc1-kube-api-access-v9mft" (OuterVolumeSpecName: "kube-api-access-v9mft") pod "20c47fac-7072-47a1-a396-fdcc07153dc1" (UID: "20c47fac-7072-47a1-a396-fdcc07153dc1"). InnerVolumeSpecName "kube-api-access-v9mft". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:07:07 crc kubenswrapper[4865]: I0216 23:07:07.645659 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20c47fac-7072-47a1-a396-fdcc07153dc1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "20c47fac-7072-47a1-a396-fdcc07153dc1" (UID: "20c47fac-7072-47a1-a396-fdcc07153dc1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:07:07 crc kubenswrapper[4865]: I0216 23:07:07.655238 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20c47fac-7072-47a1-a396-fdcc07153dc1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "20c47fac-7072-47a1-a396-fdcc07153dc1" (UID: "20c47fac-7072-47a1-a396-fdcc07153dc1"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:07:07 crc kubenswrapper[4865]: I0216 23:07:07.670395 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20c47fac-7072-47a1-a396-fdcc07153dc1-config" (OuterVolumeSpecName: "config") pod "20c47fac-7072-47a1-a396-fdcc07153dc1" (UID: "20c47fac-7072-47a1-a396-fdcc07153dc1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:07:07 crc kubenswrapper[4865]: I0216 23:07:07.678572 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20c47fac-7072-47a1-a396-fdcc07153dc1-config\") on node \"crc\" DevicePath \"\"" Feb 16 23:07:07 crc kubenswrapper[4865]: I0216 23:07:07.678603 4865 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/20c47fac-7072-47a1-a396-fdcc07153dc1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 23:07:07 crc kubenswrapper[4865]: I0216 23:07:07.678614 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v9mft\" (UniqueName: \"kubernetes.io/projected/20c47fac-7072-47a1-a396-fdcc07153dc1-kube-api-access-v9mft\") on node \"crc\" DevicePath \"\"" Feb 16 23:07:07 crc kubenswrapper[4865]: I0216 23:07:07.678622 4865 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/20c47fac-7072-47a1-a396-fdcc07153dc1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 23:07:07 crc kubenswrapper[4865]: I0216 23:07:07.679870 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20c47fac-7072-47a1-a396-fdcc07153dc1-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "20c47fac-7072-47a1-a396-fdcc07153dc1" (UID: "20c47fac-7072-47a1-a396-fdcc07153dc1"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:07:07 crc kubenswrapper[4865]: I0216 23:07:07.705135 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20c47fac-7072-47a1-a396-fdcc07153dc1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "20c47fac-7072-47a1-a396-fdcc07153dc1" (UID: "20c47fac-7072-47a1-a396-fdcc07153dc1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:07:07 crc kubenswrapper[4865]: I0216 23:07:07.780575 4865 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/20c47fac-7072-47a1-a396-fdcc07153dc1-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 23:07:07 crc kubenswrapper[4865]: I0216 23:07:07.780625 4865 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/20c47fac-7072-47a1-a396-fdcc07153dc1-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 23:07:08 crc kubenswrapper[4865]: I0216 23:07:08.317104 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-fztgd" Feb 16 23:07:08 crc kubenswrapper[4865]: I0216 23:07:08.325874 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-fztgd" event={"ID":"20c47fac-7072-47a1-a396-fdcc07153dc1","Type":"ContainerDied","Data":"ebe026f226050804e1b5c325268e67d7629d0b7903b955465a16a3412cf9e5fa"} Feb 16 23:07:08 crc kubenswrapper[4865]: I0216 23:07:08.325912 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 23:07:08 crc kubenswrapper[4865]: I0216 23:07:08.326363 4865 scope.go:117] "RemoveContainer" containerID="645283c2e8e429e86a4407eb0e60cb4f27c931fe3a0a64d79266d0c4d79d74d3" Feb 16 23:07:08 crc kubenswrapper[4865]: I0216 23:07:08.356009 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-fztgd"] Feb 16 23:07:08 crc kubenswrapper[4865]: I0216 23:07:08.367117 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-fztgd"] Feb 16 23:07:08 crc kubenswrapper[4865]: I0216 23:07:08.367627 4865 scope.go:117] "RemoveContainer" containerID="309cf82e9581227a93a6dee8bb7425af8fe029ee5e40daa91438ad541a4145d3" Feb 16 23:07:08 crc kubenswrapper[4865]: I0216 23:07:08.426658 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20c47fac-7072-47a1-a396-fdcc07153dc1" path="/var/lib/kubelet/pods/20c47fac-7072-47a1-a396-fdcc07153dc1/volumes" Feb 16 23:07:11 crc kubenswrapper[4865]: I0216 23:07:11.344738 4865 generic.go:334] "Generic (PLEG): container finished" podID="6a147d5a-f8ee-4f4b-aebd-14f86e3547d0" containerID="689c26d0fa6d575a8f46a41f235dc5173c43a77f88315618d872a3de9fdf7b36" exitCode=0 Feb 16 23:07:11 crc kubenswrapper[4865]: I0216 23:07:11.344802 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-779v4" 
event={"ID":"6a147d5a-f8ee-4f4b-aebd-14f86e3547d0","Type":"ContainerDied","Data":"689c26d0fa6d575a8f46a41f235dc5173c43a77f88315618d872a3de9fdf7b36"} Feb 16 23:07:12 crc kubenswrapper[4865]: I0216 23:07:12.785901 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-779v4" Feb 16 23:07:12 crc kubenswrapper[4865]: I0216 23:07:12.881183 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a147d5a-f8ee-4f4b-aebd-14f86e3547d0-config-data\") pod \"6a147d5a-f8ee-4f4b-aebd-14f86e3547d0\" (UID: \"6a147d5a-f8ee-4f4b-aebd-14f86e3547d0\") " Feb 16 23:07:12 crc kubenswrapper[4865]: I0216 23:07:12.881256 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a147d5a-f8ee-4f4b-aebd-14f86e3547d0-scripts\") pod \"6a147d5a-f8ee-4f4b-aebd-14f86e3547d0\" (UID: \"6a147d5a-f8ee-4f4b-aebd-14f86e3547d0\") " Feb 16 23:07:12 crc kubenswrapper[4865]: I0216 23:07:12.881364 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d64hw\" (UniqueName: \"kubernetes.io/projected/6a147d5a-f8ee-4f4b-aebd-14f86e3547d0-kube-api-access-d64hw\") pod \"6a147d5a-f8ee-4f4b-aebd-14f86e3547d0\" (UID: \"6a147d5a-f8ee-4f4b-aebd-14f86e3547d0\") " Feb 16 23:07:12 crc kubenswrapper[4865]: I0216 23:07:12.881728 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a147d5a-f8ee-4f4b-aebd-14f86e3547d0-combined-ca-bundle\") pod \"6a147d5a-f8ee-4f4b-aebd-14f86e3547d0\" (UID: \"6a147d5a-f8ee-4f4b-aebd-14f86e3547d0\") " Feb 16 23:07:12 crc kubenswrapper[4865]: I0216 23:07:12.889413 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a147d5a-f8ee-4f4b-aebd-14f86e3547d0-scripts" (OuterVolumeSpecName: "scripts") pod 
"6a147d5a-f8ee-4f4b-aebd-14f86e3547d0" (UID: "6a147d5a-f8ee-4f4b-aebd-14f86e3547d0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:07:12 crc kubenswrapper[4865]: I0216 23:07:12.893491 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a147d5a-f8ee-4f4b-aebd-14f86e3547d0-kube-api-access-d64hw" (OuterVolumeSpecName: "kube-api-access-d64hw") pod "6a147d5a-f8ee-4f4b-aebd-14f86e3547d0" (UID: "6a147d5a-f8ee-4f4b-aebd-14f86e3547d0"). InnerVolumeSpecName "kube-api-access-d64hw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:07:12 crc kubenswrapper[4865]: I0216 23:07:12.939753 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a147d5a-f8ee-4f4b-aebd-14f86e3547d0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6a147d5a-f8ee-4f4b-aebd-14f86e3547d0" (UID: "6a147d5a-f8ee-4f4b-aebd-14f86e3547d0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:07:12 crc kubenswrapper[4865]: I0216 23:07:12.940781 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a147d5a-f8ee-4f4b-aebd-14f86e3547d0-config-data" (OuterVolumeSpecName: "config-data") pod "6a147d5a-f8ee-4f4b-aebd-14f86e3547d0" (UID: "6a147d5a-f8ee-4f4b-aebd-14f86e3547d0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:07:12 crc kubenswrapper[4865]: I0216 23:07:12.993969 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a147d5a-f8ee-4f4b-aebd-14f86e3547d0-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 23:07:12 crc kubenswrapper[4865]: I0216 23:07:12.994298 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a147d5a-f8ee-4f4b-aebd-14f86e3547d0-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 23:07:12 crc kubenswrapper[4865]: I0216 23:07:12.994402 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d64hw\" (UniqueName: \"kubernetes.io/projected/6a147d5a-f8ee-4f4b-aebd-14f86e3547d0-kube-api-access-d64hw\") on node \"crc\" DevicePath \"\"" Feb 16 23:07:12 crc kubenswrapper[4865]: I0216 23:07:12.994488 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a147d5a-f8ee-4f4b-aebd-14f86e3547d0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 23:07:13 crc kubenswrapper[4865]: I0216 23:07:13.399808 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-779v4" event={"ID":"6a147d5a-f8ee-4f4b-aebd-14f86e3547d0","Type":"ContainerDied","Data":"b8aa57fae4bfa7c9a6b2329b55d433bd06dfe6ffa72a4f22d3ed820aca869b5e"} Feb 16 23:07:13 crc kubenswrapper[4865]: I0216 23:07:13.400147 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8aa57fae4bfa7c9a6b2329b55d433bd06dfe6ffa72a4f22d3ed820aca869b5e" Feb 16 23:07:13 crc kubenswrapper[4865]: I0216 23:07:13.399899 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-779v4" Feb 16 23:07:13 crc kubenswrapper[4865]: I0216 23:07:13.609996 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 23:07:13 crc kubenswrapper[4865]: I0216 23:07:13.610339 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="259784bc-8bd2-46e1-bd86-53a5583adeac" containerName="nova-scheduler-scheduler" containerID="cri-o://048d29ab491ba70fcb2c73b3269d198a067ee48fc4ee658a36a0cd66a7b77c86" gracePeriod=30 Feb 16 23:07:13 crc kubenswrapper[4865]: I0216 23:07:13.618989 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 23:07:13 crc kubenswrapper[4865]: I0216 23:07:13.619252 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f0f30e6f-579c-4dab-81eb-a02b46a4a463" containerName="nova-api-log" containerID="cri-o://90d24925473eb58fabbe33484c3fd806a3f8281ee3109c60bec3dbf4f5fbf738" gracePeriod=30 Feb 16 23:07:13 crc kubenswrapper[4865]: I0216 23:07:13.619303 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f0f30e6f-579c-4dab-81eb-a02b46a4a463" containerName="nova-api-api" containerID="cri-o://efadf9e54cf9f7bfaadb939a61abc786b123b047c6976bc01887dba0c178e70d" gracePeriod=30 Feb 16 23:07:13 crc kubenswrapper[4865]: I0216 23:07:13.630503 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 23:07:13 crc kubenswrapper[4865]: I0216 23:07:13.630727 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="82c0c342-0360-464e-8f9c-f1cfd619ea76" containerName="nova-metadata-log" containerID="cri-o://fc10a95ad9a28eb233723e7d5807826fa44d0c2efce548dcecc71ff870e5b0ca" gracePeriod=30 Feb 16 23:07:13 crc kubenswrapper[4865]: I0216 23:07:13.630880 4865 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="82c0c342-0360-464e-8f9c-f1cfd619ea76" containerName="nova-metadata-metadata" containerID="cri-o://ba774aea146b6cbe19002243011dfb20af41640e9620bb35b6dd76fc0e9352d4" gracePeriod=30 Feb 16 23:07:13 crc kubenswrapper[4865]: E0216 23:07:13.855937 4865 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0f30e6f_579c_4dab_81eb_a02b46a4a463.slice/crio-90d24925473eb58fabbe33484c3fd806a3f8281ee3109c60bec3dbf4f5fbf738.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod82c0c342_0360_464e_8f9c_f1cfd619ea76.slice/crio-fc10a95ad9a28eb233723e7d5807826fa44d0c2efce548dcecc71ff870e5b0ca.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod82c0c342_0360_464e_8f9c_f1cfd619ea76.slice/crio-conmon-fc10a95ad9a28eb233723e7d5807826fa44d0c2efce548dcecc71ff870e5b0ca.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0f30e6f_579c_4dab_81eb_a02b46a4a463.slice/crio-conmon-efadf9e54cf9f7bfaadb939a61abc786b123b047c6976bc01887dba0c178e70d.scope\": RecentStats: unable to find data in memory cache]" Feb 16 23:07:14 crc kubenswrapper[4865]: I0216 23:07:14.343036 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 23:07:14 crc kubenswrapper[4865]: I0216 23:07:14.423807 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0f30e6f-579c-4dab-81eb-a02b46a4a463-config-data\") pod \"f0f30e6f-579c-4dab-81eb-a02b46a4a463\" (UID: \"f0f30e6f-579c-4dab-81eb-a02b46a4a463\") " Feb 16 23:07:14 crc kubenswrapper[4865]: I0216 23:07:14.423858 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0f30e6f-579c-4dab-81eb-a02b46a4a463-logs\") pod \"f0f30e6f-579c-4dab-81eb-a02b46a4a463\" (UID: \"f0f30e6f-579c-4dab-81eb-a02b46a4a463\") " Feb 16 23:07:14 crc kubenswrapper[4865]: I0216 23:07:14.423889 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cz6bs\" (UniqueName: \"kubernetes.io/projected/f0f30e6f-579c-4dab-81eb-a02b46a4a463-kube-api-access-cz6bs\") pod \"f0f30e6f-579c-4dab-81eb-a02b46a4a463\" (UID: \"f0f30e6f-579c-4dab-81eb-a02b46a4a463\") " Feb 16 23:07:14 crc kubenswrapper[4865]: I0216 23:07:14.423962 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0f30e6f-579c-4dab-81eb-a02b46a4a463-public-tls-certs\") pod \"f0f30e6f-579c-4dab-81eb-a02b46a4a463\" (UID: \"f0f30e6f-579c-4dab-81eb-a02b46a4a463\") " Feb 16 23:07:14 crc kubenswrapper[4865]: I0216 23:07:14.424051 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0f30e6f-579c-4dab-81eb-a02b46a4a463-internal-tls-certs\") pod \"f0f30e6f-579c-4dab-81eb-a02b46a4a463\" (UID: \"f0f30e6f-579c-4dab-81eb-a02b46a4a463\") " Feb 16 23:07:14 crc kubenswrapper[4865]: I0216 23:07:14.424082 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f0f30e6f-579c-4dab-81eb-a02b46a4a463-combined-ca-bundle\") pod \"f0f30e6f-579c-4dab-81eb-a02b46a4a463\" (UID: \"f0f30e6f-579c-4dab-81eb-a02b46a4a463\") " Feb 16 23:07:14 crc kubenswrapper[4865]: I0216 23:07:14.424805 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0f30e6f-579c-4dab-81eb-a02b46a4a463-logs" (OuterVolumeSpecName: "logs") pod "f0f30e6f-579c-4dab-81eb-a02b46a4a463" (UID: "f0f30e6f-579c-4dab-81eb-a02b46a4a463"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:07:14 crc kubenswrapper[4865]: I0216 23:07:14.430485 4865 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0f30e6f-579c-4dab-81eb-a02b46a4a463-logs\") on node \"crc\" DevicePath \"\"" Feb 16 23:07:14 crc kubenswrapper[4865]: I0216 23:07:14.441850 4865 generic.go:334] "Generic (PLEG): container finished" podID="f0f30e6f-579c-4dab-81eb-a02b46a4a463" containerID="efadf9e54cf9f7bfaadb939a61abc786b123b047c6976bc01887dba0c178e70d" exitCode=0 Feb 16 23:07:14 crc kubenswrapper[4865]: I0216 23:07:14.441877 4865 generic.go:334] "Generic (PLEG): container finished" podID="f0f30e6f-579c-4dab-81eb-a02b46a4a463" containerID="90d24925473eb58fabbe33484c3fd806a3f8281ee3109c60bec3dbf4f5fbf738" exitCode=143 Feb 16 23:07:14 crc kubenswrapper[4865]: I0216 23:07:14.441943 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f0f30e6f-579c-4dab-81eb-a02b46a4a463","Type":"ContainerDied","Data":"efadf9e54cf9f7bfaadb939a61abc786b123b047c6976bc01887dba0c178e70d"} Feb 16 23:07:14 crc kubenswrapper[4865]: I0216 23:07:14.441970 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f0f30e6f-579c-4dab-81eb-a02b46a4a463","Type":"ContainerDied","Data":"90d24925473eb58fabbe33484c3fd806a3f8281ee3109c60bec3dbf4f5fbf738"} Feb 16 23:07:14 crc kubenswrapper[4865]: I0216 
23:07:14.441983 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f0f30e6f-579c-4dab-81eb-a02b46a4a463","Type":"ContainerDied","Data":"8c63ca70e17309902ddb32c9261a58cf28ab35acc0610203b9bd73bee4dbc2ce"} Feb 16 23:07:14 crc kubenswrapper[4865]: I0216 23:07:14.442000 4865 scope.go:117] "RemoveContainer" containerID="efadf9e54cf9f7bfaadb939a61abc786b123b047c6976bc01887dba0c178e70d" Feb 16 23:07:14 crc kubenswrapper[4865]: I0216 23:07:14.442181 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 23:07:14 crc kubenswrapper[4865]: I0216 23:07:14.442580 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0f30e6f-579c-4dab-81eb-a02b46a4a463-kube-api-access-cz6bs" (OuterVolumeSpecName: "kube-api-access-cz6bs") pod "f0f30e6f-579c-4dab-81eb-a02b46a4a463" (UID: "f0f30e6f-579c-4dab-81eb-a02b46a4a463"). InnerVolumeSpecName "kube-api-access-cz6bs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:07:14 crc kubenswrapper[4865]: I0216 23:07:14.462579 4865 generic.go:334] "Generic (PLEG): container finished" podID="82c0c342-0360-464e-8f9c-f1cfd619ea76" containerID="fc10a95ad9a28eb233723e7d5807826fa44d0c2efce548dcecc71ff870e5b0ca" exitCode=143 Feb 16 23:07:14 crc kubenswrapper[4865]: I0216 23:07:14.462708 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"82c0c342-0360-464e-8f9c-f1cfd619ea76","Type":"ContainerDied","Data":"fc10a95ad9a28eb233723e7d5807826fa44d0c2efce548dcecc71ff870e5b0ca"} Feb 16 23:07:14 crc kubenswrapper[4865]: I0216 23:07:14.470089 4865 generic.go:334] "Generic (PLEG): container finished" podID="259784bc-8bd2-46e1-bd86-53a5583adeac" containerID="048d29ab491ba70fcb2c73b3269d198a067ee48fc4ee658a36a0cd66a7b77c86" exitCode=0 Feb 16 23:07:14 crc kubenswrapper[4865]: I0216 23:07:14.470151 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"259784bc-8bd2-46e1-bd86-53a5583adeac","Type":"ContainerDied","Data":"048d29ab491ba70fcb2c73b3269d198a067ee48fc4ee658a36a0cd66a7b77c86"} Feb 16 23:07:14 crc kubenswrapper[4865]: I0216 23:07:14.496135 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0f30e6f-579c-4dab-81eb-a02b46a4a463-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f0f30e6f-579c-4dab-81eb-a02b46a4a463" (UID: "f0f30e6f-579c-4dab-81eb-a02b46a4a463"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:07:14 crc kubenswrapper[4865]: I0216 23:07:14.508316 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0f30e6f-579c-4dab-81eb-a02b46a4a463-config-data" (OuterVolumeSpecName: "config-data") pod "f0f30e6f-579c-4dab-81eb-a02b46a4a463" (UID: "f0f30e6f-579c-4dab-81eb-a02b46a4a463"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:07:14 crc kubenswrapper[4865]: I0216 23:07:14.521180 4865 scope.go:117] "RemoveContainer" containerID="90d24925473eb58fabbe33484c3fd806a3f8281ee3109c60bec3dbf4f5fbf738" Feb 16 23:07:14 crc kubenswrapper[4865]: I0216 23:07:14.532751 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0f30e6f-579c-4dab-81eb-a02b46a4a463-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 23:07:14 crc kubenswrapper[4865]: I0216 23:07:14.532780 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cz6bs\" (UniqueName: \"kubernetes.io/projected/f0f30e6f-579c-4dab-81eb-a02b46a4a463-kube-api-access-cz6bs\") on node \"crc\" DevicePath \"\"" Feb 16 23:07:14 crc kubenswrapper[4865]: I0216 23:07:14.532792 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0f30e6f-579c-4dab-81eb-a02b46a4a463-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 23:07:14 crc kubenswrapper[4865]: I0216 23:07:14.536400 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0f30e6f-579c-4dab-81eb-a02b46a4a463-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f0f30e6f-579c-4dab-81eb-a02b46a4a463" (UID: "f0f30e6f-579c-4dab-81eb-a02b46a4a463"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:07:14 crc kubenswrapper[4865]: I0216 23:07:14.536438 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0f30e6f-579c-4dab-81eb-a02b46a4a463-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f0f30e6f-579c-4dab-81eb-a02b46a4a463" (UID: "f0f30e6f-579c-4dab-81eb-a02b46a4a463"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:07:14 crc kubenswrapper[4865]: I0216 23:07:14.547379 4865 scope.go:117] "RemoveContainer" containerID="efadf9e54cf9f7bfaadb939a61abc786b123b047c6976bc01887dba0c178e70d" Feb 16 23:07:14 crc kubenswrapper[4865]: E0216 23:07:14.548026 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"efadf9e54cf9f7bfaadb939a61abc786b123b047c6976bc01887dba0c178e70d\": container with ID starting with efadf9e54cf9f7bfaadb939a61abc786b123b047c6976bc01887dba0c178e70d not found: ID does not exist" containerID="efadf9e54cf9f7bfaadb939a61abc786b123b047c6976bc01887dba0c178e70d" Feb 16 23:07:14 crc kubenswrapper[4865]: I0216 23:07:14.548063 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efadf9e54cf9f7bfaadb939a61abc786b123b047c6976bc01887dba0c178e70d"} err="failed to get container status \"efadf9e54cf9f7bfaadb939a61abc786b123b047c6976bc01887dba0c178e70d\": rpc error: code = NotFound desc = could not find container \"efadf9e54cf9f7bfaadb939a61abc786b123b047c6976bc01887dba0c178e70d\": container with ID starting with efadf9e54cf9f7bfaadb939a61abc786b123b047c6976bc01887dba0c178e70d not found: ID does not exist" Feb 16 23:07:14 crc kubenswrapper[4865]: I0216 23:07:14.548092 4865 scope.go:117] "RemoveContainer" containerID="90d24925473eb58fabbe33484c3fd806a3f8281ee3109c60bec3dbf4f5fbf738" Feb 16 23:07:14 crc kubenswrapper[4865]: E0216 23:07:14.550504 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90d24925473eb58fabbe33484c3fd806a3f8281ee3109c60bec3dbf4f5fbf738\": container with ID starting with 90d24925473eb58fabbe33484c3fd806a3f8281ee3109c60bec3dbf4f5fbf738 not found: ID does not exist" containerID="90d24925473eb58fabbe33484c3fd806a3f8281ee3109c60bec3dbf4f5fbf738" Feb 16 23:07:14 crc kubenswrapper[4865]: I0216 23:07:14.550982 
4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90d24925473eb58fabbe33484c3fd806a3f8281ee3109c60bec3dbf4f5fbf738"} err="failed to get container status \"90d24925473eb58fabbe33484c3fd806a3f8281ee3109c60bec3dbf4f5fbf738\": rpc error: code = NotFound desc = could not find container \"90d24925473eb58fabbe33484c3fd806a3f8281ee3109c60bec3dbf4f5fbf738\": container with ID starting with 90d24925473eb58fabbe33484c3fd806a3f8281ee3109c60bec3dbf4f5fbf738 not found: ID does not exist" Feb 16 23:07:14 crc kubenswrapper[4865]: I0216 23:07:14.551014 4865 scope.go:117] "RemoveContainer" containerID="efadf9e54cf9f7bfaadb939a61abc786b123b047c6976bc01887dba0c178e70d" Feb 16 23:07:14 crc kubenswrapper[4865]: I0216 23:07:14.551448 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efadf9e54cf9f7bfaadb939a61abc786b123b047c6976bc01887dba0c178e70d"} err="failed to get container status \"efadf9e54cf9f7bfaadb939a61abc786b123b047c6976bc01887dba0c178e70d\": rpc error: code = NotFound desc = could not find container \"efadf9e54cf9f7bfaadb939a61abc786b123b047c6976bc01887dba0c178e70d\": container with ID starting with efadf9e54cf9f7bfaadb939a61abc786b123b047c6976bc01887dba0c178e70d not found: ID does not exist" Feb 16 23:07:14 crc kubenswrapper[4865]: I0216 23:07:14.551482 4865 scope.go:117] "RemoveContainer" containerID="90d24925473eb58fabbe33484c3fd806a3f8281ee3109c60bec3dbf4f5fbf738" Feb 16 23:07:14 crc kubenswrapper[4865]: I0216 23:07:14.551747 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90d24925473eb58fabbe33484c3fd806a3f8281ee3109c60bec3dbf4f5fbf738"} err="failed to get container status \"90d24925473eb58fabbe33484c3fd806a3f8281ee3109c60bec3dbf4f5fbf738\": rpc error: code = NotFound desc = could not find container \"90d24925473eb58fabbe33484c3fd806a3f8281ee3109c60bec3dbf4f5fbf738\": container with ID starting with 
90d24925473eb58fabbe33484c3fd806a3f8281ee3109c60bec3dbf4f5fbf738 not found: ID does not exist" Feb 16 23:07:14 crc kubenswrapper[4865]: I0216 23:07:14.634434 4865 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0f30e6f-579c-4dab-81eb-a02b46a4a463-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 23:07:14 crc kubenswrapper[4865]: I0216 23:07:14.634468 4865 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0f30e6f-579c-4dab-81eb-a02b46a4a463-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 23:07:14 crc kubenswrapper[4865]: I0216 23:07:14.658155 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 23:07:14 crc kubenswrapper[4865]: I0216 23:07:14.735958 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/259784bc-8bd2-46e1-bd86-53a5583adeac-combined-ca-bundle\") pod \"259784bc-8bd2-46e1-bd86-53a5583adeac\" (UID: \"259784bc-8bd2-46e1-bd86-53a5583adeac\") " Feb 16 23:07:14 crc kubenswrapper[4865]: I0216 23:07:14.736289 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/259784bc-8bd2-46e1-bd86-53a5583adeac-config-data\") pod \"259784bc-8bd2-46e1-bd86-53a5583adeac\" (UID: \"259784bc-8bd2-46e1-bd86-53a5583adeac\") " Feb 16 23:07:14 crc kubenswrapper[4865]: I0216 23:07:14.736408 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vs6pn\" (UniqueName: \"kubernetes.io/projected/259784bc-8bd2-46e1-bd86-53a5583adeac-kube-api-access-vs6pn\") pod \"259784bc-8bd2-46e1-bd86-53a5583adeac\" (UID: \"259784bc-8bd2-46e1-bd86-53a5583adeac\") " Feb 16 23:07:14 crc kubenswrapper[4865]: I0216 23:07:14.741951 4865 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/259784bc-8bd2-46e1-bd86-53a5583adeac-kube-api-access-vs6pn" (OuterVolumeSpecName: "kube-api-access-vs6pn") pod "259784bc-8bd2-46e1-bd86-53a5583adeac" (UID: "259784bc-8bd2-46e1-bd86-53a5583adeac"). InnerVolumeSpecName "kube-api-access-vs6pn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:07:14 crc kubenswrapper[4865]: I0216 23:07:14.771346 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/259784bc-8bd2-46e1-bd86-53a5583adeac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "259784bc-8bd2-46e1-bd86-53a5583adeac" (UID: "259784bc-8bd2-46e1-bd86-53a5583adeac"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:07:14 crc kubenswrapper[4865]: I0216 23:07:14.773394 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/259784bc-8bd2-46e1-bd86-53a5583adeac-config-data" (OuterVolumeSpecName: "config-data") pod "259784bc-8bd2-46e1-bd86-53a5583adeac" (UID: "259784bc-8bd2-46e1-bd86-53a5583adeac"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:07:14 crc kubenswrapper[4865]: I0216 23:07:14.839075 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/259784bc-8bd2-46e1-bd86-53a5583adeac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 23:07:14 crc kubenswrapper[4865]: I0216 23:07:14.839112 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/259784bc-8bd2-46e1-bd86-53a5583adeac-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 23:07:14 crc kubenswrapper[4865]: I0216 23:07:14.839123 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vs6pn\" (UniqueName: \"kubernetes.io/projected/259784bc-8bd2-46e1-bd86-53a5583adeac-kube-api-access-vs6pn\") on node \"crc\" DevicePath \"\"" Feb 16 23:07:14 crc kubenswrapper[4865]: I0216 23:07:14.867913 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 23:07:14 crc kubenswrapper[4865]: I0216 23:07:14.877871 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 16 23:07:14 crc kubenswrapper[4865]: I0216 23:07:14.893761 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 16 23:07:14 crc kubenswrapper[4865]: E0216 23:07:14.894209 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20c47fac-7072-47a1-a396-fdcc07153dc1" containerName="init" Feb 16 23:07:14 crc kubenswrapper[4865]: I0216 23:07:14.894228 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="20c47fac-7072-47a1-a396-fdcc07153dc1" containerName="init" Feb 16 23:07:14 crc kubenswrapper[4865]: E0216 23:07:14.894252 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0f30e6f-579c-4dab-81eb-a02b46a4a463" containerName="nova-api-api" Feb 16 23:07:14 crc kubenswrapper[4865]: I0216 23:07:14.894261 4865 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f0f30e6f-579c-4dab-81eb-a02b46a4a463" containerName="nova-api-api" Feb 16 23:07:14 crc kubenswrapper[4865]: E0216 23:07:14.894270 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20c47fac-7072-47a1-a396-fdcc07153dc1" containerName="dnsmasq-dns" Feb 16 23:07:14 crc kubenswrapper[4865]: I0216 23:07:14.894292 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="20c47fac-7072-47a1-a396-fdcc07153dc1" containerName="dnsmasq-dns" Feb 16 23:07:14 crc kubenswrapper[4865]: E0216 23:07:14.894307 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0f30e6f-579c-4dab-81eb-a02b46a4a463" containerName="nova-api-log" Feb 16 23:07:14 crc kubenswrapper[4865]: I0216 23:07:14.894315 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0f30e6f-579c-4dab-81eb-a02b46a4a463" containerName="nova-api-log" Feb 16 23:07:14 crc kubenswrapper[4865]: E0216 23:07:14.894332 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a147d5a-f8ee-4f4b-aebd-14f86e3547d0" containerName="nova-manage" Feb 16 23:07:14 crc kubenswrapper[4865]: I0216 23:07:14.894339 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a147d5a-f8ee-4f4b-aebd-14f86e3547d0" containerName="nova-manage" Feb 16 23:07:14 crc kubenswrapper[4865]: E0216 23:07:14.894359 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="259784bc-8bd2-46e1-bd86-53a5583adeac" containerName="nova-scheduler-scheduler" Feb 16 23:07:14 crc kubenswrapper[4865]: I0216 23:07:14.894365 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="259784bc-8bd2-46e1-bd86-53a5583adeac" containerName="nova-scheduler-scheduler" Feb 16 23:07:14 crc kubenswrapper[4865]: I0216 23:07:14.894532 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0f30e6f-579c-4dab-81eb-a02b46a4a463" containerName="nova-api-log" Feb 16 23:07:14 crc kubenswrapper[4865]: I0216 23:07:14.894543 4865 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="20c47fac-7072-47a1-a396-fdcc07153dc1" containerName="dnsmasq-dns" Feb 16 23:07:14 crc kubenswrapper[4865]: I0216 23:07:14.894554 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a147d5a-f8ee-4f4b-aebd-14f86e3547d0" containerName="nova-manage" Feb 16 23:07:14 crc kubenswrapper[4865]: I0216 23:07:14.894566 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="259784bc-8bd2-46e1-bd86-53a5583adeac" containerName="nova-scheduler-scheduler" Feb 16 23:07:14 crc kubenswrapper[4865]: I0216 23:07:14.894584 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0f30e6f-579c-4dab-81eb-a02b46a4a463" containerName="nova-api-api" Feb 16 23:07:14 crc kubenswrapper[4865]: I0216 23:07:14.895569 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 23:07:14 crc kubenswrapper[4865]: I0216 23:07:14.902584 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 16 23:07:14 crc kubenswrapper[4865]: I0216 23:07:14.902584 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 16 23:07:14 crc kubenswrapper[4865]: I0216 23:07:14.903576 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 16 23:07:14 crc kubenswrapper[4865]: I0216 23:07:14.905917 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 23:07:14 crc kubenswrapper[4865]: I0216 23:07:14.940872 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a4b1542-e38f-4ebf-9ca9-028ced41d506-public-tls-certs\") pod \"nova-api-0\" (UID: \"5a4b1542-e38f-4ebf-9ca9-028ced41d506\") " pod="openstack/nova-api-0" Feb 16 23:07:14 crc kubenswrapper[4865]: I0216 23:07:14.940936 4865 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a4b1542-e38f-4ebf-9ca9-028ced41d506-internal-tls-certs\") pod \"nova-api-0\" (UID: \"5a4b1542-e38f-4ebf-9ca9-028ced41d506\") " pod="openstack/nova-api-0" Feb 16 23:07:14 crc kubenswrapper[4865]: I0216 23:07:14.940970 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a4b1542-e38f-4ebf-9ca9-028ced41d506-config-data\") pod \"nova-api-0\" (UID: \"5a4b1542-e38f-4ebf-9ca9-028ced41d506\") " pod="openstack/nova-api-0" Feb 16 23:07:14 crc kubenswrapper[4865]: I0216 23:07:14.941050 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a4b1542-e38f-4ebf-9ca9-028ced41d506-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5a4b1542-e38f-4ebf-9ca9-028ced41d506\") " pod="openstack/nova-api-0" Feb 16 23:07:14 crc kubenswrapper[4865]: I0216 23:07:14.941078 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8c6rt\" (UniqueName: \"kubernetes.io/projected/5a4b1542-e38f-4ebf-9ca9-028ced41d506-kube-api-access-8c6rt\") pod \"nova-api-0\" (UID: \"5a4b1542-e38f-4ebf-9ca9-028ced41d506\") " pod="openstack/nova-api-0" Feb 16 23:07:14 crc kubenswrapper[4865]: I0216 23:07:14.941096 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a4b1542-e38f-4ebf-9ca9-028ced41d506-logs\") pod \"nova-api-0\" (UID: \"5a4b1542-e38f-4ebf-9ca9-028ced41d506\") " pod="openstack/nova-api-0" Feb 16 23:07:15 crc kubenswrapper[4865]: I0216 23:07:15.042967 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8c6rt\" (UniqueName: 
\"kubernetes.io/projected/5a4b1542-e38f-4ebf-9ca9-028ced41d506-kube-api-access-8c6rt\") pod \"nova-api-0\" (UID: \"5a4b1542-e38f-4ebf-9ca9-028ced41d506\") " pod="openstack/nova-api-0" Feb 16 23:07:15 crc kubenswrapper[4865]: I0216 23:07:15.043028 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a4b1542-e38f-4ebf-9ca9-028ced41d506-logs\") pod \"nova-api-0\" (UID: \"5a4b1542-e38f-4ebf-9ca9-028ced41d506\") " pod="openstack/nova-api-0" Feb 16 23:07:15 crc kubenswrapper[4865]: I0216 23:07:15.043194 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a4b1542-e38f-4ebf-9ca9-028ced41d506-public-tls-certs\") pod \"nova-api-0\" (UID: \"5a4b1542-e38f-4ebf-9ca9-028ced41d506\") " pod="openstack/nova-api-0" Feb 16 23:07:15 crc kubenswrapper[4865]: I0216 23:07:15.043228 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a4b1542-e38f-4ebf-9ca9-028ced41d506-internal-tls-certs\") pod \"nova-api-0\" (UID: \"5a4b1542-e38f-4ebf-9ca9-028ced41d506\") " pod="openstack/nova-api-0" Feb 16 23:07:15 crc kubenswrapper[4865]: I0216 23:07:15.043252 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a4b1542-e38f-4ebf-9ca9-028ced41d506-config-data\") pod \"nova-api-0\" (UID: \"5a4b1542-e38f-4ebf-9ca9-028ced41d506\") " pod="openstack/nova-api-0" Feb 16 23:07:15 crc kubenswrapper[4865]: I0216 23:07:15.043371 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a4b1542-e38f-4ebf-9ca9-028ced41d506-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5a4b1542-e38f-4ebf-9ca9-028ced41d506\") " pod="openstack/nova-api-0" Feb 16 23:07:15 crc kubenswrapper[4865]: I0216 23:07:15.044027 4865 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a4b1542-e38f-4ebf-9ca9-028ced41d506-logs\") pod \"nova-api-0\" (UID: \"5a4b1542-e38f-4ebf-9ca9-028ced41d506\") " pod="openstack/nova-api-0" Feb 16 23:07:15 crc kubenswrapper[4865]: I0216 23:07:15.048302 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a4b1542-e38f-4ebf-9ca9-028ced41d506-public-tls-certs\") pod \"nova-api-0\" (UID: \"5a4b1542-e38f-4ebf-9ca9-028ced41d506\") " pod="openstack/nova-api-0" Feb 16 23:07:15 crc kubenswrapper[4865]: I0216 23:07:15.048371 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a4b1542-e38f-4ebf-9ca9-028ced41d506-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5a4b1542-e38f-4ebf-9ca9-028ced41d506\") " pod="openstack/nova-api-0" Feb 16 23:07:15 crc kubenswrapper[4865]: I0216 23:07:15.048878 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a4b1542-e38f-4ebf-9ca9-028ced41d506-config-data\") pod \"nova-api-0\" (UID: \"5a4b1542-e38f-4ebf-9ca9-028ced41d506\") " pod="openstack/nova-api-0" Feb 16 23:07:15 crc kubenswrapper[4865]: I0216 23:07:15.051051 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a4b1542-e38f-4ebf-9ca9-028ced41d506-internal-tls-certs\") pod \"nova-api-0\" (UID: \"5a4b1542-e38f-4ebf-9ca9-028ced41d506\") " pod="openstack/nova-api-0" Feb 16 23:07:15 crc kubenswrapper[4865]: I0216 23:07:15.068689 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8c6rt\" (UniqueName: \"kubernetes.io/projected/5a4b1542-e38f-4ebf-9ca9-028ced41d506-kube-api-access-8c6rt\") pod \"nova-api-0\" (UID: \"5a4b1542-e38f-4ebf-9ca9-028ced41d506\") " pod="openstack/nova-api-0" 
Feb 16 23:07:15 crc kubenswrapper[4865]: I0216 23:07:15.215047 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 23:07:15 crc kubenswrapper[4865]: I0216 23:07:15.481097 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"259784bc-8bd2-46e1-bd86-53a5583adeac","Type":"ContainerDied","Data":"1d51b3f9c8baf5619ee28ae5864eb3670318057ded65ca01ee3b481d7b380bdc"} Feb 16 23:07:15 crc kubenswrapper[4865]: I0216 23:07:15.481358 4865 scope.go:117] "RemoveContainer" containerID="048d29ab491ba70fcb2c73b3269d198a067ee48fc4ee658a36a0cd66a7b77c86" Feb 16 23:07:15 crc kubenswrapper[4865]: I0216 23:07:15.481199 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 23:07:15 crc kubenswrapper[4865]: I0216 23:07:15.530402 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 23:07:15 crc kubenswrapper[4865]: I0216 23:07:15.551199 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 23:07:15 crc kubenswrapper[4865]: I0216 23:07:15.566148 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 23:07:15 crc kubenswrapper[4865]: I0216 23:07:15.567766 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 23:07:15 crc kubenswrapper[4865]: I0216 23:07:15.571714 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 16 23:07:15 crc kubenswrapper[4865]: I0216 23:07:15.583616 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 23:07:15 crc kubenswrapper[4865]: I0216 23:07:15.657140 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0810695-85aa-432f-8a1d-f5bf69077393-config-data\") pod \"nova-scheduler-0\" (UID: \"a0810695-85aa-432f-8a1d-f5bf69077393\") " pod="openstack/nova-scheduler-0" Feb 16 23:07:15 crc kubenswrapper[4865]: I0216 23:07:15.657256 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0810695-85aa-432f-8a1d-f5bf69077393-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a0810695-85aa-432f-8a1d-f5bf69077393\") " pod="openstack/nova-scheduler-0" Feb 16 23:07:15 crc kubenswrapper[4865]: I0216 23:07:15.657335 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfbmm\" (UniqueName: \"kubernetes.io/projected/a0810695-85aa-432f-8a1d-f5bf69077393-kube-api-access-tfbmm\") pod \"nova-scheduler-0\" (UID: \"a0810695-85aa-432f-8a1d-f5bf69077393\") " pod="openstack/nova-scheduler-0" Feb 16 23:07:15 crc kubenswrapper[4865]: I0216 23:07:15.664006 4865 patch_prober.go:28] interesting pod/machine-config-daemon-7sl6f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 23:07:15 crc kubenswrapper[4865]: I0216 23:07:15.664069 4865 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 23:07:15 crc kubenswrapper[4865]: W0216 23:07:15.719323 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a4b1542_e38f_4ebf_9ca9_028ced41d506.slice/crio-74e62522e54da18222a4a8a070241e16052e64f862a176de4d3a6e6d62501147 WatchSource:0}: Error finding container 74e62522e54da18222a4a8a070241e16052e64f862a176de4d3a6e6d62501147: Status 404 returned error can't find the container with id 74e62522e54da18222a4a8a070241e16052e64f862a176de4d3a6e6d62501147 Feb 16 23:07:15 crc kubenswrapper[4865]: I0216 23:07:15.722439 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 23:07:15 crc kubenswrapper[4865]: I0216 23:07:15.759362 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0810695-85aa-432f-8a1d-f5bf69077393-config-data\") pod \"nova-scheduler-0\" (UID: \"a0810695-85aa-432f-8a1d-f5bf69077393\") " pod="openstack/nova-scheduler-0" Feb 16 23:07:15 crc kubenswrapper[4865]: I0216 23:07:15.759437 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0810695-85aa-432f-8a1d-f5bf69077393-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a0810695-85aa-432f-8a1d-f5bf69077393\") " pod="openstack/nova-scheduler-0" Feb 16 23:07:15 crc kubenswrapper[4865]: I0216 23:07:15.759493 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfbmm\" (UniqueName: \"kubernetes.io/projected/a0810695-85aa-432f-8a1d-f5bf69077393-kube-api-access-tfbmm\") pod 
\"nova-scheduler-0\" (UID: \"a0810695-85aa-432f-8a1d-f5bf69077393\") " pod="openstack/nova-scheduler-0" Feb 16 23:07:15 crc kubenswrapper[4865]: I0216 23:07:15.766124 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0810695-85aa-432f-8a1d-f5bf69077393-config-data\") pod \"nova-scheduler-0\" (UID: \"a0810695-85aa-432f-8a1d-f5bf69077393\") " pod="openstack/nova-scheduler-0" Feb 16 23:07:15 crc kubenswrapper[4865]: I0216 23:07:15.766152 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0810695-85aa-432f-8a1d-f5bf69077393-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a0810695-85aa-432f-8a1d-f5bf69077393\") " pod="openstack/nova-scheduler-0" Feb 16 23:07:15 crc kubenswrapper[4865]: I0216 23:07:15.783587 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfbmm\" (UniqueName: \"kubernetes.io/projected/a0810695-85aa-432f-8a1d-f5bf69077393-kube-api-access-tfbmm\") pod \"nova-scheduler-0\" (UID: \"a0810695-85aa-432f-8a1d-f5bf69077393\") " pod="openstack/nova-scheduler-0" Feb 16 23:07:15 crc kubenswrapper[4865]: I0216 23:07:15.885855 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 23:07:16 crc kubenswrapper[4865]: I0216 23:07:16.431826 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="259784bc-8bd2-46e1-bd86-53a5583adeac" path="/var/lib/kubelet/pods/259784bc-8bd2-46e1-bd86-53a5583adeac/volumes" Feb 16 23:07:16 crc kubenswrapper[4865]: I0216 23:07:16.433330 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0f30e6f-579c-4dab-81eb-a02b46a4a463" path="/var/lib/kubelet/pods/f0f30e6f-579c-4dab-81eb-a02b46a4a463/volumes" Feb 16 23:07:16 crc kubenswrapper[4865]: I0216 23:07:16.435146 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 23:07:16 crc kubenswrapper[4865]: W0216 23:07:16.440991 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda0810695_85aa_432f_8a1d_f5bf69077393.slice/crio-875c16000b496421198d20086f1f934b0650122e10cb65ab6a9bd8f2b298c995 WatchSource:0}: Error finding container 875c16000b496421198d20086f1f934b0650122e10cb65ab6a9bd8f2b298c995: Status 404 returned error can't find the container with id 875c16000b496421198d20086f1f934b0650122e10cb65ab6a9bd8f2b298c995 Feb 16 23:07:16 crc kubenswrapper[4865]: I0216 23:07:16.499410 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5a4b1542-e38f-4ebf-9ca9-028ced41d506","Type":"ContainerStarted","Data":"6cd0bd1db4cc28284babc62d36c01239f53f6f662bdf58d44984eaceb7e7b9a9"} Feb 16 23:07:16 crc kubenswrapper[4865]: I0216 23:07:16.499477 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5a4b1542-e38f-4ebf-9ca9-028ced41d506","Type":"ContainerStarted","Data":"1fa9711d2e01489f9929004453ebf903a874d761595664971b947f8c79572713"} Feb 16 23:07:16 crc kubenswrapper[4865]: I0216 23:07:16.499495 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"5a4b1542-e38f-4ebf-9ca9-028ced41d506","Type":"ContainerStarted","Data":"74e62522e54da18222a4a8a070241e16052e64f862a176de4d3a6e6d62501147"} Feb 16 23:07:16 crc kubenswrapper[4865]: I0216 23:07:16.501938 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a0810695-85aa-432f-8a1d-f5bf69077393","Type":"ContainerStarted","Data":"875c16000b496421198d20086f1f934b0650122e10cb65ab6a9bd8f2b298c995"} Feb 16 23:07:16 crc kubenswrapper[4865]: I0216 23:07:16.531765 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.531734719 podStartE2EDuration="2.531734719s" podCreationTimestamp="2026-02-16 23:07:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 23:07:16.525547913 +0000 UTC m=+1276.849254914" watchObservedRunningTime="2026-02-16 23:07:16.531734719 +0000 UTC m=+1276.855441700" Feb 16 23:07:16 crc kubenswrapper[4865]: I0216 23:07:16.782796 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="82c0c342-0360-464e-8f9c-f1cfd619ea76" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.195:8775/\": read tcp 10.217.0.2:37198->10.217.0.195:8775: read: connection reset by peer" Feb 16 23:07:16 crc kubenswrapper[4865]: I0216 23:07:16.782917 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="82c0c342-0360-464e-8f9c-f1cfd619ea76" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.195:8775/\": read tcp 10.217.0.2:37200->10.217.0.195:8775: read: connection reset by peer" Feb 16 23:07:17 crc kubenswrapper[4865]: I0216 23:07:17.347716 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 23:07:17 crc kubenswrapper[4865]: I0216 23:07:17.399404 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnw2g\" (UniqueName: \"kubernetes.io/projected/82c0c342-0360-464e-8f9c-f1cfd619ea76-kube-api-access-nnw2g\") pod \"82c0c342-0360-464e-8f9c-f1cfd619ea76\" (UID: \"82c0c342-0360-464e-8f9c-f1cfd619ea76\") " Feb 16 23:07:17 crc kubenswrapper[4865]: I0216 23:07:17.399541 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82c0c342-0360-464e-8f9c-f1cfd619ea76-logs\") pod \"82c0c342-0360-464e-8f9c-f1cfd619ea76\" (UID: \"82c0c342-0360-464e-8f9c-f1cfd619ea76\") " Feb 16 23:07:17 crc kubenswrapper[4865]: I0216 23:07:17.399601 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/82c0c342-0360-464e-8f9c-f1cfd619ea76-nova-metadata-tls-certs\") pod \"82c0c342-0360-464e-8f9c-f1cfd619ea76\" (UID: \"82c0c342-0360-464e-8f9c-f1cfd619ea76\") " Feb 16 23:07:17 crc kubenswrapper[4865]: I0216 23:07:17.399677 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82c0c342-0360-464e-8f9c-f1cfd619ea76-combined-ca-bundle\") pod \"82c0c342-0360-464e-8f9c-f1cfd619ea76\" (UID: \"82c0c342-0360-464e-8f9c-f1cfd619ea76\") " Feb 16 23:07:17 crc kubenswrapper[4865]: I0216 23:07:17.399716 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82c0c342-0360-464e-8f9c-f1cfd619ea76-config-data\") pod \"82c0c342-0360-464e-8f9c-f1cfd619ea76\" (UID: \"82c0c342-0360-464e-8f9c-f1cfd619ea76\") " Feb 16 23:07:17 crc kubenswrapper[4865]: I0216 23:07:17.415526 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/82c0c342-0360-464e-8f9c-f1cfd619ea76-logs" (OuterVolumeSpecName: "logs") pod "82c0c342-0360-464e-8f9c-f1cfd619ea76" (UID: "82c0c342-0360-464e-8f9c-f1cfd619ea76"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:07:17 crc kubenswrapper[4865]: I0216 23:07:17.432103 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82c0c342-0360-464e-8f9c-f1cfd619ea76-kube-api-access-nnw2g" (OuterVolumeSpecName: "kube-api-access-nnw2g") pod "82c0c342-0360-464e-8f9c-f1cfd619ea76" (UID: "82c0c342-0360-464e-8f9c-f1cfd619ea76"). InnerVolumeSpecName "kube-api-access-nnw2g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:07:17 crc kubenswrapper[4865]: I0216 23:07:17.453576 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82c0c342-0360-464e-8f9c-f1cfd619ea76-config-data" (OuterVolumeSpecName: "config-data") pod "82c0c342-0360-464e-8f9c-f1cfd619ea76" (UID: "82c0c342-0360-464e-8f9c-f1cfd619ea76"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:07:17 crc kubenswrapper[4865]: I0216 23:07:17.471191 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82c0c342-0360-464e-8f9c-f1cfd619ea76-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "82c0c342-0360-464e-8f9c-f1cfd619ea76" (UID: "82c0c342-0360-464e-8f9c-f1cfd619ea76"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:07:17 crc kubenswrapper[4865]: I0216 23:07:17.477674 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82c0c342-0360-464e-8f9c-f1cfd619ea76-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "82c0c342-0360-464e-8f9c-f1cfd619ea76" (UID: "82c0c342-0360-464e-8f9c-f1cfd619ea76"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:07:17 crc kubenswrapper[4865]: I0216 23:07:17.502373 4865 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/82c0c342-0360-464e-8f9c-f1cfd619ea76-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 23:07:17 crc kubenswrapper[4865]: I0216 23:07:17.502401 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82c0c342-0360-464e-8f9c-f1cfd619ea76-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 23:07:17 crc kubenswrapper[4865]: I0216 23:07:17.502411 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82c0c342-0360-464e-8f9c-f1cfd619ea76-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 23:07:17 crc kubenswrapper[4865]: I0216 23:07:17.502421 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nnw2g\" (UniqueName: \"kubernetes.io/projected/82c0c342-0360-464e-8f9c-f1cfd619ea76-kube-api-access-nnw2g\") on node \"crc\" DevicePath \"\"" Feb 16 23:07:17 crc kubenswrapper[4865]: I0216 23:07:17.502429 4865 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82c0c342-0360-464e-8f9c-f1cfd619ea76-logs\") on node \"crc\" DevicePath \"\"" Feb 16 23:07:17 crc kubenswrapper[4865]: I0216 23:07:17.515455 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a0810695-85aa-432f-8a1d-f5bf69077393","Type":"ContainerStarted","Data":"3462a9592c29e392db8cf0b36f209155bc27981b6d51d5d5beecefe27b2aa736"} Feb 16 23:07:17 crc kubenswrapper[4865]: I0216 23:07:17.521782 4865 generic.go:334] "Generic (PLEG): container finished" podID="82c0c342-0360-464e-8f9c-f1cfd619ea76" containerID="ba774aea146b6cbe19002243011dfb20af41640e9620bb35b6dd76fc0e9352d4" exitCode=0 
Feb 16 23:07:17 crc kubenswrapper[4865]: I0216 23:07:17.522605 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 23:07:17 crc kubenswrapper[4865]: I0216 23:07:17.523630 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"82c0c342-0360-464e-8f9c-f1cfd619ea76","Type":"ContainerDied","Data":"ba774aea146b6cbe19002243011dfb20af41640e9620bb35b6dd76fc0e9352d4"} Feb 16 23:07:17 crc kubenswrapper[4865]: I0216 23:07:17.523709 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"82c0c342-0360-464e-8f9c-f1cfd619ea76","Type":"ContainerDied","Data":"d91fea56cc892dae4b3f94777eccb09a9b4d8e07c9ceb394064b0edfaf5db93f"} Feb 16 23:07:17 crc kubenswrapper[4865]: I0216 23:07:17.523735 4865 scope.go:117] "RemoveContainer" containerID="ba774aea146b6cbe19002243011dfb20af41640e9620bb35b6dd76fc0e9352d4" Feb 16 23:07:17 crc kubenswrapper[4865]: I0216 23:07:17.548581 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.5485554969999997 podStartE2EDuration="2.548555497s" podCreationTimestamp="2026-02-16 23:07:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 23:07:17.540831167 +0000 UTC m=+1277.864538128" watchObservedRunningTime="2026-02-16 23:07:17.548555497 +0000 UTC m=+1277.872262458" Feb 16 23:07:17 crc kubenswrapper[4865]: I0216 23:07:17.564795 4865 scope.go:117] "RemoveContainer" containerID="fc10a95ad9a28eb233723e7d5807826fa44d0c2efce548dcecc71ff870e5b0ca" Feb 16 23:07:17 crc kubenswrapper[4865]: I0216 23:07:17.577122 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 23:07:17 crc kubenswrapper[4865]: I0216 23:07:17.585563 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] 
Feb 16 23:07:17 crc kubenswrapper[4865]: I0216 23:07:17.608832 4865 scope.go:117] "RemoveContainer" containerID="ba774aea146b6cbe19002243011dfb20af41640e9620bb35b6dd76fc0e9352d4" Feb 16 23:07:17 crc kubenswrapper[4865]: E0216 23:07:17.609395 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba774aea146b6cbe19002243011dfb20af41640e9620bb35b6dd76fc0e9352d4\": container with ID starting with ba774aea146b6cbe19002243011dfb20af41640e9620bb35b6dd76fc0e9352d4 not found: ID does not exist" containerID="ba774aea146b6cbe19002243011dfb20af41640e9620bb35b6dd76fc0e9352d4" Feb 16 23:07:17 crc kubenswrapper[4865]: I0216 23:07:17.609436 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba774aea146b6cbe19002243011dfb20af41640e9620bb35b6dd76fc0e9352d4"} err="failed to get container status \"ba774aea146b6cbe19002243011dfb20af41640e9620bb35b6dd76fc0e9352d4\": rpc error: code = NotFound desc = could not find container \"ba774aea146b6cbe19002243011dfb20af41640e9620bb35b6dd76fc0e9352d4\": container with ID starting with ba774aea146b6cbe19002243011dfb20af41640e9620bb35b6dd76fc0e9352d4 not found: ID does not exist" Feb 16 23:07:17 crc kubenswrapper[4865]: I0216 23:07:17.609463 4865 scope.go:117] "RemoveContainer" containerID="fc10a95ad9a28eb233723e7d5807826fa44d0c2efce548dcecc71ff870e5b0ca" Feb 16 23:07:17 crc kubenswrapper[4865]: E0216 23:07:17.610234 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc10a95ad9a28eb233723e7d5807826fa44d0c2efce548dcecc71ff870e5b0ca\": container with ID starting with fc10a95ad9a28eb233723e7d5807826fa44d0c2efce548dcecc71ff870e5b0ca not found: ID does not exist" containerID="fc10a95ad9a28eb233723e7d5807826fa44d0c2efce548dcecc71ff870e5b0ca" Feb 16 23:07:17 crc kubenswrapper[4865]: I0216 23:07:17.610270 4865 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"fc10a95ad9a28eb233723e7d5807826fa44d0c2efce548dcecc71ff870e5b0ca"} err="failed to get container status \"fc10a95ad9a28eb233723e7d5807826fa44d0c2efce548dcecc71ff870e5b0ca\": rpc error: code = NotFound desc = could not find container \"fc10a95ad9a28eb233723e7d5807826fa44d0c2efce548dcecc71ff870e5b0ca\": container with ID starting with fc10a95ad9a28eb233723e7d5807826fa44d0c2efce548dcecc71ff870e5b0ca not found: ID does not exist" Feb 16 23:07:17 crc kubenswrapper[4865]: I0216 23:07:17.617514 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 16 23:07:17 crc kubenswrapper[4865]: E0216 23:07:17.618008 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82c0c342-0360-464e-8f9c-f1cfd619ea76" containerName="nova-metadata-log" Feb 16 23:07:17 crc kubenswrapper[4865]: I0216 23:07:17.618029 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="82c0c342-0360-464e-8f9c-f1cfd619ea76" containerName="nova-metadata-log" Feb 16 23:07:17 crc kubenswrapper[4865]: E0216 23:07:17.618054 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82c0c342-0360-464e-8f9c-f1cfd619ea76" containerName="nova-metadata-metadata" Feb 16 23:07:17 crc kubenswrapper[4865]: I0216 23:07:17.618061 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="82c0c342-0360-464e-8f9c-f1cfd619ea76" containerName="nova-metadata-metadata" Feb 16 23:07:17 crc kubenswrapper[4865]: I0216 23:07:17.618238 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="82c0c342-0360-464e-8f9c-f1cfd619ea76" containerName="nova-metadata-metadata" Feb 16 23:07:17 crc kubenswrapper[4865]: I0216 23:07:17.618264 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="82c0c342-0360-464e-8f9c-f1cfd619ea76" containerName="nova-metadata-log" Feb 16 23:07:17 crc kubenswrapper[4865]: I0216 23:07:17.619286 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 23:07:17 crc kubenswrapper[4865]: I0216 23:07:17.622959 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 16 23:07:17 crc kubenswrapper[4865]: I0216 23:07:17.623166 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 16 23:07:17 crc kubenswrapper[4865]: I0216 23:07:17.641564 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 23:07:17 crc kubenswrapper[4865]: I0216 23:07:17.708913 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/811614ef-6229-489e-8da4-e1d4b1a5d5fd-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"811614ef-6229-489e-8da4-e1d4b1a5d5fd\") " pod="openstack/nova-metadata-0" Feb 16 23:07:17 crc kubenswrapper[4865]: I0216 23:07:17.708958 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/811614ef-6229-489e-8da4-e1d4b1a5d5fd-logs\") pod \"nova-metadata-0\" (UID: \"811614ef-6229-489e-8da4-e1d4b1a5d5fd\") " pod="openstack/nova-metadata-0" Feb 16 23:07:17 crc kubenswrapper[4865]: I0216 23:07:17.708988 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/811614ef-6229-489e-8da4-e1d4b1a5d5fd-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"811614ef-6229-489e-8da4-e1d4b1a5d5fd\") " pod="openstack/nova-metadata-0" Feb 16 23:07:17 crc kubenswrapper[4865]: I0216 23:07:17.709168 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66x86\" (UniqueName: \"kubernetes.io/projected/811614ef-6229-489e-8da4-e1d4b1a5d5fd-kube-api-access-66x86\") pod 
\"nova-metadata-0\" (UID: \"811614ef-6229-489e-8da4-e1d4b1a5d5fd\") " pod="openstack/nova-metadata-0" Feb 16 23:07:17 crc kubenswrapper[4865]: I0216 23:07:17.709915 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/811614ef-6229-489e-8da4-e1d4b1a5d5fd-config-data\") pod \"nova-metadata-0\" (UID: \"811614ef-6229-489e-8da4-e1d4b1a5d5fd\") " pod="openstack/nova-metadata-0" Feb 16 23:07:17 crc kubenswrapper[4865]: I0216 23:07:17.811996 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/811614ef-6229-489e-8da4-e1d4b1a5d5fd-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"811614ef-6229-489e-8da4-e1d4b1a5d5fd\") " pod="openstack/nova-metadata-0" Feb 16 23:07:17 crc kubenswrapper[4865]: I0216 23:07:17.812046 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66x86\" (UniqueName: \"kubernetes.io/projected/811614ef-6229-489e-8da4-e1d4b1a5d5fd-kube-api-access-66x86\") pod \"nova-metadata-0\" (UID: \"811614ef-6229-489e-8da4-e1d4b1a5d5fd\") " pod="openstack/nova-metadata-0" Feb 16 23:07:17 crc kubenswrapper[4865]: I0216 23:07:17.812183 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/811614ef-6229-489e-8da4-e1d4b1a5d5fd-config-data\") pod \"nova-metadata-0\" (UID: \"811614ef-6229-489e-8da4-e1d4b1a5d5fd\") " pod="openstack/nova-metadata-0" Feb 16 23:07:17 crc kubenswrapper[4865]: I0216 23:07:17.812221 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/811614ef-6229-489e-8da4-e1d4b1a5d5fd-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"811614ef-6229-489e-8da4-e1d4b1a5d5fd\") " pod="openstack/nova-metadata-0" Feb 16 23:07:17 crc kubenswrapper[4865]: I0216 
23:07:17.812242 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/811614ef-6229-489e-8da4-e1d4b1a5d5fd-logs\") pod \"nova-metadata-0\" (UID: \"811614ef-6229-489e-8da4-e1d4b1a5d5fd\") " pod="openstack/nova-metadata-0" Feb 16 23:07:17 crc kubenswrapper[4865]: I0216 23:07:17.812702 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/811614ef-6229-489e-8da4-e1d4b1a5d5fd-logs\") pod \"nova-metadata-0\" (UID: \"811614ef-6229-489e-8da4-e1d4b1a5d5fd\") " pod="openstack/nova-metadata-0" Feb 16 23:07:17 crc kubenswrapper[4865]: I0216 23:07:17.815509 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/811614ef-6229-489e-8da4-e1d4b1a5d5fd-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"811614ef-6229-489e-8da4-e1d4b1a5d5fd\") " pod="openstack/nova-metadata-0" Feb 16 23:07:17 crc kubenswrapper[4865]: I0216 23:07:17.816031 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/811614ef-6229-489e-8da4-e1d4b1a5d5fd-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"811614ef-6229-489e-8da4-e1d4b1a5d5fd\") " pod="openstack/nova-metadata-0" Feb 16 23:07:17 crc kubenswrapper[4865]: I0216 23:07:17.824571 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/811614ef-6229-489e-8da4-e1d4b1a5d5fd-config-data\") pod \"nova-metadata-0\" (UID: \"811614ef-6229-489e-8da4-e1d4b1a5d5fd\") " pod="openstack/nova-metadata-0" Feb 16 23:07:17 crc kubenswrapper[4865]: I0216 23:07:17.829350 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66x86\" (UniqueName: \"kubernetes.io/projected/811614ef-6229-489e-8da4-e1d4b1a5d5fd-kube-api-access-66x86\") pod \"nova-metadata-0\" (UID: 
\"811614ef-6229-489e-8da4-e1d4b1a5d5fd\") " pod="openstack/nova-metadata-0" Feb 16 23:07:17 crc kubenswrapper[4865]: I0216 23:07:17.957082 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 23:07:18 crc kubenswrapper[4865]: I0216 23:07:18.427099 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82c0c342-0360-464e-8f9c-f1cfd619ea76" path="/var/lib/kubelet/pods/82c0c342-0360-464e-8f9c-f1cfd619ea76/volumes" Feb 16 23:07:18 crc kubenswrapper[4865]: I0216 23:07:18.481803 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 23:07:18 crc kubenswrapper[4865]: W0216 23:07:18.483477 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod811614ef_6229_489e_8da4_e1d4b1a5d5fd.slice/crio-01d32f8d4c265f9d653ae873e23ca55d8e8bb815bb4fccc69f263615ae718539 WatchSource:0}: Error finding container 01d32f8d4c265f9d653ae873e23ca55d8e8bb815bb4fccc69f263615ae718539: Status 404 returned error can't find the container with id 01d32f8d4c265f9d653ae873e23ca55d8e8bb815bb4fccc69f263615ae718539 Feb 16 23:07:18 crc kubenswrapper[4865]: I0216 23:07:18.546004 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"811614ef-6229-489e-8da4-e1d4b1a5d5fd","Type":"ContainerStarted","Data":"01d32f8d4c265f9d653ae873e23ca55d8e8bb815bb4fccc69f263615ae718539"} Feb 16 23:07:19 crc kubenswrapper[4865]: I0216 23:07:19.566927 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"811614ef-6229-489e-8da4-e1d4b1a5d5fd","Type":"ContainerStarted","Data":"4adde11438b0597a8be16305a2c8cae2c1f9deec8a3f24c5031911320e0d8bbe"} Feb 16 23:07:19 crc kubenswrapper[4865]: I0216 23:07:19.567455 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"811614ef-6229-489e-8da4-e1d4b1a5d5fd","Type":"ContainerStarted","Data":"48421978a9f6f858314880b92cf0ffb1d836bdcdd78f57d0654c10d0cfc905c8"} Feb 16 23:07:19 crc kubenswrapper[4865]: I0216 23:07:19.602987 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.602947753 podStartE2EDuration="2.602947753s" podCreationTimestamp="2026-02-16 23:07:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 23:07:19.589553172 +0000 UTC m=+1279.913260153" watchObservedRunningTime="2026-02-16 23:07:19.602947753 +0000 UTC m=+1279.926654764" Feb 16 23:07:20 crc kubenswrapper[4865]: I0216 23:07:20.886831 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 16 23:07:22 crc kubenswrapper[4865]: I0216 23:07:22.957246 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 16 23:07:22 crc kubenswrapper[4865]: I0216 23:07:22.958534 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 16 23:07:25 crc kubenswrapper[4865]: I0216 23:07:25.216142 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 23:07:25 crc kubenswrapper[4865]: I0216 23:07:25.216551 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 23:07:25 crc kubenswrapper[4865]: I0216 23:07:25.886707 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 16 23:07:25 crc kubenswrapper[4865]: I0216 23:07:25.943575 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 16 23:07:26 crc kubenswrapper[4865]: I0216 23:07:26.230531 4865 prober.go:107] "Probe failed" probeType="Startup" 
pod="openstack/nova-api-0" podUID="5a4b1542-e38f-4ebf-9ca9-028ced41d506" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.205:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 23:07:26 crc kubenswrapper[4865]: I0216 23:07:26.230541 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="5a4b1542-e38f-4ebf-9ca9-028ced41d506" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.205:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 23:07:26 crc kubenswrapper[4865]: I0216 23:07:26.692006 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 16 23:07:27 crc kubenswrapper[4865]: I0216 23:07:27.958257 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 16 23:07:27 crc kubenswrapper[4865]: I0216 23:07:27.959542 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 16 23:07:28 crc kubenswrapper[4865]: I0216 23:07:28.974427 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="811614ef-6229-489e-8da4-e1d4b1a5d5fd" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.207:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 23:07:28 crc kubenswrapper[4865]: I0216 23:07:28.975861 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="811614ef-6229-489e-8da4-e1d4b1a5d5fd" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.207:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 23:07:32 crc kubenswrapper[4865]: I0216 23:07:32.893336 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/ceilometer-0" Feb 16 23:07:35 crc kubenswrapper[4865]: I0216 23:07:35.240891 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 16 23:07:35 crc kubenswrapper[4865]: I0216 23:07:35.241806 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 16 23:07:35 crc kubenswrapper[4865]: I0216 23:07:35.243176 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 16 23:07:35 crc kubenswrapper[4865]: I0216 23:07:35.248800 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 16 23:07:35 crc kubenswrapper[4865]: I0216 23:07:35.760566 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 16 23:07:35 crc kubenswrapper[4865]: I0216 23:07:35.773504 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 16 23:07:37 crc kubenswrapper[4865]: I0216 23:07:37.984169 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 16 23:07:37 crc kubenswrapper[4865]: I0216 23:07:37.994735 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 16 23:07:37 crc kubenswrapper[4865]: I0216 23:07:37.998323 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 16 23:07:38 crc kubenswrapper[4865]: I0216 23:07:38.830935 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 16 23:07:45 crc kubenswrapper[4865]: I0216 23:07:45.665413 4865 patch_prober.go:28] interesting pod/machine-config-daemon-7sl6f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 23:07:45 crc kubenswrapper[4865]: I0216 23:07:45.668025 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 23:07:46 crc kubenswrapper[4865]: I0216 23:07:46.647236 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 23:07:48 crc kubenswrapper[4865]: I0216 23:07:48.511493 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 23:07:51 crc kubenswrapper[4865]: I0216 23:07:51.430535 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="17869fd2-4bd3-490c-be91-857d7cab1e73" containerName="rabbitmq" containerID="cri-o://f5d2ae589030f7114001a723de2dbeed973554698c74b500878309864eb70800" gracePeriod=604796 Feb 16 23:07:52 crc kubenswrapper[4865]: I0216 23:07:52.834274 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="9f530b91-ceff-467a-a146-60716412bbeb" containerName="rabbitmq" containerID="cri-o://413dc52c8c576618c925ec5ea07b1236688bc25d7fa9b26feb155c851f33afc6" gracePeriod=604796 Feb 16 23:07:55 crc kubenswrapper[4865]: I0216 23:07:55.953956 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="17869fd2-4bd3-490c-be91-857d7cab1e73" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.99:5671: connect: connection refused" Feb 16 23:07:56 crc kubenswrapper[4865]: I0216 23:07:56.278734 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="9f530b91-ceff-467a-a146-60716412bbeb" 
containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.100:5671: connect: connection refused" Feb 16 23:07:58 crc kubenswrapper[4865]: I0216 23:07:58.037985 4865 generic.go:334] "Generic (PLEG): container finished" podID="17869fd2-4bd3-490c-be91-857d7cab1e73" containerID="f5d2ae589030f7114001a723de2dbeed973554698c74b500878309864eb70800" exitCode=0 Feb 16 23:07:58 crc kubenswrapper[4865]: I0216 23:07:58.038234 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"17869fd2-4bd3-490c-be91-857d7cab1e73","Type":"ContainerDied","Data":"f5d2ae589030f7114001a723de2dbeed973554698c74b500878309864eb70800"} Feb 16 23:07:58 crc kubenswrapper[4865]: I0216 23:07:58.149609 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 16 23:07:58 crc kubenswrapper[4865]: I0216 23:07:58.293582 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"17869fd2-4bd3-490c-be91-857d7cab1e73\" (UID: \"17869fd2-4bd3-490c-be91-857d7cab1e73\") " Feb 16 23:07:58 crc kubenswrapper[4865]: I0216 23:07:58.293981 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/17869fd2-4bd3-490c-be91-857d7cab1e73-rabbitmq-confd\") pod \"17869fd2-4bd3-490c-be91-857d7cab1e73\" (UID: \"17869fd2-4bd3-490c-be91-857d7cab1e73\") " Feb 16 23:07:58 crc kubenswrapper[4865]: I0216 23:07:58.294053 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/17869fd2-4bd3-490c-be91-857d7cab1e73-pod-info\") pod \"17869fd2-4bd3-490c-be91-857d7cab1e73\" (UID: \"17869fd2-4bd3-490c-be91-857d7cab1e73\") " Feb 16 23:07:58 crc kubenswrapper[4865]: I0216 23:07:58.294102 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/17869fd2-4bd3-490c-be91-857d7cab1e73-config-data\") pod \"17869fd2-4bd3-490c-be91-857d7cab1e73\" (UID: \"17869fd2-4bd3-490c-be91-857d7cab1e73\") " Feb 16 23:07:58 crc kubenswrapper[4865]: I0216 23:07:58.294210 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/17869fd2-4bd3-490c-be91-857d7cab1e73-rabbitmq-erlang-cookie\") pod \"17869fd2-4bd3-490c-be91-857d7cab1e73\" (UID: \"17869fd2-4bd3-490c-be91-857d7cab1e73\") " Feb 16 23:07:58 crc kubenswrapper[4865]: I0216 23:07:58.294233 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/17869fd2-4bd3-490c-be91-857d7cab1e73-rabbitmq-tls\") pod \"17869fd2-4bd3-490c-be91-857d7cab1e73\" (UID: \"17869fd2-4bd3-490c-be91-857d7cab1e73\") " Feb 16 23:07:58 crc kubenswrapper[4865]: I0216 23:07:58.294271 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/17869fd2-4bd3-490c-be91-857d7cab1e73-plugins-conf\") pod \"17869fd2-4bd3-490c-be91-857d7cab1e73\" (UID: \"17869fd2-4bd3-490c-be91-857d7cab1e73\") " Feb 16 23:07:58 crc kubenswrapper[4865]: I0216 23:07:58.294379 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/17869fd2-4bd3-490c-be91-857d7cab1e73-server-conf\") pod \"17869fd2-4bd3-490c-be91-857d7cab1e73\" (UID: \"17869fd2-4bd3-490c-be91-857d7cab1e73\") " Feb 16 23:07:58 crc kubenswrapper[4865]: I0216 23:07:58.294423 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/17869fd2-4bd3-490c-be91-857d7cab1e73-rabbitmq-plugins\") pod \"17869fd2-4bd3-490c-be91-857d7cab1e73\" (UID: \"17869fd2-4bd3-490c-be91-857d7cab1e73\") " 
Feb 16 23:07:58 crc kubenswrapper[4865]: I0216 23:07:58.294459 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/17869fd2-4bd3-490c-be91-857d7cab1e73-erlang-cookie-secret\") pod \"17869fd2-4bd3-490c-be91-857d7cab1e73\" (UID: \"17869fd2-4bd3-490c-be91-857d7cab1e73\") " Feb 16 23:07:58 crc kubenswrapper[4865]: I0216 23:07:58.294487 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nk2qj\" (UniqueName: \"kubernetes.io/projected/17869fd2-4bd3-490c-be91-857d7cab1e73-kube-api-access-nk2qj\") pod \"17869fd2-4bd3-490c-be91-857d7cab1e73\" (UID: \"17869fd2-4bd3-490c-be91-857d7cab1e73\") " Feb 16 23:07:58 crc kubenswrapper[4865]: I0216 23:07:58.300066 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17869fd2-4bd3-490c-be91-857d7cab1e73-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "17869fd2-4bd3-490c-be91-857d7cab1e73" (UID: "17869fd2-4bd3-490c-be91-857d7cab1e73"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:07:58 crc kubenswrapper[4865]: I0216 23:07:58.300490 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17869fd2-4bd3-490c-be91-857d7cab1e73-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "17869fd2-4bd3-490c-be91-857d7cab1e73" (UID: "17869fd2-4bd3-490c-be91-857d7cab1e73"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:07:58 crc kubenswrapper[4865]: I0216 23:07:58.301550 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17869fd2-4bd3-490c-be91-857d7cab1e73-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "17869fd2-4bd3-490c-be91-857d7cab1e73" (UID: "17869fd2-4bd3-490c-be91-857d7cab1e73"). 
InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:07:58 crc kubenswrapper[4865]: I0216 23:07:58.303750 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "persistence") pod "17869fd2-4bd3-490c-be91-857d7cab1e73" (UID: "17869fd2-4bd3-490c-be91-857d7cab1e73"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 16 23:07:58 crc kubenswrapper[4865]: I0216 23:07:58.303863 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17869fd2-4bd3-490c-be91-857d7cab1e73-kube-api-access-nk2qj" (OuterVolumeSpecName: "kube-api-access-nk2qj") pod "17869fd2-4bd3-490c-be91-857d7cab1e73" (UID: "17869fd2-4bd3-490c-be91-857d7cab1e73"). InnerVolumeSpecName "kube-api-access-nk2qj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:07:58 crc kubenswrapper[4865]: I0216 23:07:58.307265 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17869fd2-4bd3-490c-be91-857d7cab1e73-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "17869fd2-4bd3-490c-be91-857d7cab1e73" (UID: "17869fd2-4bd3-490c-be91-857d7cab1e73"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:07:58 crc kubenswrapper[4865]: I0216 23:07:58.315647 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/17869fd2-4bd3-490c-be91-857d7cab1e73-pod-info" (OuterVolumeSpecName: "pod-info") pod "17869fd2-4bd3-490c-be91-857d7cab1e73" (UID: "17869fd2-4bd3-490c-be91-857d7cab1e73"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 16 23:07:58 crc kubenswrapper[4865]: I0216 23:07:58.334448 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17869fd2-4bd3-490c-be91-857d7cab1e73-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "17869fd2-4bd3-490c-be91-857d7cab1e73" (UID: "17869fd2-4bd3-490c-be91-857d7cab1e73"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:07:58 crc kubenswrapper[4865]: I0216 23:07:58.385892 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17869fd2-4bd3-490c-be91-857d7cab1e73-config-data" (OuterVolumeSpecName: "config-data") pod "17869fd2-4bd3-490c-be91-857d7cab1e73" (UID: "17869fd2-4bd3-490c-be91-857d7cab1e73"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:07:58 crc kubenswrapper[4865]: I0216 23:07:58.400012 4865 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Feb 16 23:07:58 crc kubenswrapper[4865]: I0216 23:07:58.400047 4865 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/17869fd2-4bd3-490c-be91-857d7cab1e73-pod-info\") on node \"crc\" DevicePath \"\"" Feb 16 23:07:58 crc kubenswrapper[4865]: I0216 23:07:58.400056 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/17869fd2-4bd3-490c-be91-857d7cab1e73-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 23:07:58 crc kubenswrapper[4865]: I0216 23:07:58.400066 4865 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/17869fd2-4bd3-490c-be91-857d7cab1e73-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath 
\"\"" Feb 16 23:07:58 crc kubenswrapper[4865]: I0216 23:07:58.400076 4865 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/17869fd2-4bd3-490c-be91-857d7cab1e73-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 16 23:07:58 crc kubenswrapper[4865]: I0216 23:07:58.400083 4865 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/17869fd2-4bd3-490c-be91-857d7cab1e73-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 16 23:07:58 crc kubenswrapper[4865]: I0216 23:07:58.400092 4865 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/17869fd2-4bd3-490c-be91-857d7cab1e73-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 16 23:07:58 crc kubenswrapper[4865]: I0216 23:07:58.400099 4865 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/17869fd2-4bd3-490c-be91-857d7cab1e73-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 16 23:07:58 crc kubenswrapper[4865]: I0216 23:07:58.400107 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nk2qj\" (UniqueName: \"kubernetes.io/projected/17869fd2-4bd3-490c-be91-857d7cab1e73-kube-api-access-nk2qj\") on node \"crc\" DevicePath \"\"" Feb 16 23:07:58 crc kubenswrapper[4865]: I0216 23:07:58.413957 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17869fd2-4bd3-490c-be91-857d7cab1e73-server-conf" (OuterVolumeSpecName: "server-conf") pod "17869fd2-4bd3-490c-be91-857d7cab1e73" (UID: "17869fd2-4bd3-490c-be91-857d7cab1e73"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:07:58 crc kubenswrapper[4865]: I0216 23:07:58.431979 4865 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Feb 16 23:07:58 crc kubenswrapper[4865]: I0216 23:07:58.464801 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17869fd2-4bd3-490c-be91-857d7cab1e73-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "17869fd2-4bd3-490c-be91-857d7cab1e73" (UID: "17869fd2-4bd3-490c-be91-857d7cab1e73"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:07:58 crc kubenswrapper[4865]: I0216 23:07:58.501837 4865 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/17869fd2-4bd3-490c-be91-857d7cab1e73-server-conf\") on node \"crc\" DevicePath \"\"" Feb 16 23:07:58 crc kubenswrapper[4865]: I0216 23:07:58.501875 4865 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Feb 16 23:07:58 crc kubenswrapper[4865]: I0216 23:07:58.501886 4865 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/17869fd2-4bd3-490c-be91-857d7cab1e73-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.049208 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"17869fd2-4bd3-490c-be91-857d7cab1e73","Type":"ContainerDied","Data":"bc3bb52d26458b911c9b9a25f144d0360822975086eba5b632e71cc8177cb0a0"} Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.049258 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.049531 4865 scope.go:117] "RemoveContainer" containerID="f5d2ae589030f7114001a723de2dbeed973554698c74b500878309864eb70800" Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.118674 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.127758 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.151773 4865 scope.go:117] "RemoveContainer" containerID="ce23f91df3b645e3c619579ce46fec34711e865e6983a7989c7f9f247f88c0f1" Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.167778 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 23:07:59 crc kubenswrapper[4865]: E0216 23:07:59.169318 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17869fd2-4bd3-490c-be91-857d7cab1e73" containerName="setup-container" Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.169339 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="17869fd2-4bd3-490c-be91-857d7cab1e73" containerName="setup-container" Feb 16 23:07:59 crc kubenswrapper[4865]: E0216 23:07:59.169359 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17869fd2-4bd3-490c-be91-857d7cab1e73" containerName="rabbitmq" Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.169367 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="17869fd2-4bd3-490c-be91-857d7cab1e73" containerName="rabbitmq" Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.169823 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="17869fd2-4bd3-490c-be91-857d7cab1e73" containerName="rabbitmq" Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.171530 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.176574 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.176714 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.176891 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-pdtkv" Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.176954 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.177271 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.178162 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.179115 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.188345 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.321413 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2e2b8953-a55e-40c8-974f-a76a1352fbfb-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"2e2b8953-a55e-40c8-974f-a76a1352fbfb\") " pod="openstack/rabbitmq-server-0" Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.321498 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/2e2b8953-a55e-40c8-974f-a76a1352fbfb-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"2e2b8953-a55e-40c8-974f-a76a1352fbfb\") " pod="openstack/rabbitmq-server-0" Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.321707 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"2e2b8953-a55e-40c8-974f-a76a1352fbfb\") " pod="openstack/rabbitmq-server-0" Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.321756 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtwgw\" (UniqueName: \"kubernetes.io/projected/2e2b8953-a55e-40c8-974f-a76a1352fbfb-kube-api-access-mtwgw\") pod \"rabbitmq-server-0\" (UID: \"2e2b8953-a55e-40c8-974f-a76a1352fbfb\") " pod="openstack/rabbitmq-server-0" Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.322022 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2e2b8953-a55e-40c8-974f-a76a1352fbfb-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"2e2b8953-a55e-40c8-974f-a76a1352fbfb\") " pod="openstack/rabbitmq-server-0" Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.322206 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2e2b8953-a55e-40c8-974f-a76a1352fbfb-pod-info\") pod \"rabbitmq-server-0\" (UID: \"2e2b8953-a55e-40c8-974f-a76a1352fbfb\") " pod="openstack/rabbitmq-server-0" Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.322247 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2e2b8953-a55e-40c8-974f-a76a1352fbfb-rabbitmq-confd\") 
pod \"rabbitmq-server-0\" (UID: \"2e2b8953-a55e-40c8-974f-a76a1352fbfb\") " pod="openstack/rabbitmq-server-0" Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.322325 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2e2b8953-a55e-40c8-974f-a76a1352fbfb-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"2e2b8953-a55e-40c8-974f-a76a1352fbfb\") " pod="openstack/rabbitmq-server-0" Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.322412 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2e2b8953-a55e-40c8-974f-a76a1352fbfb-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"2e2b8953-a55e-40c8-974f-a76a1352fbfb\") " pod="openstack/rabbitmq-server-0" Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.322463 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2e2b8953-a55e-40c8-974f-a76a1352fbfb-config-data\") pod \"rabbitmq-server-0\" (UID: \"2e2b8953-a55e-40c8-974f-a76a1352fbfb\") " pod="openstack/rabbitmq-server-0" Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.322557 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2e2b8953-a55e-40c8-974f-a76a1352fbfb-server-conf\") pod \"rabbitmq-server-0\" (UID: \"2e2b8953-a55e-40c8-974f-a76a1352fbfb\") " pod="openstack/rabbitmq-server-0" Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.424178 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2e2b8953-a55e-40c8-974f-a76a1352fbfb-pod-info\") pod \"rabbitmq-server-0\" (UID: \"2e2b8953-a55e-40c8-974f-a76a1352fbfb\") " 
pod="openstack/rabbitmq-server-0" Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.424219 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2e2b8953-a55e-40c8-974f-a76a1352fbfb-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"2e2b8953-a55e-40c8-974f-a76a1352fbfb\") " pod="openstack/rabbitmq-server-0" Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.424244 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2e2b8953-a55e-40c8-974f-a76a1352fbfb-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"2e2b8953-a55e-40c8-974f-a76a1352fbfb\") " pod="openstack/rabbitmq-server-0" Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.424296 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2e2b8953-a55e-40c8-974f-a76a1352fbfb-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"2e2b8953-a55e-40c8-974f-a76a1352fbfb\") " pod="openstack/rabbitmq-server-0" Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.424323 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2e2b8953-a55e-40c8-974f-a76a1352fbfb-config-data\") pod \"rabbitmq-server-0\" (UID: \"2e2b8953-a55e-40c8-974f-a76a1352fbfb\") " pod="openstack/rabbitmq-server-0" Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.424352 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2e2b8953-a55e-40c8-974f-a76a1352fbfb-server-conf\") pod \"rabbitmq-server-0\" (UID: \"2e2b8953-a55e-40c8-974f-a76a1352fbfb\") " pod="openstack/rabbitmq-server-0" Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.424383 4865 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2e2b8953-a55e-40c8-974f-a76a1352fbfb-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"2e2b8953-a55e-40c8-974f-a76a1352fbfb\") " pod="openstack/rabbitmq-server-0" Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.424399 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2e2b8953-a55e-40c8-974f-a76a1352fbfb-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"2e2b8953-a55e-40c8-974f-a76a1352fbfb\") " pod="openstack/rabbitmq-server-0" Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.424437 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"2e2b8953-a55e-40c8-974f-a76a1352fbfb\") " pod="openstack/rabbitmq-server-0" Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.424457 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtwgw\" (UniqueName: \"kubernetes.io/projected/2e2b8953-a55e-40c8-974f-a76a1352fbfb-kube-api-access-mtwgw\") pod \"rabbitmq-server-0\" (UID: \"2e2b8953-a55e-40c8-974f-a76a1352fbfb\") " pod="openstack/rabbitmq-server-0" Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.424499 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2e2b8953-a55e-40c8-974f-a76a1352fbfb-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"2e2b8953-a55e-40c8-974f-a76a1352fbfb\") " pod="openstack/rabbitmq-server-0" Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.424887 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2e2b8953-a55e-40c8-974f-a76a1352fbfb-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: 
\"2e2b8953-a55e-40c8-974f-a76a1352fbfb\") " pod="openstack/rabbitmq-server-0" Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.426680 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2e2b8953-a55e-40c8-974f-a76a1352fbfb-server-conf\") pod \"rabbitmq-server-0\" (UID: \"2e2b8953-a55e-40c8-974f-a76a1352fbfb\") " pod="openstack/rabbitmq-server-0" Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.427778 4865 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"2e2b8953-a55e-40c8-974f-a76a1352fbfb\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/rabbitmq-server-0" Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.428555 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2e2b8953-a55e-40c8-974f-a76a1352fbfb-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"2e2b8953-a55e-40c8-974f-a76a1352fbfb\") " pod="openstack/rabbitmq-server-0" Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.429703 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2e2b8953-a55e-40c8-974f-a76a1352fbfb-config-data\") pod \"rabbitmq-server-0\" (UID: \"2e2b8953-a55e-40c8-974f-a76a1352fbfb\") " pod="openstack/rabbitmq-server-0" Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.432102 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2e2b8953-a55e-40c8-974f-a76a1352fbfb-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"2e2b8953-a55e-40c8-974f-a76a1352fbfb\") " pod="openstack/rabbitmq-server-0" Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.432545 4865 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2e2b8953-a55e-40c8-974f-a76a1352fbfb-pod-info\") pod \"rabbitmq-server-0\" (UID: \"2e2b8953-a55e-40c8-974f-a76a1352fbfb\") " pod="openstack/rabbitmq-server-0" Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.432698 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2e2b8953-a55e-40c8-974f-a76a1352fbfb-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"2e2b8953-a55e-40c8-974f-a76a1352fbfb\") " pod="openstack/rabbitmq-server-0" Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.432922 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2e2b8953-a55e-40c8-974f-a76a1352fbfb-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"2e2b8953-a55e-40c8-974f-a76a1352fbfb\") " pod="openstack/rabbitmq-server-0" Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.451815 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2e2b8953-a55e-40c8-974f-a76a1352fbfb-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"2e2b8953-a55e-40c8-974f-a76a1352fbfb\") " pod="openstack/rabbitmq-server-0" Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.453758 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtwgw\" (UniqueName: \"kubernetes.io/projected/2e2b8953-a55e-40c8-974f-a76a1352fbfb-kube-api-access-mtwgw\") pod \"rabbitmq-server-0\" (UID: \"2e2b8953-a55e-40c8-974f-a76a1352fbfb\") " pod="openstack/rabbitmq-server-0" Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.482422 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: 
\"2e2b8953-a55e-40c8-974f-a76a1352fbfb\") " pod="openstack/rabbitmq-server-0" Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.542044 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.545732 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.731807 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9f530b91-ceff-467a-a146-60716412bbeb-rabbitmq-confd\") pod \"9f530b91-ceff-467a-a146-60716412bbeb\" (UID: \"9f530b91-ceff-467a-a146-60716412bbeb\") " Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.732083 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9f530b91-ceff-467a-a146-60716412bbeb-server-conf\") pod \"9f530b91-ceff-467a-a146-60716412bbeb\" (UID: \"9f530b91-ceff-467a-a146-60716412bbeb\") " Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.732123 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9f530b91-ceff-467a-a146-60716412bbeb-rabbitmq-plugins\") pod \"9f530b91-ceff-467a-a146-60716412bbeb\" (UID: \"9f530b91-ceff-467a-a146-60716412bbeb\") " Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.732231 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9f530b91-ceff-467a-a146-60716412bbeb-config-data\") pod \"9f530b91-ceff-467a-a146-60716412bbeb\" (UID: \"9f530b91-ceff-467a-a146-60716412bbeb\") " Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.732325 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9f530b91-ceff-467a-a146-60716412bbeb-plugins-conf\") pod \"9f530b91-ceff-467a-a146-60716412bbeb\" (UID: \"9f530b91-ceff-467a-a146-60716412bbeb\") " Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.732349 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"9f530b91-ceff-467a-a146-60716412bbeb\" (UID: \"9f530b91-ceff-467a-a146-60716412bbeb\") " Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.732371 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9f530b91-ceff-467a-a146-60716412bbeb-rabbitmq-tls\") pod \"9f530b91-ceff-467a-a146-60716412bbeb\" (UID: \"9f530b91-ceff-467a-a146-60716412bbeb\") " Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.732390 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9f530b91-ceff-467a-a146-60716412bbeb-rabbitmq-erlang-cookie\") pod \"9f530b91-ceff-467a-a146-60716412bbeb\" (UID: \"9f530b91-ceff-467a-a146-60716412bbeb\") " Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.732436 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9f530b91-ceff-467a-a146-60716412bbeb-erlang-cookie-secret\") pod \"9f530b91-ceff-467a-a146-60716412bbeb\" (UID: \"9f530b91-ceff-467a-a146-60716412bbeb\") " Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.732509 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qnx75\" (UniqueName: \"kubernetes.io/projected/9f530b91-ceff-467a-a146-60716412bbeb-kube-api-access-qnx75\") pod \"9f530b91-ceff-467a-a146-60716412bbeb\" (UID: \"9f530b91-ceff-467a-a146-60716412bbeb\") " Feb 16 23:07:59 crc 
kubenswrapper[4865]: I0216 23:07:59.732541 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9f530b91-ceff-467a-a146-60716412bbeb-pod-info\") pod \"9f530b91-ceff-467a-a146-60716412bbeb\" (UID: \"9f530b91-ceff-467a-a146-60716412bbeb\") " Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.734348 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f530b91-ceff-467a-a146-60716412bbeb-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "9f530b91-ceff-467a-a146-60716412bbeb" (UID: "9f530b91-ceff-467a-a146-60716412bbeb"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.734783 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f530b91-ceff-467a-a146-60716412bbeb-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "9f530b91-ceff-467a-a146-60716412bbeb" (UID: "9f530b91-ceff-467a-a146-60716412bbeb"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.734799 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f530b91-ceff-467a-a146-60716412bbeb-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "9f530b91-ceff-467a-a146-60716412bbeb" (UID: "9f530b91-ceff-467a-a146-60716412bbeb"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.737529 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f530b91-ceff-467a-a146-60716412bbeb-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "9f530b91-ceff-467a-a146-60716412bbeb" (UID: "9f530b91-ceff-467a-a146-60716412bbeb"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.738546 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f530b91-ceff-467a-a146-60716412bbeb-kube-api-access-qnx75" (OuterVolumeSpecName: "kube-api-access-qnx75") pod "9f530b91-ceff-467a-a146-60716412bbeb" (UID: "9f530b91-ceff-467a-a146-60716412bbeb"). InnerVolumeSpecName "kube-api-access-qnx75". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.738897 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f530b91-ceff-467a-a146-60716412bbeb-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "9f530b91-ceff-467a-a146-60716412bbeb" (UID: "9f530b91-ceff-467a-a146-60716412bbeb"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.738984 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "persistence") pod "9f530b91-ceff-467a-a146-60716412bbeb" (UID: "9f530b91-ceff-467a-a146-60716412bbeb"). InnerVolumeSpecName "local-storage02-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.740427 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/9f530b91-ceff-467a-a146-60716412bbeb-pod-info" (OuterVolumeSpecName: "pod-info") pod "9f530b91-ceff-467a-a146-60716412bbeb" (UID: "9f530b91-ceff-467a-a146-60716412bbeb"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.795651 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f530b91-ceff-467a-a146-60716412bbeb-config-data" (OuterVolumeSpecName: "config-data") pod "9f530b91-ceff-467a-a146-60716412bbeb" (UID: "9f530b91-ceff-467a-a146-60716412bbeb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.810192 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f530b91-ceff-467a-a146-60716412bbeb-server-conf" (OuterVolumeSpecName: "server-conf") pod "9f530b91-ceff-467a-a146-60716412bbeb" (UID: "9f530b91-ceff-467a-a146-60716412bbeb"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.835565 4865 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9f530b91-ceff-467a-a146-60716412bbeb-server-conf\") on node \"crc\" DevicePath \"\"" Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.835595 4865 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9f530b91-ceff-467a-a146-60716412bbeb-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.836075 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9f530b91-ceff-467a-a146-60716412bbeb-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.836086 4865 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9f530b91-ceff-467a-a146-60716412bbeb-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.836112 4865 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.836121 4865 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9f530b91-ceff-467a-a146-60716412bbeb-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.836132 4865 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9f530b91-ceff-467a-a146-60716412bbeb-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.836141 4865 
reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9f530b91-ceff-467a-a146-60716412bbeb-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.836195 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qnx75\" (UniqueName: \"kubernetes.io/projected/9f530b91-ceff-467a-a146-60716412bbeb-kube-api-access-qnx75\") on node \"crc\" DevicePath \"\"" Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.836205 4865 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9f530b91-ceff-467a-a146-60716412bbeb-pod-info\") on node \"crc\" DevicePath \"\"" Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.859681 4865 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.877010 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f530b91-ceff-467a-a146-60716412bbeb-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "9f530b91-ceff-467a-a146-60716412bbeb" (UID: "9f530b91-ceff-467a-a146-60716412bbeb"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.937846 4865 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9f530b91-ceff-467a-a146-60716412bbeb-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 16 23:07:59 crc kubenswrapper[4865]: I0216 23:07:59.937889 4865 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Feb 16 23:08:00 crc kubenswrapper[4865]: I0216 23:08:00.068804 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 23:08:00 crc kubenswrapper[4865]: I0216 23:08:00.070617 4865 generic.go:334] "Generic (PLEG): container finished" podID="9f530b91-ceff-467a-a146-60716412bbeb" containerID="413dc52c8c576618c925ec5ea07b1236688bc25d7fa9b26feb155c851f33afc6" exitCode=0 Feb 16 23:08:00 crc kubenswrapper[4865]: I0216 23:08:00.070699 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9f530b91-ceff-467a-a146-60716412bbeb","Type":"ContainerDied","Data":"413dc52c8c576618c925ec5ea07b1236688bc25d7fa9b26feb155c851f33afc6"} Feb 16 23:08:00 crc kubenswrapper[4865]: I0216 23:08:00.070734 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9f530b91-ceff-467a-a146-60716412bbeb","Type":"ContainerDied","Data":"5a0449eb74c6cb9422845d2024cb1f5acb588140c0ddaed9f254d0312183c927"} Feb 16 23:08:00 crc kubenswrapper[4865]: I0216 23:08:00.070757 4865 scope.go:117] "RemoveContainer" containerID="413dc52c8c576618c925ec5ea07b1236688bc25d7fa9b26feb155c851f33afc6" Feb 16 23:08:00 crc kubenswrapper[4865]: I0216 23:08:00.071553 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 16 23:08:00 crc kubenswrapper[4865]: I0216 23:08:00.117489 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 23:08:00 crc kubenswrapper[4865]: I0216 23:08:00.124578 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 23:08:00 crc kubenswrapper[4865]: I0216 23:08:00.130676 4865 scope.go:117] "RemoveContainer" containerID="8963e92573acdbc904854f796c85e27fe20f916a5436ff813ca16b10b69afaa3" Feb 16 23:08:00 crc kubenswrapper[4865]: I0216 23:08:00.147781 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 23:08:00 crc kubenswrapper[4865]: E0216 23:08:00.148183 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f530b91-ceff-467a-a146-60716412bbeb" containerName="setup-container" Feb 16 23:08:00 crc kubenswrapper[4865]: I0216 23:08:00.148200 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f530b91-ceff-467a-a146-60716412bbeb" containerName="setup-container" Feb 16 23:08:00 crc kubenswrapper[4865]: E0216 23:08:00.148215 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f530b91-ceff-467a-a146-60716412bbeb" containerName="rabbitmq" Feb 16 23:08:00 crc kubenswrapper[4865]: I0216 23:08:00.148222 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f530b91-ceff-467a-a146-60716412bbeb" containerName="rabbitmq" Feb 16 23:08:00 crc kubenswrapper[4865]: I0216 23:08:00.148433 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f530b91-ceff-467a-a146-60716412bbeb" containerName="rabbitmq" Feb 16 23:08:00 crc kubenswrapper[4865]: I0216 23:08:00.149408 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 16 23:08:00 crc kubenswrapper[4865]: I0216 23:08:00.152014 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 16 23:08:00 crc kubenswrapper[4865]: I0216 23:08:00.152253 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-fhvgp" Feb 16 23:08:00 crc kubenswrapper[4865]: I0216 23:08:00.152409 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 16 23:08:00 crc kubenswrapper[4865]: I0216 23:08:00.152407 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 16 23:08:00 crc kubenswrapper[4865]: I0216 23:08:00.152553 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 16 23:08:00 crc kubenswrapper[4865]: I0216 23:08:00.152615 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 16 23:08:00 crc kubenswrapper[4865]: I0216 23:08:00.152752 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 16 23:08:00 crc kubenswrapper[4865]: I0216 23:08:00.163357 4865 scope.go:117] "RemoveContainer" containerID="413dc52c8c576618c925ec5ea07b1236688bc25d7fa9b26feb155c851f33afc6" Feb 16 23:08:00 crc kubenswrapper[4865]: E0216 23:08:00.170608 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"413dc52c8c576618c925ec5ea07b1236688bc25d7fa9b26feb155c851f33afc6\": container with ID starting with 413dc52c8c576618c925ec5ea07b1236688bc25d7fa9b26feb155c851f33afc6 not found: ID does not exist" containerID="413dc52c8c576618c925ec5ea07b1236688bc25d7fa9b26feb155c851f33afc6" Feb 16 23:08:00 crc kubenswrapper[4865]: I0216 23:08:00.170641 4865 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"413dc52c8c576618c925ec5ea07b1236688bc25d7fa9b26feb155c851f33afc6"} err="failed to get container status \"413dc52c8c576618c925ec5ea07b1236688bc25d7fa9b26feb155c851f33afc6\": rpc error: code = NotFound desc = could not find container \"413dc52c8c576618c925ec5ea07b1236688bc25d7fa9b26feb155c851f33afc6\": container with ID starting with 413dc52c8c576618c925ec5ea07b1236688bc25d7fa9b26feb155c851f33afc6 not found: ID does not exist" Feb 16 23:08:00 crc kubenswrapper[4865]: I0216 23:08:00.170663 4865 scope.go:117] "RemoveContainer" containerID="8963e92573acdbc904854f796c85e27fe20f916a5436ff813ca16b10b69afaa3" Feb 16 23:08:00 crc kubenswrapper[4865]: E0216 23:08:00.174861 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8963e92573acdbc904854f796c85e27fe20f916a5436ff813ca16b10b69afaa3\": container with ID starting with 8963e92573acdbc904854f796c85e27fe20f916a5436ff813ca16b10b69afaa3 not found: ID does not exist" containerID="8963e92573acdbc904854f796c85e27fe20f916a5436ff813ca16b10b69afaa3" Feb 16 23:08:00 crc kubenswrapper[4865]: I0216 23:08:00.174889 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8963e92573acdbc904854f796c85e27fe20f916a5436ff813ca16b10b69afaa3"} err="failed to get container status \"8963e92573acdbc904854f796c85e27fe20f916a5436ff813ca16b10b69afaa3\": rpc error: code = NotFound desc = could not find container \"8963e92573acdbc904854f796c85e27fe20f916a5436ff813ca16b10b69afaa3\": container with ID starting with 8963e92573acdbc904854f796c85e27fe20f916a5436ff813ca16b10b69afaa3 not found: ID does not exist" Feb 16 23:08:00 crc kubenswrapper[4865]: I0216 23:08:00.189350 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 23:08:00 crc kubenswrapper[4865]: I0216 23:08:00.244334 4865 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6912835d-d862-4295-9a6c-67deb30cbfba-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"6912835d-d862-4295-9a6c-67deb30cbfba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 23:08:00 crc kubenswrapper[4865]: I0216 23:08:00.244737 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6912835d-d862-4295-9a6c-67deb30cbfba-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"6912835d-d862-4295-9a6c-67deb30cbfba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 23:08:00 crc kubenswrapper[4865]: I0216 23:08:00.244894 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qtcq\" (UniqueName: \"kubernetes.io/projected/6912835d-d862-4295-9a6c-67deb30cbfba-kube-api-access-5qtcq\") pod \"rabbitmq-cell1-server-0\" (UID: \"6912835d-d862-4295-9a6c-67deb30cbfba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 23:08:00 crc kubenswrapper[4865]: I0216 23:08:00.245000 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6912835d-d862-4295-9a6c-67deb30cbfba-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"6912835d-d862-4295-9a6c-67deb30cbfba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 23:08:00 crc kubenswrapper[4865]: I0216 23:08:00.245134 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6912835d-d862-4295-9a6c-67deb30cbfba-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6912835d-d862-4295-9a6c-67deb30cbfba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 23:08:00 crc kubenswrapper[4865]: I0216 
23:08:00.245254 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6912835d-d862-4295-9a6c-67deb30cbfba-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"6912835d-d862-4295-9a6c-67deb30cbfba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 23:08:00 crc kubenswrapper[4865]: I0216 23:08:00.245363 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6912835d-d862-4295-9a6c-67deb30cbfba-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"6912835d-d862-4295-9a6c-67deb30cbfba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 23:08:00 crc kubenswrapper[4865]: I0216 23:08:00.245664 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6912835d-d862-4295-9a6c-67deb30cbfba-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"6912835d-d862-4295-9a6c-67deb30cbfba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 23:08:00 crc kubenswrapper[4865]: I0216 23:08:00.245715 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"6912835d-d862-4295-9a6c-67deb30cbfba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 23:08:00 crc kubenswrapper[4865]: I0216 23:08:00.245761 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6912835d-d862-4295-9a6c-67deb30cbfba-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"6912835d-d862-4295-9a6c-67deb30cbfba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 23:08:00 crc kubenswrapper[4865]: I0216 23:08:00.245805 4865 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6912835d-d862-4295-9a6c-67deb30cbfba-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6912835d-d862-4295-9a6c-67deb30cbfba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 23:08:00 crc kubenswrapper[4865]: I0216 23:08:00.347491 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6912835d-d862-4295-9a6c-67deb30cbfba-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"6912835d-d862-4295-9a6c-67deb30cbfba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 23:08:00 crc kubenswrapper[4865]: I0216 23:08:00.347534 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6912835d-d862-4295-9a6c-67deb30cbfba-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"6912835d-d862-4295-9a6c-67deb30cbfba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 23:08:00 crc kubenswrapper[4865]: I0216 23:08:00.347599 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qtcq\" (UniqueName: \"kubernetes.io/projected/6912835d-d862-4295-9a6c-67deb30cbfba-kube-api-access-5qtcq\") pod \"rabbitmq-cell1-server-0\" (UID: \"6912835d-d862-4295-9a6c-67deb30cbfba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 23:08:00 crc kubenswrapper[4865]: I0216 23:08:00.347618 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6912835d-d862-4295-9a6c-67deb30cbfba-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"6912835d-d862-4295-9a6c-67deb30cbfba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 23:08:00 crc kubenswrapper[4865]: I0216 23:08:00.347649 4865 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6912835d-d862-4295-9a6c-67deb30cbfba-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6912835d-d862-4295-9a6c-67deb30cbfba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 23:08:00 crc kubenswrapper[4865]: I0216 23:08:00.347687 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6912835d-d862-4295-9a6c-67deb30cbfba-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"6912835d-d862-4295-9a6c-67deb30cbfba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 23:08:00 crc kubenswrapper[4865]: I0216 23:08:00.347709 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6912835d-d862-4295-9a6c-67deb30cbfba-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"6912835d-d862-4295-9a6c-67deb30cbfba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 23:08:00 crc kubenswrapper[4865]: I0216 23:08:00.347765 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6912835d-d862-4295-9a6c-67deb30cbfba-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"6912835d-d862-4295-9a6c-67deb30cbfba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 23:08:00 crc kubenswrapper[4865]: I0216 23:08:00.347782 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"6912835d-d862-4295-9a6c-67deb30cbfba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 23:08:00 crc kubenswrapper[4865]: I0216 23:08:00.347799 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6912835d-d862-4295-9a6c-67deb30cbfba-rabbitmq-confd\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"6912835d-d862-4295-9a6c-67deb30cbfba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 23:08:00 crc kubenswrapper[4865]: I0216 23:08:00.347819 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6912835d-d862-4295-9a6c-67deb30cbfba-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6912835d-d862-4295-9a6c-67deb30cbfba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 23:08:00 crc kubenswrapper[4865]: I0216 23:08:00.348497 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6912835d-d862-4295-9a6c-67deb30cbfba-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"6912835d-d862-4295-9a6c-67deb30cbfba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 23:08:00 crc kubenswrapper[4865]: I0216 23:08:00.348764 4865 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"6912835d-d862-4295-9a6c-67deb30cbfba\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-cell1-server-0" Feb 16 23:08:00 crc kubenswrapper[4865]: I0216 23:08:00.349001 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6912835d-d862-4295-9a6c-67deb30cbfba-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6912835d-d862-4295-9a6c-67deb30cbfba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 23:08:00 crc kubenswrapper[4865]: I0216 23:08:00.349361 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6912835d-d862-4295-9a6c-67deb30cbfba-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"6912835d-d862-4295-9a6c-67deb30cbfba\") " 
pod="openstack/rabbitmq-cell1-server-0" Feb 16 23:08:00 crc kubenswrapper[4865]: I0216 23:08:00.349811 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6912835d-d862-4295-9a6c-67deb30cbfba-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"6912835d-d862-4295-9a6c-67deb30cbfba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 23:08:00 crc kubenswrapper[4865]: I0216 23:08:00.349979 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6912835d-d862-4295-9a6c-67deb30cbfba-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6912835d-d862-4295-9a6c-67deb30cbfba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 23:08:00 crc kubenswrapper[4865]: I0216 23:08:00.352032 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6912835d-d862-4295-9a6c-67deb30cbfba-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"6912835d-d862-4295-9a6c-67deb30cbfba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 23:08:00 crc kubenswrapper[4865]: I0216 23:08:00.352157 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6912835d-d862-4295-9a6c-67deb30cbfba-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"6912835d-d862-4295-9a6c-67deb30cbfba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 23:08:00 crc kubenswrapper[4865]: I0216 23:08:00.354377 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6912835d-d862-4295-9a6c-67deb30cbfba-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"6912835d-d862-4295-9a6c-67deb30cbfba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 23:08:00 crc kubenswrapper[4865]: I0216 23:08:00.360257 4865 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6912835d-d862-4295-9a6c-67deb30cbfba-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"6912835d-d862-4295-9a6c-67deb30cbfba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 23:08:00 crc kubenswrapper[4865]: I0216 23:08:00.373999 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qtcq\" (UniqueName: \"kubernetes.io/projected/6912835d-d862-4295-9a6c-67deb30cbfba-kube-api-access-5qtcq\") pod \"rabbitmq-cell1-server-0\" (UID: \"6912835d-d862-4295-9a6c-67deb30cbfba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 23:08:00 crc kubenswrapper[4865]: I0216 23:08:00.377091 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"6912835d-d862-4295-9a6c-67deb30cbfba\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 23:08:00 crc kubenswrapper[4865]: I0216 23:08:00.432831 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17869fd2-4bd3-490c-be91-857d7cab1e73" path="/var/lib/kubelet/pods/17869fd2-4bd3-490c-be91-857d7cab1e73/volumes" Feb 16 23:08:00 crc kubenswrapper[4865]: I0216 23:08:00.433654 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f530b91-ceff-467a-a146-60716412bbeb" path="/var/lib/kubelet/pods/9f530b91-ceff-467a-a146-60716412bbeb/volumes" Feb 16 23:08:00 crc kubenswrapper[4865]: I0216 23:08:00.478553 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-fhvgp" Feb 16 23:08:00 crc kubenswrapper[4865]: I0216 23:08:00.486263 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 16 23:08:00 crc kubenswrapper[4865]: I0216 23:08:00.970187 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 23:08:01 crc kubenswrapper[4865]: I0216 23:08:01.085517 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6912835d-d862-4295-9a6c-67deb30cbfba","Type":"ContainerStarted","Data":"1950b409ae619a70bd1a8b0eaa821c08165bcc30ec0643670cc7eb4b103f1f4b"} Feb 16 23:08:01 crc kubenswrapper[4865]: I0216 23:08:01.086238 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2e2b8953-a55e-40c8-974f-a76a1352fbfb","Type":"ContainerStarted","Data":"4d8fd179a0d588e136cfa914ffda9dd85aa8d41f8524539bf8fffd09412aa003"} Feb 16 23:08:01 crc kubenswrapper[4865]: I0216 23:08:01.240915 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-j6npp"] Feb 16 23:08:01 crc kubenswrapper[4865]: I0216 23:08:01.243130 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67b789f86c-j6npp" Feb 16 23:08:01 crc kubenswrapper[4865]: I0216 23:08:01.245031 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Feb 16 23:08:01 crc kubenswrapper[4865]: I0216 23:08:01.269307 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-j6npp"] Feb 16 23:08:01 crc kubenswrapper[4865]: I0216 23:08:01.382181 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftjxj\" (UniqueName: \"kubernetes.io/projected/5f43a6cf-9ef2-4def-bb69-0c30992583e1-kube-api-access-ftjxj\") pod \"dnsmasq-dns-67b789f86c-j6npp\" (UID: \"5f43a6cf-9ef2-4def-bb69-0c30992583e1\") " pod="openstack/dnsmasq-dns-67b789f86c-j6npp" Feb 16 23:08:01 crc kubenswrapper[4865]: I0216 23:08:01.382234 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f43a6cf-9ef2-4def-bb69-0c30992583e1-dns-svc\") pod \"dnsmasq-dns-67b789f86c-j6npp\" (UID: \"5f43a6cf-9ef2-4def-bb69-0c30992583e1\") " pod="openstack/dnsmasq-dns-67b789f86c-j6npp" Feb 16 23:08:01 crc kubenswrapper[4865]: I0216 23:08:01.382319 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f43a6cf-9ef2-4def-bb69-0c30992583e1-config\") pod \"dnsmasq-dns-67b789f86c-j6npp\" (UID: \"5f43a6cf-9ef2-4def-bb69-0c30992583e1\") " pod="openstack/dnsmasq-dns-67b789f86c-j6npp" Feb 16 23:08:01 crc kubenswrapper[4865]: I0216 23:08:01.382340 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f43a6cf-9ef2-4def-bb69-0c30992583e1-ovsdbserver-nb\") pod \"dnsmasq-dns-67b789f86c-j6npp\" (UID: \"5f43a6cf-9ef2-4def-bb69-0c30992583e1\") " 
pod="openstack/dnsmasq-dns-67b789f86c-j6npp" Feb 16 23:08:01 crc kubenswrapper[4865]: I0216 23:08:01.382401 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5f43a6cf-9ef2-4def-bb69-0c30992583e1-dns-swift-storage-0\") pod \"dnsmasq-dns-67b789f86c-j6npp\" (UID: \"5f43a6cf-9ef2-4def-bb69-0c30992583e1\") " pod="openstack/dnsmasq-dns-67b789f86c-j6npp" Feb 16 23:08:01 crc kubenswrapper[4865]: I0216 23:08:01.382638 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5f43a6cf-9ef2-4def-bb69-0c30992583e1-openstack-edpm-ipam\") pod \"dnsmasq-dns-67b789f86c-j6npp\" (UID: \"5f43a6cf-9ef2-4def-bb69-0c30992583e1\") " pod="openstack/dnsmasq-dns-67b789f86c-j6npp" Feb 16 23:08:01 crc kubenswrapper[4865]: I0216 23:08:01.382827 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f43a6cf-9ef2-4def-bb69-0c30992583e1-ovsdbserver-sb\") pod \"dnsmasq-dns-67b789f86c-j6npp\" (UID: \"5f43a6cf-9ef2-4def-bb69-0c30992583e1\") " pod="openstack/dnsmasq-dns-67b789f86c-j6npp" Feb 16 23:08:01 crc kubenswrapper[4865]: I0216 23:08:01.484948 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5f43a6cf-9ef2-4def-bb69-0c30992583e1-dns-swift-storage-0\") pod \"dnsmasq-dns-67b789f86c-j6npp\" (UID: \"5f43a6cf-9ef2-4def-bb69-0c30992583e1\") " pod="openstack/dnsmasq-dns-67b789f86c-j6npp" Feb 16 23:08:01 crc kubenswrapper[4865]: I0216 23:08:01.485039 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5f43a6cf-9ef2-4def-bb69-0c30992583e1-openstack-edpm-ipam\") pod \"dnsmasq-dns-67b789f86c-j6npp\" (UID: 
\"5f43a6cf-9ef2-4def-bb69-0c30992583e1\") " pod="openstack/dnsmasq-dns-67b789f86c-j6npp" Feb 16 23:08:01 crc kubenswrapper[4865]: I0216 23:08:01.485098 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f43a6cf-9ef2-4def-bb69-0c30992583e1-ovsdbserver-sb\") pod \"dnsmasq-dns-67b789f86c-j6npp\" (UID: \"5f43a6cf-9ef2-4def-bb69-0c30992583e1\") " pod="openstack/dnsmasq-dns-67b789f86c-j6npp" Feb 16 23:08:01 crc kubenswrapper[4865]: I0216 23:08:01.485154 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftjxj\" (UniqueName: \"kubernetes.io/projected/5f43a6cf-9ef2-4def-bb69-0c30992583e1-kube-api-access-ftjxj\") pod \"dnsmasq-dns-67b789f86c-j6npp\" (UID: \"5f43a6cf-9ef2-4def-bb69-0c30992583e1\") " pod="openstack/dnsmasq-dns-67b789f86c-j6npp" Feb 16 23:08:01 crc kubenswrapper[4865]: I0216 23:08:01.485185 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f43a6cf-9ef2-4def-bb69-0c30992583e1-dns-svc\") pod \"dnsmasq-dns-67b789f86c-j6npp\" (UID: \"5f43a6cf-9ef2-4def-bb69-0c30992583e1\") " pod="openstack/dnsmasq-dns-67b789f86c-j6npp" Feb 16 23:08:01 crc kubenswrapper[4865]: I0216 23:08:01.485230 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f43a6cf-9ef2-4def-bb69-0c30992583e1-config\") pod \"dnsmasq-dns-67b789f86c-j6npp\" (UID: \"5f43a6cf-9ef2-4def-bb69-0c30992583e1\") " pod="openstack/dnsmasq-dns-67b789f86c-j6npp" Feb 16 23:08:01 crc kubenswrapper[4865]: I0216 23:08:01.485251 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f43a6cf-9ef2-4def-bb69-0c30992583e1-ovsdbserver-nb\") pod \"dnsmasq-dns-67b789f86c-j6npp\" (UID: \"5f43a6cf-9ef2-4def-bb69-0c30992583e1\") " 
pod="openstack/dnsmasq-dns-67b789f86c-j6npp" Feb 16 23:08:01 crc kubenswrapper[4865]: I0216 23:08:01.486410 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f43a6cf-9ef2-4def-bb69-0c30992583e1-ovsdbserver-sb\") pod \"dnsmasq-dns-67b789f86c-j6npp\" (UID: \"5f43a6cf-9ef2-4def-bb69-0c30992583e1\") " pod="openstack/dnsmasq-dns-67b789f86c-j6npp" Feb 16 23:08:01 crc kubenswrapper[4865]: I0216 23:08:01.486795 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5f43a6cf-9ef2-4def-bb69-0c30992583e1-openstack-edpm-ipam\") pod \"dnsmasq-dns-67b789f86c-j6npp\" (UID: \"5f43a6cf-9ef2-4def-bb69-0c30992583e1\") " pod="openstack/dnsmasq-dns-67b789f86c-j6npp" Feb 16 23:08:01 crc kubenswrapper[4865]: I0216 23:08:01.487700 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f43a6cf-9ef2-4def-bb69-0c30992583e1-ovsdbserver-nb\") pod \"dnsmasq-dns-67b789f86c-j6npp\" (UID: \"5f43a6cf-9ef2-4def-bb69-0c30992583e1\") " pod="openstack/dnsmasq-dns-67b789f86c-j6npp" Feb 16 23:08:01 crc kubenswrapper[4865]: I0216 23:08:01.488590 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5f43a6cf-9ef2-4def-bb69-0c30992583e1-dns-swift-storage-0\") pod \"dnsmasq-dns-67b789f86c-j6npp\" (UID: \"5f43a6cf-9ef2-4def-bb69-0c30992583e1\") " pod="openstack/dnsmasq-dns-67b789f86c-j6npp" Feb 16 23:08:01 crc kubenswrapper[4865]: I0216 23:08:01.489990 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f43a6cf-9ef2-4def-bb69-0c30992583e1-dns-svc\") pod \"dnsmasq-dns-67b789f86c-j6npp\" (UID: \"5f43a6cf-9ef2-4def-bb69-0c30992583e1\") " pod="openstack/dnsmasq-dns-67b789f86c-j6npp" Feb 16 23:08:01 crc kubenswrapper[4865]: 
I0216 23:08:01.490626 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f43a6cf-9ef2-4def-bb69-0c30992583e1-config\") pod \"dnsmasq-dns-67b789f86c-j6npp\" (UID: \"5f43a6cf-9ef2-4def-bb69-0c30992583e1\") " pod="openstack/dnsmasq-dns-67b789f86c-j6npp" Feb 16 23:08:01 crc kubenswrapper[4865]: I0216 23:08:01.514368 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftjxj\" (UniqueName: \"kubernetes.io/projected/5f43a6cf-9ef2-4def-bb69-0c30992583e1-kube-api-access-ftjxj\") pod \"dnsmasq-dns-67b789f86c-j6npp\" (UID: \"5f43a6cf-9ef2-4def-bb69-0c30992583e1\") " pod="openstack/dnsmasq-dns-67b789f86c-j6npp" Feb 16 23:08:01 crc kubenswrapper[4865]: I0216 23:08:01.573018 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67b789f86c-j6npp" Feb 16 23:08:02 crc kubenswrapper[4865]: I0216 23:08:02.108128 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2e2b8953-a55e-40c8-974f-a76a1352fbfb","Type":"ContainerStarted","Data":"98d6b3e40d470ecf450c689fac328dc901ffe95987326a7907db703c3d118b50"} Feb 16 23:08:02 crc kubenswrapper[4865]: I0216 23:08:02.108565 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-j6npp"] Feb 16 23:08:03 crc kubenswrapper[4865]: I0216 23:08:03.125233 4865 generic.go:334] "Generic (PLEG): container finished" podID="5f43a6cf-9ef2-4def-bb69-0c30992583e1" containerID="982a5b74d796c270abce5ecdee760b99493b68c46f49f6913e039c0af0b0afb3" exitCode=0 Feb 16 23:08:03 crc kubenswrapper[4865]: I0216 23:08:03.126012 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b789f86c-j6npp" event={"ID":"5f43a6cf-9ef2-4def-bb69-0c30992583e1","Type":"ContainerDied","Data":"982a5b74d796c270abce5ecdee760b99493b68c46f49f6913e039c0af0b0afb3"} Feb 16 23:08:03 crc kubenswrapper[4865]: I0216 
23:08:03.126057 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b789f86c-j6npp" event={"ID":"5f43a6cf-9ef2-4def-bb69-0c30992583e1","Type":"ContainerStarted","Data":"507ec5e0695f9dccd7adf1b2fb08c247561dcfe7ddb1996400059c426b801a9e"} Feb 16 23:08:03 crc kubenswrapper[4865]: I0216 23:08:03.131556 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6912835d-d862-4295-9a6c-67deb30cbfba","Type":"ContainerStarted","Data":"d939a0719a4ba0c6a262e2fb94f7faa11afde222c257e76c8e2d00efa38d6367"} Feb 16 23:08:04 crc kubenswrapper[4865]: I0216 23:08:04.144464 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b789f86c-j6npp" event={"ID":"5f43a6cf-9ef2-4def-bb69-0c30992583e1","Type":"ContainerStarted","Data":"cd528c970dee429a64f8cbf7f9016c3abcf605f19a2f57bb79edad5c06a7a2c9"} Feb 16 23:08:04 crc kubenswrapper[4865]: I0216 23:08:04.145364 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-67b789f86c-j6npp" Feb 16 23:08:04 crc kubenswrapper[4865]: I0216 23:08:04.179057 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-67b789f86c-j6npp" podStartSLOduration=3.179031749 podStartE2EDuration="3.179031749s" podCreationTimestamp="2026-02-16 23:08:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 23:08:04.176759404 +0000 UTC m=+1324.500466385" watchObservedRunningTime="2026-02-16 23:08:04.179031749 +0000 UTC m=+1324.502738740" Feb 16 23:08:11 crc kubenswrapper[4865]: I0216 23:08:11.575664 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-67b789f86c-j6npp" Feb 16 23:08:11 crc kubenswrapper[4865]: I0216 23:08:11.678927 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-82l9f"] Feb 16 
23:08:11 crc kubenswrapper[4865]: I0216 23:08:11.679235 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-59cf4bdb65-82l9f" podUID="c2f70611-ae1f-45d0-9688-7120ee736268" containerName="dnsmasq-dns" containerID="cri-o://fcd67f8a8266b188ad8a72e2419b0ede349217a3aed181d6a40e7d511dba5f66" gracePeriod=10 Feb 16 23:08:11 crc kubenswrapper[4865]: I0216 23:08:11.723320 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-59cf4bdb65-82l9f" podUID="c2f70611-ae1f-45d0-9688-7120ee736268" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.201:5353: connect: connection refused" Feb 16 23:08:11 crc kubenswrapper[4865]: I0216 23:08:11.861623 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cb6ffcf87-s47vk"] Feb 16 23:08:11 crc kubenswrapper[4865]: I0216 23:08:11.864013 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cb6ffcf87-s47vk" Feb 16 23:08:11 crc kubenswrapper[4865]: I0216 23:08:11.880262 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cb6ffcf87-s47vk"] Feb 16 23:08:11 crc kubenswrapper[4865]: I0216 23:08:11.950193 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/e06861ae-60fd-47ad-8c55-82641a24d552-openstack-edpm-ipam\") pod \"dnsmasq-dns-cb6ffcf87-s47vk\" (UID: \"e06861ae-60fd-47ad-8c55-82641a24d552\") " pod="openstack/dnsmasq-dns-cb6ffcf87-s47vk" Feb 16 23:08:11 crc kubenswrapper[4865]: I0216 23:08:11.950580 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e06861ae-60fd-47ad-8c55-82641a24d552-ovsdbserver-nb\") pod \"dnsmasq-dns-cb6ffcf87-s47vk\" (UID: \"e06861ae-60fd-47ad-8c55-82641a24d552\") " 
pod="openstack/dnsmasq-dns-cb6ffcf87-s47vk" Feb 16 23:08:11 crc kubenswrapper[4865]: I0216 23:08:11.950641 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svk6f\" (UniqueName: \"kubernetes.io/projected/e06861ae-60fd-47ad-8c55-82641a24d552-kube-api-access-svk6f\") pod \"dnsmasq-dns-cb6ffcf87-s47vk\" (UID: \"e06861ae-60fd-47ad-8c55-82641a24d552\") " pod="openstack/dnsmasq-dns-cb6ffcf87-s47vk" Feb 16 23:08:11 crc kubenswrapper[4865]: I0216 23:08:11.950665 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e06861ae-60fd-47ad-8c55-82641a24d552-config\") pod \"dnsmasq-dns-cb6ffcf87-s47vk\" (UID: \"e06861ae-60fd-47ad-8c55-82641a24d552\") " pod="openstack/dnsmasq-dns-cb6ffcf87-s47vk" Feb 16 23:08:11 crc kubenswrapper[4865]: I0216 23:08:11.950797 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e06861ae-60fd-47ad-8c55-82641a24d552-dns-swift-storage-0\") pod \"dnsmasq-dns-cb6ffcf87-s47vk\" (UID: \"e06861ae-60fd-47ad-8c55-82641a24d552\") " pod="openstack/dnsmasq-dns-cb6ffcf87-s47vk" Feb 16 23:08:11 crc kubenswrapper[4865]: I0216 23:08:11.950858 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e06861ae-60fd-47ad-8c55-82641a24d552-dns-svc\") pod \"dnsmasq-dns-cb6ffcf87-s47vk\" (UID: \"e06861ae-60fd-47ad-8c55-82641a24d552\") " pod="openstack/dnsmasq-dns-cb6ffcf87-s47vk" Feb 16 23:08:11 crc kubenswrapper[4865]: I0216 23:08:11.951090 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e06861ae-60fd-47ad-8c55-82641a24d552-ovsdbserver-sb\") pod \"dnsmasq-dns-cb6ffcf87-s47vk\" (UID: 
\"e06861ae-60fd-47ad-8c55-82641a24d552\") " pod="openstack/dnsmasq-dns-cb6ffcf87-s47vk" Feb 16 23:08:12 crc kubenswrapper[4865]: I0216 23:08:12.053008 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/e06861ae-60fd-47ad-8c55-82641a24d552-openstack-edpm-ipam\") pod \"dnsmasq-dns-cb6ffcf87-s47vk\" (UID: \"e06861ae-60fd-47ad-8c55-82641a24d552\") " pod="openstack/dnsmasq-dns-cb6ffcf87-s47vk" Feb 16 23:08:12 crc kubenswrapper[4865]: I0216 23:08:12.053440 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e06861ae-60fd-47ad-8c55-82641a24d552-ovsdbserver-nb\") pod \"dnsmasq-dns-cb6ffcf87-s47vk\" (UID: \"e06861ae-60fd-47ad-8c55-82641a24d552\") " pod="openstack/dnsmasq-dns-cb6ffcf87-s47vk" Feb 16 23:08:12 crc kubenswrapper[4865]: I0216 23:08:12.053568 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svk6f\" (UniqueName: \"kubernetes.io/projected/e06861ae-60fd-47ad-8c55-82641a24d552-kube-api-access-svk6f\") pod \"dnsmasq-dns-cb6ffcf87-s47vk\" (UID: \"e06861ae-60fd-47ad-8c55-82641a24d552\") " pod="openstack/dnsmasq-dns-cb6ffcf87-s47vk" Feb 16 23:08:12 crc kubenswrapper[4865]: I0216 23:08:12.053593 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e06861ae-60fd-47ad-8c55-82641a24d552-config\") pod \"dnsmasq-dns-cb6ffcf87-s47vk\" (UID: \"e06861ae-60fd-47ad-8c55-82641a24d552\") " pod="openstack/dnsmasq-dns-cb6ffcf87-s47vk" Feb 16 23:08:12 crc kubenswrapper[4865]: I0216 23:08:12.054087 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/e06861ae-60fd-47ad-8c55-82641a24d552-openstack-edpm-ipam\") pod \"dnsmasq-dns-cb6ffcf87-s47vk\" (UID: \"e06861ae-60fd-47ad-8c55-82641a24d552\") " 
pod="openstack/dnsmasq-dns-cb6ffcf87-s47vk" Feb 16 23:08:12 crc kubenswrapper[4865]: I0216 23:08:12.054659 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e06861ae-60fd-47ad-8c55-82641a24d552-ovsdbserver-nb\") pod \"dnsmasq-dns-cb6ffcf87-s47vk\" (UID: \"e06861ae-60fd-47ad-8c55-82641a24d552\") " pod="openstack/dnsmasq-dns-cb6ffcf87-s47vk" Feb 16 23:08:12 crc kubenswrapper[4865]: I0216 23:08:12.054970 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e06861ae-60fd-47ad-8c55-82641a24d552-config\") pod \"dnsmasq-dns-cb6ffcf87-s47vk\" (UID: \"e06861ae-60fd-47ad-8c55-82641a24d552\") " pod="openstack/dnsmasq-dns-cb6ffcf87-s47vk" Feb 16 23:08:12 crc kubenswrapper[4865]: I0216 23:08:12.054009 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e06861ae-60fd-47ad-8c55-82641a24d552-dns-svc\") pod \"dnsmasq-dns-cb6ffcf87-s47vk\" (UID: \"e06861ae-60fd-47ad-8c55-82641a24d552\") " pod="openstack/dnsmasq-dns-cb6ffcf87-s47vk" Feb 16 23:08:12 crc kubenswrapper[4865]: I0216 23:08:12.055011 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e06861ae-60fd-47ad-8c55-82641a24d552-dns-svc\") pod \"dnsmasq-dns-cb6ffcf87-s47vk\" (UID: \"e06861ae-60fd-47ad-8c55-82641a24d552\") " pod="openstack/dnsmasq-dns-cb6ffcf87-s47vk" Feb 16 23:08:12 crc kubenswrapper[4865]: I0216 23:08:12.055048 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e06861ae-60fd-47ad-8c55-82641a24d552-dns-swift-storage-0\") pod \"dnsmasq-dns-cb6ffcf87-s47vk\" (UID: \"e06861ae-60fd-47ad-8c55-82641a24d552\") " pod="openstack/dnsmasq-dns-cb6ffcf87-s47vk" Feb 16 23:08:12 crc kubenswrapper[4865]: I0216 23:08:12.055340 4865 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e06861ae-60fd-47ad-8c55-82641a24d552-ovsdbserver-sb\") pod \"dnsmasq-dns-cb6ffcf87-s47vk\" (UID: \"e06861ae-60fd-47ad-8c55-82641a24d552\") " pod="openstack/dnsmasq-dns-cb6ffcf87-s47vk" Feb 16 23:08:12 crc kubenswrapper[4865]: I0216 23:08:12.055731 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e06861ae-60fd-47ad-8c55-82641a24d552-dns-swift-storage-0\") pod \"dnsmasq-dns-cb6ffcf87-s47vk\" (UID: \"e06861ae-60fd-47ad-8c55-82641a24d552\") " pod="openstack/dnsmasq-dns-cb6ffcf87-s47vk" Feb 16 23:08:12 crc kubenswrapper[4865]: I0216 23:08:12.056292 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e06861ae-60fd-47ad-8c55-82641a24d552-ovsdbserver-sb\") pod \"dnsmasq-dns-cb6ffcf87-s47vk\" (UID: \"e06861ae-60fd-47ad-8c55-82641a24d552\") " pod="openstack/dnsmasq-dns-cb6ffcf87-s47vk" Feb 16 23:08:12 crc kubenswrapper[4865]: I0216 23:08:12.081225 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svk6f\" (UniqueName: \"kubernetes.io/projected/e06861ae-60fd-47ad-8c55-82641a24d552-kube-api-access-svk6f\") pod \"dnsmasq-dns-cb6ffcf87-s47vk\" (UID: \"e06861ae-60fd-47ad-8c55-82641a24d552\") " pod="openstack/dnsmasq-dns-cb6ffcf87-s47vk" Feb 16 23:08:12 crc kubenswrapper[4865]: I0216 23:08:12.236899 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cb6ffcf87-s47vk" Feb 16 23:08:12 crc kubenswrapper[4865]: I0216 23:08:12.237254 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-82l9f" Feb 16 23:08:12 crc kubenswrapper[4865]: I0216 23:08:12.283869 4865 generic.go:334] "Generic (PLEG): container finished" podID="c2f70611-ae1f-45d0-9688-7120ee736268" containerID="fcd67f8a8266b188ad8a72e2419b0ede349217a3aed181d6a40e7d511dba5f66" exitCode=0 Feb 16 23:08:12 crc kubenswrapper[4865]: I0216 23:08:12.283915 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-82l9f" event={"ID":"c2f70611-ae1f-45d0-9688-7120ee736268","Type":"ContainerDied","Data":"fcd67f8a8266b188ad8a72e2419b0ede349217a3aed181d6a40e7d511dba5f66"} Feb 16 23:08:12 crc kubenswrapper[4865]: I0216 23:08:12.283947 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-82l9f" event={"ID":"c2f70611-ae1f-45d0-9688-7120ee736268","Type":"ContainerDied","Data":"6b88b62914d488154a5042af390131fd22bbafe28ae552ec529e0865d0e22d11"} Feb 16 23:08:12 crc kubenswrapper[4865]: I0216 23:08:12.283967 4865 scope.go:117] "RemoveContainer" containerID="fcd67f8a8266b188ad8a72e2419b0ede349217a3aed181d6a40e7d511dba5f66" Feb 16 23:08:12 crc kubenswrapper[4865]: I0216 23:08:12.284140 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-82l9f" Feb 16 23:08:12 crc kubenswrapper[4865]: I0216 23:08:12.329655 4865 scope.go:117] "RemoveContainer" containerID="36e8e60f392e9ff3acf5bf5c0721187c95170fd4ff07e84bf3a07e1475dca9a0" Feb 16 23:08:12 crc kubenswrapper[4865]: I0216 23:08:12.363063 4865 scope.go:117] "RemoveContainer" containerID="fcd67f8a8266b188ad8a72e2419b0ede349217a3aed181d6a40e7d511dba5f66" Feb 16 23:08:12 crc kubenswrapper[4865]: I0216 23:08:12.363265 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2f70611-ae1f-45d0-9688-7120ee736268-config\") pod \"c2f70611-ae1f-45d0-9688-7120ee736268\" (UID: \"c2f70611-ae1f-45d0-9688-7120ee736268\") " Feb 16 23:08:12 crc kubenswrapper[4865]: I0216 23:08:12.363370 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c2f70611-ae1f-45d0-9688-7120ee736268-ovsdbserver-nb\") pod \"c2f70611-ae1f-45d0-9688-7120ee736268\" (UID: \"c2f70611-ae1f-45d0-9688-7120ee736268\") " Feb 16 23:08:12 crc kubenswrapper[4865]: I0216 23:08:12.363425 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c2f70611-ae1f-45d0-9688-7120ee736268-ovsdbserver-sb\") pod \"c2f70611-ae1f-45d0-9688-7120ee736268\" (UID: \"c2f70611-ae1f-45d0-9688-7120ee736268\") " Feb 16 23:08:12 crc kubenswrapper[4865]: I0216 23:08:12.363495 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c2f70611-ae1f-45d0-9688-7120ee736268-dns-swift-storage-0\") pod \"c2f70611-ae1f-45d0-9688-7120ee736268\" (UID: \"c2f70611-ae1f-45d0-9688-7120ee736268\") " Feb 16 23:08:12 crc kubenswrapper[4865]: I0216 23:08:12.363793 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/c2f70611-ae1f-45d0-9688-7120ee736268-dns-svc\") pod \"c2f70611-ae1f-45d0-9688-7120ee736268\" (UID: \"c2f70611-ae1f-45d0-9688-7120ee736268\") " Feb 16 23:08:12 crc kubenswrapper[4865]: E0216 23:08:12.363837 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fcd67f8a8266b188ad8a72e2419b0ede349217a3aed181d6a40e7d511dba5f66\": container with ID starting with fcd67f8a8266b188ad8a72e2419b0ede349217a3aed181d6a40e7d511dba5f66 not found: ID does not exist" containerID="fcd67f8a8266b188ad8a72e2419b0ede349217a3aed181d6a40e7d511dba5f66" Feb 16 23:08:12 crc kubenswrapper[4865]: I0216 23:08:12.364241 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcd67f8a8266b188ad8a72e2419b0ede349217a3aed181d6a40e7d511dba5f66"} err="failed to get container status \"fcd67f8a8266b188ad8a72e2419b0ede349217a3aed181d6a40e7d511dba5f66\": rpc error: code = NotFound desc = could not find container \"fcd67f8a8266b188ad8a72e2419b0ede349217a3aed181d6a40e7d511dba5f66\": container with ID starting with fcd67f8a8266b188ad8a72e2419b0ede349217a3aed181d6a40e7d511dba5f66 not found: ID does not exist" Feb 16 23:08:12 crc kubenswrapper[4865]: I0216 23:08:12.364296 4865 scope.go:117] "RemoveContainer" containerID="36e8e60f392e9ff3acf5bf5c0721187c95170fd4ff07e84bf3a07e1475dca9a0" Feb 16 23:08:12 crc kubenswrapper[4865]: I0216 23:08:12.363940 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9n7f\" (UniqueName: \"kubernetes.io/projected/c2f70611-ae1f-45d0-9688-7120ee736268-kube-api-access-p9n7f\") pod \"c2f70611-ae1f-45d0-9688-7120ee736268\" (UID: \"c2f70611-ae1f-45d0-9688-7120ee736268\") " Feb 16 23:08:12 crc kubenswrapper[4865]: I0216 23:08:12.367557 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/c2f70611-ae1f-45d0-9688-7120ee736268-kube-api-access-p9n7f" (OuterVolumeSpecName: "kube-api-access-p9n7f") pod "c2f70611-ae1f-45d0-9688-7120ee736268" (UID: "c2f70611-ae1f-45d0-9688-7120ee736268"). InnerVolumeSpecName "kube-api-access-p9n7f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:08:12 crc kubenswrapper[4865]: E0216 23:08:12.368341 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36e8e60f392e9ff3acf5bf5c0721187c95170fd4ff07e84bf3a07e1475dca9a0\": container with ID starting with 36e8e60f392e9ff3acf5bf5c0721187c95170fd4ff07e84bf3a07e1475dca9a0 not found: ID does not exist" containerID="36e8e60f392e9ff3acf5bf5c0721187c95170fd4ff07e84bf3a07e1475dca9a0" Feb 16 23:08:12 crc kubenswrapper[4865]: I0216 23:08:12.368377 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36e8e60f392e9ff3acf5bf5c0721187c95170fd4ff07e84bf3a07e1475dca9a0"} err="failed to get container status \"36e8e60f392e9ff3acf5bf5c0721187c95170fd4ff07e84bf3a07e1475dca9a0\": rpc error: code = NotFound desc = could not find container \"36e8e60f392e9ff3acf5bf5c0721187c95170fd4ff07e84bf3a07e1475dca9a0\": container with ID starting with 36e8e60f392e9ff3acf5bf5c0721187c95170fd4ff07e84bf3a07e1475dca9a0 not found: ID does not exist" Feb 16 23:08:12 crc kubenswrapper[4865]: I0216 23:08:12.426166 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2f70611-ae1f-45d0-9688-7120ee736268-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c2f70611-ae1f-45d0-9688-7120ee736268" (UID: "c2f70611-ae1f-45d0-9688-7120ee736268"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:08:12 crc kubenswrapper[4865]: I0216 23:08:12.427900 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2f70611-ae1f-45d0-9688-7120ee736268-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c2f70611-ae1f-45d0-9688-7120ee736268" (UID: "c2f70611-ae1f-45d0-9688-7120ee736268"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:08:12 crc kubenswrapper[4865]: I0216 23:08:12.431896 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2f70611-ae1f-45d0-9688-7120ee736268-config" (OuterVolumeSpecName: "config") pod "c2f70611-ae1f-45d0-9688-7120ee736268" (UID: "c2f70611-ae1f-45d0-9688-7120ee736268"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:08:12 crc kubenswrapper[4865]: I0216 23:08:12.446179 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2f70611-ae1f-45d0-9688-7120ee736268-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c2f70611-ae1f-45d0-9688-7120ee736268" (UID: "c2f70611-ae1f-45d0-9688-7120ee736268"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:08:12 crc kubenswrapper[4865]: I0216 23:08:12.447927 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2f70611-ae1f-45d0-9688-7120ee736268-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c2f70611-ae1f-45d0-9688-7120ee736268" (UID: "c2f70611-ae1f-45d0-9688-7120ee736268"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:08:12 crc kubenswrapper[4865]: I0216 23:08:12.467384 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9n7f\" (UniqueName: \"kubernetes.io/projected/c2f70611-ae1f-45d0-9688-7120ee736268-kube-api-access-p9n7f\") on node \"crc\" DevicePath \"\"" Feb 16 23:08:12 crc kubenswrapper[4865]: I0216 23:08:12.467424 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2f70611-ae1f-45d0-9688-7120ee736268-config\") on node \"crc\" DevicePath \"\"" Feb 16 23:08:12 crc kubenswrapper[4865]: I0216 23:08:12.467434 4865 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c2f70611-ae1f-45d0-9688-7120ee736268-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 23:08:12 crc kubenswrapper[4865]: I0216 23:08:12.467445 4865 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c2f70611-ae1f-45d0-9688-7120ee736268-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 23:08:12 crc kubenswrapper[4865]: I0216 23:08:12.467453 4865 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c2f70611-ae1f-45d0-9688-7120ee736268-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 23:08:12 crc kubenswrapper[4865]: I0216 23:08:12.467462 4865 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2f70611-ae1f-45d0-9688-7120ee736268-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 23:08:12 crc kubenswrapper[4865]: I0216 23:08:12.634337 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-82l9f"] Feb 16 23:08:12 crc kubenswrapper[4865]: I0216 23:08:12.654476 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-82l9f"] Feb 16 
23:08:12 crc kubenswrapper[4865]: I0216 23:08:12.734473 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cb6ffcf87-s47vk"] Feb 16 23:08:13 crc kubenswrapper[4865]: I0216 23:08:13.303563 4865 generic.go:334] "Generic (PLEG): container finished" podID="e06861ae-60fd-47ad-8c55-82641a24d552" containerID="1c97dab49a3755fe06939b86fc0060f8fbdb7032691e6f25a532f015dfbabb70" exitCode=0 Feb 16 23:08:13 crc kubenswrapper[4865]: I0216 23:08:13.303656 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb6ffcf87-s47vk" event={"ID":"e06861ae-60fd-47ad-8c55-82641a24d552","Type":"ContainerDied","Data":"1c97dab49a3755fe06939b86fc0060f8fbdb7032691e6f25a532f015dfbabb70"} Feb 16 23:08:13 crc kubenswrapper[4865]: I0216 23:08:13.303889 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb6ffcf87-s47vk" event={"ID":"e06861ae-60fd-47ad-8c55-82641a24d552","Type":"ContainerStarted","Data":"61e4698246dd2acdfb8af15d97b8bcafbfa58602552814d0f065772de79eb7bd"} Feb 16 23:08:14 crc kubenswrapper[4865]: I0216 23:08:14.319772 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb6ffcf87-s47vk" event={"ID":"e06861ae-60fd-47ad-8c55-82641a24d552","Type":"ContainerStarted","Data":"6494cbf87b13d5922408ea6d75df9231eb627ccfe005586827dde21e69d54ba1"} Feb 16 23:08:14 crc kubenswrapper[4865]: I0216 23:08:14.319943 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cb6ffcf87-s47vk" Feb 16 23:08:14 crc kubenswrapper[4865]: I0216 23:08:14.350228 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cb6ffcf87-s47vk" podStartSLOduration=3.3502069580000002 podStartE2EDuration="3.350206958s" podCreationTimestamp="2026-02-16 23:08:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 23:08:14.343021894 +0000 UTC 
m=+1334.666728875" watchObservedRunningTime="2026-02-16 23:08:14.350206958 +0000 UTC m=+1334.673913919" Feb 16 23:08:14 crc kubenswrapper[4865]: I0216 23:08:14.427078 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2f70611-ae1f-45d0-9688-7120ee736268" path="/var/lib/kubelet/pods/c2f70611-ae1f-45d0-9688-7120ee736268/volumes" Feb 16 23:08:15 crc kubenswrapper[4865]: I0216 23:08:15.664662 4865 patch_prober.go:28] interesting pod/machine-config-daemon-7sl6f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 23:08:15 crc kubenswrapper[4865]: I0216 23:08:15.665031 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 23:08:15 crc kubenswrapper[4865]: I0216 23:08:15.665098 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" Feb 16 23:08:15 crc kubenswrapper[4865]: I0216 23:08:15.666391 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"32daf57e4fb7661dfc4ca72f088e0b8d88b3c260d4b2b6cc44cc118921a811c2"} pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 23:08:15 crc kubenswrapper[4865]: I0216 23:08:15.666483 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" 
containerName="machine-config-daemon" containerID="cri-o://32daf57e4fb7661dfc4ca72f088e0b8d88b3c260d4b2b6cc44cc118921a811c2" gracePeriod=600 Feb 16 23:08:16 crc kubenswrapper[4865]: I0216 23:08:16.347871 4865 generic.go:334] "Generic (PLEG): container finished" podID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" containerID="32daf57e4fb7661dfc4ca72f088e0b8d88b3c260d4b2b6cc44cc118921a811c2" exitCode=0 Feb 16 23:08:16 crc kubenswrapper[4865]: I0216 23:08:16.348016 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" event={"ID":"af5ee041-5763-4a28-9d12-7ba21bbb9dbc","Type":"ContainerDied","Data":"32daf57e4fb7661dfc4ca72f088e0b8d88b3c260d4b2b6cc44cc118921a811c2"} Feb 16 23:08:16 crc kubenswrapper[4865]: I0216 23:08:16.348262 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" event={"ID":"af5ee041-5763-4a28-9d12-7ba21bbb9dbc","Type":"ContainerStarted","Data":"5cd852104406ee38cbc84897bbc06d76f939393ce558cd88a802e395e3a4e39b"} Feb 16 23:08:16 crc kubenswrapper[4865]: I0216 23:08:16.348316 4865 scope.go:117] "RemoveContainer" containerID="235d0a0989c84c71f23d2f482cbde8cbac1989d3cd7dfef51dabc7d92db7c3f0" Feb 16 23:08:22 crc kubenswrapper[4865]: I0216 23:08:22.238538 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cb6ffcf87-s47vk" Feb 16 23:08:22 crc kubenswrapper[4865]: I0216 23:08:22.339238 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-j6npp"] Feb 16 23:08:22 crc kubenswrapper[4865]: I0216 23:08:22.339790 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-67b789f86c-j6npp" podUID="5f43a6cf-9ef2-4def-bb69-0c30992583e1" containerName="dnsmasq-dns" containerID="cri-o://cd528c970dee429a64f8cbf7f9016c3abcf605f19a2f57bb79edad5c06a7a2c9" gracePeriod=10 Feb 16 23:08:22 crc kubenswrapper[4865]: I0216 
23:08:22.878998 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67b789f86c-j6npp" Feb 16 23:08:22 crc kubenswrapper[4865]: I0216 23:08:22.943527 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftjxj\" (UniqueName: \"kubernetes.io/projected/5f43a6cf-9ef2-4def-bb69-0c30992583e1-kube-api-access-ftjxj\") pod \"5f43a6cf-9ef2-4def-bb69-0c30992583e1\" (UID: \"5f43a6cf-9ef2-4def-bb69-0c30992583e1\") " Feb 16 23:08:22 crc kubenswrapper[4865]: I0216 23:08:22.943573 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f43a6cf-9ef2-4def-bb69-0c30992583e1-dns-svc\") pod \"5f43a6cf-9ef2-4def-bb69-0c30992583e1\" (UID: \"5f43a6cf-9ef2-4def-bb69-0c30992583e1\") " Feb 16 23:08:22 crc kubenswrapper[4865]: I0216 23:08:22.943634 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5f43a6cf-9ef2-4def-bb69-0c30992583e1-openstack-edpm-ipam\") pod \"5f43a6cf-9ef2-4def-bb69-0c30992583e1\" (UID: \"5f43a6cf-9ef2-4def-bb69-0c30992583e1\") " Feb 16 23:08:22 crc kubenswrapper[4865]: I0216 23:08:22.943693 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f43a6cf-9ef2-4def-bb69-0c30992583e1-config\") pod \"5f43a6cf-9ef2-4def-bb69-0c30992583e1\" (UID: \"5f43a6cf-9ef2-4def-bb69-0c30992583e1\") " Feb 16 23:08:22 crc kubenswrapper[4865]: I0216 23:08:22.943723 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f43a6cf-9ef2-4def-bb69-0c30992583e1-ovsdbserver-nb\") pod \"5f43a6cf-9ef2-4def-bb69-0c30992583e1\" (UID: \"5f43a6cf-9ef2-4def-bb69-0c30992583e1\") " Feb 16 23:08:22 crc kubenswrapper[4865]: I0216 23:08:22.943883 4865 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f43a6cf-9ef2-4def-bb69-0c30992583e1-ovsdbserver-sb\") pod \"5f43a6cf-9ef2-4def-bb69-0c30992583e1\" (UID: \"5f43a6cf-9ef2-4def-bb69-0c30992583e1\") " Feb 16 23:08:22 crc kubenswrapper[4865]: I0216 23:08:22.943920 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5f43a6cf-9ef2-4def-bb69-0c30992583e1-dns-swift-storage-0\") pod \"5f43a6cf-9ef2-4def-bb69-0c30992583e1\" (UID: \"5f43a6cf-9ef2-4def-bb69-0c30992583e1\") " Feb 16 23:08:22 crc kubenswrapper[4865]: I0216 23:08:22.969748 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f43a6cf-9ef2-4def-bb69-0c30992583e1-kube-api-access-ftjxj" (OuterVolumeSpecName: "kube-api-access-ftjxj") pod "5f43a6cf-9ef2-4def-bb69-0c30992583e1" (UID: "5f43a6cf-9ef2-4def-bb69-0c30992583e1"). InnerVolumeSpecName "kube-api-access-ftjxj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:08:23 crc kubenswrapper[4865]: I0216 23:08:23.003212 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f43a6cf-9ef2-4def-bb69-0c30992583e1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5f43a6cf-9ef2-4def-bb69-0c30992583e1" (UID: "5f43a6cf-9ef2-4def-bb69-0c30992583e1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:08:23 crc kubenswrapper[4865]: I0216 23:08:23.003472 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f43a6cf-9ef2-4def-bb69-0c30992583e1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5f43a6cf-9ef2-4def-bb69-0c30992583e1" (UID: "5f43a6cf-9ef2-4def-bb69-0c30992583e1"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:08:23 crc kubenswrapper[4865]: I0216 23:08:23.005901 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f43a6cf-9ef2-4def-bb69-0c30992583e1-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "5f43a6cf-9ef2-4def-bb69-0c30992583e1" (UID: "5f43a6cf-9ef2-4def-bb69-0c30992583e1"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:08:23 crc kubenswrapper[4865]: I0216 23:08:23.015258 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f43a6cf-9ef2-4def-bb69-0c30992583e1-config" (OuterVolumeSpecName: "config") pod "5f43a6cf-9ef2-4def-bb69-0c30992583e1" (UID: "5f43a6cf-9ef2-4def-bb69-0c30992583e1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:08:23 crc kubenswrapper[4865]: I0216 23:08:23.029798 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f43a6cf-9ef2-4def-bb69-0c30992583e1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5f43a6cf-9ef2-4def-bb69-0c30992583e1" (UID: "5f43a6cf-9ef2-4def-bb69-0c30992583e1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:08:23 crc kubenswrapper[4865]: I0216 23:08:23.035010 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f43a6cf-9ef2-4def-bb69-0c30992583e1-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5f43a6cf-9ef2-4def-bb69-0c30992583e1" (UID: "5f43a6cf-9ef2-4def-bb69-0c30992583e1"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:08:23 crc kubenswrapper[4865]: I0216 23:08:23.045773 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ftjxj\" (UniqueName: \"kubernetes.io/projected/5f43a6cf-9ef2-4def-bb69-0c30992583e1-kube-api-access-ftjxj\") on node \"crc\" DevicePath \"\"" Feb 16 23:08:23 crc kubenswrapper[4865]: I0216 23:08:23.045811 4865 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f43a6cf-9ef2-4def-bb69-0c30992583e1-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 23:08:23 crc kubenswrapper[4865]: I0216 23:08:23.045823 4865 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5f43a6cf-9ef2-4def-bb69-0c30992583e1-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 23:08:23 crc kubenswrapper[4865]: I0216 23:08:23.045832 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f43a6cf-9ef2-4def-bb69-0c30992583e1-config\") on node \"crc\" DevicePath \"\"" Feb 16 23:08:23 crc kubenswrapper[4865]: I0216 23:08:23.045842 4865 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f43a6cf-9ef2-4def-bb69-0c30992583e1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 23:08:23 crc kubenswrapper[4865]: I0216 23:08:23.045850 4865 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f43a6cf-9ef2-4def-bb69-0c30992583e1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 23:08:23 crc kubenswrapper[4865]: I0216 23:08:23.045861 4865 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5f43a6cf-9ef2-4def-bb69-0c30992583e1-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 23:08:23 crc kubenswrapper[4865]: I0216 23:08:23.457955 
4865 generic.go:334] "Generic (PLEG): container finished" podID="5f43a6cf-9ef2-4def-bb69-0c30992583e1" containerID="cd528c970dee429a64f8cbf7f9016c3abcf605f19a2f57bb79edad5c06a7a2c9" exitCode=0 Feb 16 23:08:23 crc kubenswrapper[4865]: I0216 23:08:23.458019 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67b789f86c-j6npp" Feb 16 23:08:23 crc kubenswrapper[4865]: I0216 23:08:23.459467 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b789f86c-j6npp" event={"ID":"5f43a6cf-9ef2-4def-bb69-0c30992583e1","Type":"ContainerDied","Data":"cd528c970dee429a64f8cbf7f9016c3abcf605f19a2f57bb79edad5c06a7a2c9"} Feb 16 23:08:23 crc kubenswrapper[4865]: I0216 23:08:23.459696 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b789f86c-j6npp" event={"ID":"5f43a6cf-9ef2-4def-bb69-0c30992583e1","Type":"ContainerDied","Data":"507ec5e0695f9dccd7adf1b2fb08c247561dcfe7ddb1996400059c426b801a9e"} Feb 16 23:08:23 crc kubenswrapper[4865]: I0216 23:08:23.459780 4865 scope.go:117] "RemoveContainer" containerID="cd528c970dee429a64f8cbf7f9016c3abcf605f19a2f57bb79edad5c06a7a2c9" Feb 16 23:08:23 crc kubenswrapper[4865]: I0216 23:08:23.535991 4865 scope.go:117] "RemoveContainer" containerID="982a5b74d796c270abce5ecdee760b99493b68c46f49f6913e039c0af0b0afb3" Feb 16 23:08:23 crc kubenswrapper[4865]: I0216 23:08:23.540173 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-j6npp"] Feb 16 23:08:23 crc kubenswrapper[4865]: I0216 23:08:23.564405 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-j6npp"] Feb 16 23:08:23 crc kubenswrapper[4865]: I0216 23:08:23.593145 4865 scope.go:117] "RemoveContainer" containerID="cd528c970dee429a64f8cbf7f9016c3abcf605f19a2f57bb79edad5c06a7a2c9" Feb 16 23:08:23 crc kubenswrapper[4865]: E0216 23:08:23.593554 4865 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"cd528c970dee429a64f8cbf7f9016c3abcf605f19a2f57bb79edad5c06a7a2c9\": container with ID starting with cd528c970dee429a64f8cbf7f9016c3abcf605f19a2f57bb79edad5c06a7a2c9 not found: ID does not exist" containerID="cd528c970dee429a64f8cbf7f9016c3abcf605f19a2f57bb79edad5c06a7a2c9" Feb 16 23:08:23 crc kubenswrapper[4865]: I0216 23:08:23.593586 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd528c970dee429a64f8cbf7f9016c3abcf605f19a2f57bb79edad5c06a7a2c9"} err="failed to get container status \"cd528c970dee429a64f8cbf7f9016c3abcf605f19a2f57bb79edad5c06a7a2c9\": rpc error: code = NotFound desc = could not find container \"cd528c970dee429a64f8cbf7f9016c3abcf605f19a2f57bb79edad5c06a7a2c9\": container with ID starting with cd528c970dee429a64f8cbf7f9016c3abcf605f19a2f57bb79edad5c06a7a2c9 not found: ID does not exist" Feb 16 23:08:23 crc kubenswrapper[4865]: I0216 23:08:23.593609 4865 scope.go:117] "RemoveContainer" containerID="982a5b74d796c270abce5ecdee760b99493b68c46f49f6913e039c0af0b0afb3" Feb 16 23:08:23 crc kubenswrapper[4865]: E0216 23:08:23.593976 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"982a5b74d796c270abce5ecdee760b99493b68c46f49f6913e039c0af0b0afb3\": container with ID starting with 982a5b74d796c270abce5ecdee760b99493b68c46f49f6913e039c0af0b0afb3 not found: ID does not exist" containerID="982a5b74d796c270abce5ecdee760b99493b68c46f49f6913e039c0af0b0afb3" Feb 16 23:08:23 crc kubenswrapper[4865]: I0216 23:08:23.593994 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"982a5b74d796c270abce5ecdee760b99493b68c46f49f6913e039c0af0b0afb3"} err="failed to get container status \"982a5b74d796c270abce5ecdee760b99493b68c46f49f6913e039c0af0b0afb3\": rpc error: code = NotFound desc = could not find container 
\"982a5b74d796c270abce5ecdee760b99493b68c46f49f6913e039c0af0b0afb3\": container with ID starting with 982a5b74d796c270abce5ecdee760b99493b68c46f49f6913e039c0af0b0afb3 not found: ID does not exist" Feb 16 23:08:24 crc kubenswrapper[4865]: I0216 23:08:24.431589 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f43a6cf-9ef2-4def-bb69-0c30992583e1" path="/var/lib/kubelet/pods/5f43a6cf-9ef2-4def-bb69-0c30992583e1/volumes" Feb 16 23:08:34 crc kubenswrapper[4865]: I0216 23:08:34.599610 4865 generic.go:334] "Generic (PLEG): container finished" podID="2e2b8953-a55e-40c8-974f-a76a1352fbfb" containerID="98d6b3e40d470ecf450c689fac328dc901ffe95987326a7907db703c3d118b50" exitCode=0 Feb 16 23:08:34 crc kubenswrapper[4865]: I0216 23:08:34.599710 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2e2b8953-a55e-40c8-974f-a76a1352fbfb","Type":"ContainerDied","Data":"98d6b3e40d470ecf450c689fac328dc901ffe95987326a7907db703c3d118b50"} Feb 16 23:08:35 crc kubenswrapper[4865]: I0216 23:08:35.552996 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-22ttt"] Feb 16 23:08:35 crc kubenswrapper[4865]: E0216 23:08:35.554118 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f43a6cf-9ef2-4def-bb69-0c30992583e1" containerName="dnsmasq-dns" Feb 16 23:08:35 crc kubenswrapper[4865]: I0216 23:08:35.554144 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f43a6cf-9ef2-4def-bb69-0c30992583e1" containerName="dnsmasq-dns" Feb 16 23:08:35 crc kubenswrapper[4865]: E0216 23:08:35.554170 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2f70611-ae1f-45d0-9688-7120ee736268" containerName="dnsmasq-dns" Feb 16 23:08:35 crc kubenswrapper[4865]: I0216 23:08:35.554179 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2f70611-ae1f-45d0-9688-7120ee736268" containerName="dnsmasq-dns" Feb 16 23:08:35 crc 
kubenswrapper[4865]: E0216 23:08:35.554198 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f43a6cf-9ef2-4def-bb69-0c30992583e1" containerName="init" Feb 16 23:08:35 crc kubenswrapper[4865]: I0216 23:08:35.554206 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f43a6cf-9ef2-4def-bb69-0c30992583e1" containerName="init" Feb 16 23:08:35 crc kubenswrapper[4865]: E0216 23:08:35.554216 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2f70611-ae1f-45d0-9688-7120ee736268" containerName="init" Feb 16 23:08:35 crc kubenswrapper[4865]: I0216 23:08:35.554224 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2f70611-ae1f-45d0-9688-7120ee736268" containerName="init" Feb 16 23:08:35 crc kubenswrapper[4865]: I0216 23:08:35.554496 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f43a6cf-9ef2-4def-bb69-0c30992583e1" containerName="dnsmasq-dns" Feb 16 23:08:35 crc kubenswrapper[4865]: I0216 23:08:35.554524 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2f70611-ae1f-45d0-9688-7120ee736268" containerName="dnsmasq-dns" Feb 16 23:08:35 crc kubenswrapper[4865]: I0216 23:08:35.555349 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-22ttt" Feb 16 23:08:35 crc kubenswrapper[4865]: I0216 23:08:35.557431 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 23:08:35 crc kubenswrapper[4865]: I0216 23:08:35.557904 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 23:08:35 crc kubenswrapper[4865]: I0216 23:08:35.561845 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rpk98" Feb 16 23:08:35 crc kubenswrapper[4865]: I0216 23:08:35.561906 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 23:08:35 crc kubenswrapper[4865]: I0216 23:08:35.565630 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-22ttt"] Feb 16 23:08:35 crc kubenswrapper[4865]: I0216 23:08:35.623082 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2e2b8953-a55e-40c8-974f-a76a1352fbfb","Type":"ContainerStarted","Data":"781280d5ff24d08797cb6115b58dff14b8a6e7443ab2c57b9f7163fd43b3e505"} Feb 16 23:08:35 crc kubenswrapper[4865]: I0216 23:08:35.623335 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 16 23:08:35 crc kubenswrapper[4865]: I0216 23:08:35.633057 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1bbe0349-2def-4238-880b-5cd6ed9e0413-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-22ttt\" (UID: \"1bbe0349-2def-4238-880b-5cd6ed9e0413\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-22ttt" Feb 16 23:08:35 crc kubenswrapper[4865]: I0216 23:08:35.633266 4865 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bbe0349-2def-4238-880b-5cd6ed9e0413-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-22ttt\" (UID: \"1bbe0349-2def-4238-880b-5cd6ed9e0413\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-22ttt" Feb 16 23:08:35 crc kubenswrapper[4865]: I0216 23:08:35.633466 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwdm8\" (UniqueName: \"kubernetes.io/projected/1bbe0349-2def-4238-880b-5cd6ed9e0413-kube-api-access-hwdm8\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-22ttt\" (UID: \"1bbe0349-2def-4238-880b-5cd6ed9e0413\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-22ttt" Feb 16 23:08:35 crc kubenswrapper[4865]: I0216 23:08:35.633518 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1bbe0349-2def-4238-880b-5cd6ed9e0413-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-22ttt\" (UID: \"1bbe0349-2def-4238-880b-5cd6ed9e0413\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-22ttt" Feb 16 23:08:35 crc kubenswrapper[4865]: I0216 23:08:35.673748 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.673717631 podStartE2EDuration="36.673717631s" podCreationTimestamp="2026-02-16 23:07:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 23:08:35.649895133 +0000 UTC m=+1355.973602104" watchObservedRunningTime="2026-02-16 23:08:35.673717631 +0000 UTC m=+1355.997424592" Feb 16 23:08:35 crc kubenswrapper[4865]: I0216 23:08:35.736732 4865 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bbe0349-2def-4238-880b-5cd6ed9e0413-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-22ttt\" (UID: \"1bbe0349-2def-4238-880b-5cd6ed9e0413\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-22ttt" Feb 16 23:08:35 crc kubenswrapper[4865]: I0216 23:08:35.737000 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwdm8\" (UniqueName: \"kubernetes.io/projected/1bbe0349-2def-4238-880b-5cd6ed9e0413-kube-api-access-hwdm8\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-22ttt\" (UID: \"1bbe0349-2def-4238-880b-5cd6ed9e0413\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-22ttt" Feb 16 23:08:35 crc kubenswrapper[4865]: I0216 23:08:35.737601 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1bbe0349-2def-4238-880b-5cd6ed9e0413-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-22ttt\" (UID: \"1bbe0349-2def-4238-880b-5cd6ed9e0413\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-22ttt" Feb 16 23:08:35 crc kubenswrapper[4865]: I0216 23:08:35.737690 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1bbe0349-2def-4238-880b-5cd6ed9e0413-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-22ttt\" (UID: \"1bbe0349-2def-4238-880b-5cd6ed9e0413\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-22ttt" Feb 16 23:08:35 crc kubenswrapper[4865]: I0216 23:08:35.744197 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/1bbe0349-2def-4238-880b-5cd6ed9e0413-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-22ttt\" (UID: \"1bbe0349-2def-4238-880b-5cd6ed9e0413\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-22ttt" Feb 16 23:08:35 crc kubenswrapper[4865]: I0216 23:08:35.744364 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bbe0349-2def-4238-880b-5cd6ed9e0413-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-22ttt\" (UID: \"1bbe0349-2def-4238-880b-5cd6ed9e0413\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-22ttt" Feb 16 23:08:35 crc kubenswrapper[4865]: I0216 23:08:35.745993 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1bbe0349-2def-4238-880b-5cd6ed9e0413-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-22ttt\" (UID: \"1bbe0349-2def-4238-880b-5cd6ed9e0413\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-22ttt" Feb 16 23:08:35 crc kubenswrapper[4865]: I0216 23:08:35.767884 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwdm8\" (UniqueName: \"kubernetes.io/projected/1bbe0349-2def-4238-880b-5cd6ed9e0413-kube-api-access-hwdm8\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-22ttt\" (UID: \"1bbe0349-2def-4238-880b-5cd6ed9e0413\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-22ttt" Feb 16 23:08:35 crc kubenswrapper[4865]: I0216 23:08:35.874164 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-22ttt" Feb 16 23:08:36 crc kubenswrapper[4865]: I0216 23:08:36.434775 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-22ttt"] Feb 16 23:08:36 crc kubenswrapper[4865]: I0216 23:08:36.636139 4865 generic.go:334] "Generic (PLEG): container finished" podID="6912835d-d862-4295-9a6c-67deb30cbfba" containerID="d939a0719a4ba0c6a262e2fb94f7faa11afde222c257e76c8e2d00efa38d6367" exitCode=0 Feb 16 23:08:36 crc kubenswrapper[4865]: I0216 23:08:36.636226 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6912835d-d862-4295-9a6c-67deb30cbfba","Type":"ContainerDied","Data":"d939a0719a4ba0c6a262e2fb94f7faa11afde222c257e76c8e2d00efa38d6367"} Feb 16 23:08:36 crc kubenswrapper[4865]: I0216 23:08:36.637986 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-22ttt" event={"ID":"1bbe0349-2def-4238-880b-5cd6ed9e0413","Type":"ContainerStarted","Data":"1aa3ee615c44164581cd98d5ece8dbf2738e6d8e69b10582b0d7ef24df791603"} Feb 16 23:08:37 crc kubenswrapper[4865]: I0216 23:08:37.654811 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6912835d-d862-4295-9a6c-67deb30cbfba","Type":"ContainerStarted","Data":"9570cb0e38a042898dd055f05075c795fccf61d86fae4aa007cda7ef30f3d023"} Feb 16 23:08:37 crc kubenswrapper[4865]: I0216 23:08:37.655257 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 16 23:08:37 crc kubenswrapper[4865]: I0216 23:08:37.684505 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.684483846 podStartE2EDuration="37.684483846s" podCreationTimestamp="2026-02-16 23:08:00 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 23:08:37.682369636 +0000 UTC m=+1358.006076607" watchObservedRunningTime="2026-02-16 23:08:37.684483846 +0000 UTC m=+1358.008190817" Feb 16 23:08:46 crc kubenswrapper[4865]: I0216 23:08:46.795176 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-22ttt" event={"ID":"1bbe0349-2def-4238-880b-5cd6ed9e0413","Type":"ContainerStarted","Data":"83a1ba397a530289b6e3fb9d074d75fb3ef54b532f8d9b668574411c69920bdf"} Feb 16 23:08:46 crc kubenswrapper[4865]: I0216 23:08:46.823871 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-22ttt" podStartSLOduration=2.176006093 podStartE2EDuration="11.823824358s" podCreationTimestamp="2026-02-16 23:08:35 +0000 UTC" firstStartedPulling="2026-02-16 23:08:36.426751389 +0000 UTC m=+1356.750458360" lastFinishedPulling="2026-02-16 23:08:46.074569664 +0000 UTC m=+1366.398276625" observedRunningTime="2026-02-16 23:08:46.813591599 +0000 UTC m=+1367.137298590" watchObservedRunningTime="2026-02-16 23:08:46.823824358 +0000 UTC m=+1367.147531349" Feb 16 23:08:49 crc kubenswrapper[4865]: I0216 23:08:49.548132 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 16 23:08:50 crc kubenswrapper[4865]: I0216 23:08:50.490591 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 16 23:08:56 crc kubenswrapper[4865]: E0216 23:08:56.715070 4865 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1bbe0349_2def_4238_880b_5cd6ed9e0413.slice/crio-conmon-83a1ba397a530289b6e3fb9d074d75fb3ef54b532f8d9b668574411c69920bdf.scope\": RecentStats: unable to find data in memory cache]" 
Feb 16 23:08:56 crc kubenswrapper[4865]: I0216 23:08:56.900383 4865 generic.go:334] "Generic (PLEG): container finished" podID="1bbe0349-2def-4238-880b-5cd6ed9e0413" containerID="83a1ba397a530289b6e3fb9d074d75fb3ef54b532f8d9b668574411c69920bdf" exitCode=0 Feb 16 23:08:56 crc kubenswrapper[4865]: I0216 23:08:56.900456 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-22ttt" event={"ID":"1bbe0349-2def-4238-880b-5cd6ed9e0413","Type":"ContainerDied","Data":"83a1ba397a530289b6e3fb9d074d75fb3ef54b532f8d9b668574411c69920bdf"} Feb 16 23:08:58 crc kubenswrapper[4865]: I0216 23:08:58.519202 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-22ttt" Feb 16 23:08:58 crc kubenswrapper[4865]: I0216 23:08:58.648975 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwdm8\" (UniqueName: \"kubernetes.io/projected/1bbe0349-2def-4238-880b-5cd6ed9e0413-kube-api-access-hwdm8\") pod \"1bbe0349-2def-4238-880b-5cd6ed9e0413\" (UID: \"1bbe0349-2def-4238-880b-5cd6ed9e0413\") " Feb 16 23:08:58 crc kubenswrapper[4865]: I0216 23:08:58.649433 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1bbe0349-2def-4238-880b-5cd6ed9e0413-ssh-key-openstack-edpm-ipam\") pod \"1bbe0349-2def-4238-880b-5cd6ed9e0413\" (UID: \"1bbe0349-2def-4238-880b-5cd6ed9e0413\") " Feb 16 23:08:58 crc kubenswrapper[4865]: I0216 23:08:58.649485 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1bbe0349-2def-4238-880b-5cd6ed9e0413-inventory\") pod \"1bbe0349-2def-4238-880b-5cd6ed9e0413\" (UID: \"1bbe0349-2def-4238-880b-5cd6ed9e0413\") " Feb 16 23:08:58 crc kubenswrapper[4865]: I0216 23:08:58.649518 4865 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bbe0349-2def-4238-880b-5cd6ed9e0413-repo-setup-combined-ca-bundle\") pod \"1bbe0349-2def-4238-880b-5cd6ed9e0413\" (UID: \"1bbe0349-2def-4238-880b-5cd6ed9e0413\") " Feb 16 23:08:58 crc kubenswrapper[4865]: I0216 23:08:58.655818 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bbe0349-2def-4238-880b-5cd6ed9e0413-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "1bbe0349-2def-4238-880b-5cd6ed9e0413" (UID: "1bbe0349-2def-4238-880b-5cd6ed9e0413"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:08:58 crc kubenswrapper[4865]: I0216 23:08:58.656360 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bbe0349-2def-4238-880b-5cd6ed9e0413-kube-api-access-hwdm8" (OuterVolumeSpecName: "kube-api-access-hwdm8") pod "1bbe0349-2def-4238-880b-5cd6ed9e0413" (UID: "1bbe0349-2def-4238-880b-5cd6ed9e0413"). InnerVolumeSpecName "kube-api-access-hwdm8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:08:58 crc kubenswrapper[4865]: I0216 23:08:58.695407 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bbe0349-2def-4238-880b-5cd6ed9e0413-inventory" (OuterVolumeSpecName: "inventory") pod "1bbe0349-2def-4238-880b-5cd6ed9e0413" (UID: "1bbe0349-2def-4238-880b-5cd6ed9e0413"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:08:58 crc kubenswrapper[4865]: I0216 23:08:58.697102 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bbe0349-2def-4238-880b-5cd6ed9e0413-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1bbe0349-2def-4238-880b-5cd6ed9e0413" (UID: "1bbe0349-2def-4238-880b-5cd6ed9e0413"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:08:58 crc kubenswrapper[4865]: I0216 23:08:58.752403 4865 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1bbe0349-2def-4238-880b-5cd6ed9e0413-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 23:08:58 crc kubenswrapper[4865]: I0216 23:08:58.752443 4865 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1bbe0349-2def-4238-880b-5cd6ed9e0413-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 23:08:58 crc kubenswrapper[4865]: I0216 23:08:58.752455 4865 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bbe0349-2def-4238-880b-5cd6ed9e0413-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 23:08:58 crc kubenswrapper[4865]: I0216 23:08:58.752469 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwdm8\" (UniqueName: \"kubernetes.io/projected/1bbe0349-2def-4238-880b-5cd6ed9e0413-kube-api-access-hwdm8\") on node \"crc\" DevicePath \"\"" Feb 16 23:08:58 crc kubenswrapper[4865]: I0216 23:08:58.921980 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-22ttt" event={"ID":"1bbe0349-2def-4238-880b-5cd6ed9e0413","Type":"ContainerDied","Data":"1aa3ee615c44164581cd98d5ece8dbf2738e6d8e69b10582b0d7ef24df791603"} Feb 16 
23:08:58 crc kubenswrapper[4865]: I0216 23:08:58.922021 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1aa3ee615c44164581cd98d5ece8dbf2738e6d8e69b10582b0d7ef24df791603" Feb 16 23:08:58 crc kubenswrapper[4865]: I0216 23:08:58.922033 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-22ttt" Feb 16 23:08:59 crc kubenswrapper[4865]: I0216 23:08:59.054407 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-6crtq"] Feb 16 23:08:59 crc kubenswrapper[4865]: E0216 23:08:59.054821 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bbe0349-2def-4238-880b-5cd6ed9e0413" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 16 23:08:59 crc kubenswrapper[4865]: I0216 23:08:59.054844 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bbe0349-2def-4238-880b-5cd6ed9e0413" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 16 23:08:59 crc kubenswrapper[4865]: I0216 23:08:59.055008 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bbe0349-2def-4238-880b-5cd6ed9e0413" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 16 23:08:59 crc kubenswrapper[4865]: I0216 23:08:59.063829 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-6crtq" Feb 16 23:08:59 crc kubenswrapper[4865]: I0216 23:08:59.066665 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rpk98" Feb 16 23:08:59 crc kubenswrapper[4865]: I0216 23:08:59.068582 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 23:08:59 crc kubenswrapper[4865]: I0216 23:08:59.068771 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 23:08:59 crc kubenswrapper[4865]: I0216 23:08:59.069358 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 23:08:59 crc kubenswrapper[4865]: I0216 23:08:59.076495 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-6crtq"] Feb 16 23:08:59 crc kubenswrapper[4865]: I0216 23:08:59.161788 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dqzd\" (UniqueName: \"kubernetes.io/projected/82582c93-5f30-417e-a5f1-62038c6f8000-kube-api-access-9dqzd\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-6crtq\" (UID: \"82582c93-5f30-417e-a5f1-62038c6f8000\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-6crtq" Feb 16 23:08:59 crc kubenswrapper[4865]: I0216 23:08:59.161966 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/82582c93-5f30-417e-a5f1-62038c6f8000-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-6crtq\" (UID: \"82582c93-5f30-417e-a5f1-62038c6f8000\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-6crtq" Feb 16 23:08:59 crc kubenswrapper[4865]: I0216 23:08:59.161991 4865 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/82582c93-5f30-417e-a5f1-62038c6f8000-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-6crtq\" (UID: \"82582c93-5f30-417e-a5f1-62038c6f8000\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-6crtq" Feb 16 23:08:59 crc kubenswrapper[4865]: I0216 23:08:59.264229 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/82582c93-5f30-417e-a5f1-62038c6f8000-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-6crtq\" (UID: \"82582c93-5f30-417e-a5f1-62038c6f8000\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-6crtq" Feb 16 23:08:59 crc kubenswrapper[4865]: I0216 23:08:59.264285 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/82582c93-5f30-417e-a5f1-62038c6f8000-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-6crtq\" (UID: \"82582c93-5f30-417e-a5f1-62038c6f8000\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-6crtq" Feb 16 23:08:59 crc kubenswrapper[4865]: I0216 23:08:59.264346 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dqzd\" (UniqueName: \"kubernetes.io/projected/82582c93-5f30-417e-a5f1-62038c6f8000-kube-api-access-9dqzd\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-6crtq\" (UID: \"82582c93-5f30-417e-a5f1-62038c6f8000\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-6crtq" Feb 16 23:08:59 crc kubenswrapper[4865]: I0216 23:08:59.268949 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/82582c93-5f30-417e-a5f1-62038c6f8000-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-6crtq\" (UID: 
\"82582c93-5f30-417e-a5f1-62038c6f8000\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-6crtq" Feb 16 23:08:59 crc kubenswrapper[4865]: I0216 23:08:59.276913 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/82582c93-5f30-417e-a5f1-62038c6f8000-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-6crtq\" (UID: \"82582c93-5f30-417e-a5f1-62038c6f8000\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-6crtq" Feb 16 23:08:59 crc kubenswrapper[4865]: I0216 23:08:59.283021 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dqzd\" (UniqueName: \"kubernetes.io/projected/82582c93-5f30-417e-a5f1-62038c6f8000-kube-api-access-9dqzd\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-6crtq\" (UID: \"82582c93-5f30-417e-a5f1-62038c6f8000\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-6crtq" Feb 16 23:08:59 crc kubenswrapper[4865]: I0216 23:08:59.381626 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-6crtq" Feb 16 23:08:59 crc kubenswrapper[4865]: W0216 23:08:59.926113 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod82582c93_5f30_417e_a5f1_62038c6f8000.slice/crio-c8c47cfa69ed9e72c82d7546f23ec8e6877d82500950cdd84493052868236246 WatchSource:0}: Error finding container c8c47cfa69ed9e72c82d7546f23ec8e6877d82500950cdd84493052868236246: Status 404 returned error can't find the container with id c8c47cfa69ed9e72c82d7546f23ec8e6877d82500950cdd84493052868236246 Feb 16 23:08:59 crc kubenswrapper[4865]: I0216 23:08:59.934680 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-6crtq"] Feb 16 23:09:00 crc kubenswrapper[4865]: I0216 23:09:00.380502 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 23:09:00 crc kubenswrapper[4865]: I0216 23:09:00.950382 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-6crtq" event={"ID":"82582c93-5f30-417e-a5f1-62038c6f8000","Type":"ContainerStarted","Data":"2e68c8eb688d62a78fecbdc7de1e5f5dbc0a937603ee59a7043a623f3d45e4b7"} Feb 16 23:09:00 crc kubenswrapper[4865]: I0216 23:09:00.950751 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-6crtq" event={"ID":"82582c93-5f30-417e-a5f1-62038c6f8000","Type":"ContainerStarted","Data":"c8c47cfa69ed9e72c82d7546f23ec8e6877d82500950cdd84493052868236246"} Feb 16 23:09:00 crc kubenswrapper[4865]: I0216 23:09:00.992992 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-6crtq" podStartSLOduration=1.548834264 podStartE2EDuration="1.992968254s" podCreationTimestamp="2026-02-16 23:08:59 +0000 UTC" 
firstStartedPulling="2026-02-16 23:08:59.932761774 +0000 UTC m=+1380.256468785" lastFinishedPulling="2026-02-16 23:09:00.376895804 +0000 UTC m=+1380.700602775" observedRunningTime="2026-02-16 23:09:00.985859833 +0000 UTC m=+1381.309566804" watchObservedRunningTime="2026-02-16 23:09:00.992968254 +0000 UTC m=+1381.316675215" Feb 16 23:09:03 crc kubenswrapper[4865]: I0216 23:09:03.991711 4865 generic.go:334] "Generic (PLEG): container finished" podID="82582c93-5f30-417e-a5f1-62038c6f8000" containerID="2e68c8eb688d62a78fecbdc7de1e5f5dbc0a937603ee59a7043a623f3d45e4b7" exitCode=0 Feb 16 23:09:03 crc kubenswrapper[4865]: I0216 23:09:03.991834 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-6crtq" event={"ID":"82582c93-5f30-417e-a5f1-62038c6f8000","Type":"ContainerDied","Data":"2e68c8eb688d62a78fecbdc7de1e5f5dbc0a937603ee59a7043a623f3d45e4b7"} Feb 16 23:09:05 crc kubenswrapper[4865]: I0216 23:09:05.507528 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-6crtq" Feb 16 23:09:05 crc kubenswrapper[4865]: I0216 23:09:05.688018 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/82582c93-5f30-417e-a5f1-62038c6f8000-ssh-key-openstack-edpm-ipam\") pod \"82582c93-5f30-417e-a5f1-62038c6f8000\" (UID: \"82582c93-5f30-417e-a5f1-62038c6f8000\") " Feb 16 23:09:05 crc kubenswrapper[4865]: I0216 23:09:05.688128 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9dqzd\" (UniqueName: \"kubernetes.io/projected/82582c93-5f30-417e-a5f1-62038c6f8000-kube-api-access-9dqzd\") pod \"82582c93-5f30-417e-a5f1-62038c6f8000\" (UID: \"82582c93-5f30-417e-a5f1-62038c6f8000\") " Feb 16 23:09:05 crc kubenswrapper[4865]: I0216 23:09:05.688225 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/82582c93-5f30-417e-a5f1-62038c6f8000-inventory\") pod \"82582c93-5f30-417e-a5f1-62038c6f8000\" (UID: \"82582c93-5f30-417e-a5f1-62038c6f8000\") " Feb 16 23:09:05 crc kubenswrapper[4865]: I0216 23:09:05.698507 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82582c93-5f30-417e-a5f1-62038c6f8000-kube-api-access-9dqzd" (OuterVolumeSpecName: "kube-api-access-9dqzd") pod "82582c93-5f30-417e-a5f1-62038c6f8000" (UID: "82582c93-5f30-417e-a5f1-62038c6f8000"). InnerVolumeSpecName "kube-api-access-9dqzd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:09:05 crc kubenswrapper[4865]: I0216 23:09:05.718116 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82582c93-5f30-417e-a5f1-62038c6f8000-inventory" (OuterVolumeSpecName: "inventory") pod "82582c93-5f30-417e-a5f1-62038c6f8000" (UID: "82582c93-5f30-417e-a5f1-62038c6f8000"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:09:05 crc kubenswrapper[4865]: I0216 23:09:05.730252 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82582c93-5f30-417e-a5f1-62038c6f8000-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "82582c93-5f30-417e-a5f1-62038c6f8000" (UID: "82582c93-5f30-417e-a5f1-62038c6f8000"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:09:05 crc kubenswrapper[4865]: I0216 23:09:05.790794 4865 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/82582c93-5f30-417e-a5f1-62038c6f8000-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 23:09:05 crc kubenswrapper[4865]: I0216 23:09:05.790832 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9dqzd\" (UniqueName: \"kubernetes.io/projected/82582c93-5f30-417e-a5f1-62038c6f8000-kube-api-access-9dqzd\") on node \"crc\" DevicePath \"\"" Feb 16 23:09:05 crc kubenswrapper[4865]: I0216 23:09:05.790845 4865 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/82582c93-5f30-417e-a5f1-62038c6f8000-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 23:09:06 crc kubenswrapper[4865]: I0216 23:09:06.015721 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-6crtq" event={"ID":"82582c93-5f30-417e-a5f1-62038c6f8000","Type":"ContainerDied","Data":"c8c47cfa69ed9e72c82d7546f23ec8e6877d82500950cdd84493052868236246"} Feb 16 23:09:06 crc kubenswrapper[4865]: I0216 23:09:06.015769 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8c47cfa69ed9e72c82d7546f23ec8e6877d82500950cdd84493052868236246" Feb 16 23:09:06 crc kubenswrapper[4865]: I0216 
23:09:06.015765 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-6crtq" Feb 16 23:09:06 crc kubenswrapper[4865]: I0216 23:09:06.105316 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-js45z"] Feb 16 23:09:06 crc kubenswrapper[4865]: E0216 23:09:06.105827 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82582c93-5f30-417e-a5f1-62038c6f8000" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 16 23:09:06 crc kubenswrapper[4865]: I0216 23:09:06.105852 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="82582c93-5f30-417e-a5f1-62038c6f8000" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 16 23:09:06 crc kubenswrapper[4865]: I0216 23:09:06.106097 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="82582c93-5f30-417e-a5f1-62038c6f8000" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 16 23:09:06 crc kubenswrapper[4865]: I0216 23:09:06.107126 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-js45z" Feb 16 23:09:06 crc kubenswrapper[4865]: I0216 23:09:06.109328 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rpk98" Feb 16 23:09:06 crc kubenswrapper[4865]: I0216 23:09:06.109368 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 23:09:06 crc kubenswrapper[4865]: I0216 23:09:06.109350 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 23:09:06 crc kubenswrapper[4865]: I0216 23:09:06.111051 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 23:09:06 crc kubenswrapper[4865]: I0216 23:09:06.122337 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-js45z"] Feb 16 23:09:06 crc kubenswrapper[4865]: I0216 23:09:06.301511 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24da9b19-2d45-4f18-a79e-bf378e4ee44d-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-js45z\" (UID: \"24da9b19-2d45-4f18-a79e-bf378e4ee44d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-js45z" Feb 16 23:09:06 crc kubenswrapper[4865]: I0216 23:09:06.301935 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24da9b19-2d45-4f18-a79e-bf378e4ee44d-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-js45z\" (UID: \"24da9b19-2d45-4f18-a79e-bf378e4ee44d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-js45z" Feb 16 23:09:06 crc kubenswrapper[4865]: I0216 23:09:06.302036 4865 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/24da9b19-2d45-4f18-a79e-bf378e4ee44d-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-js45z\" (UID: \"24da9b19-2d45-4f18-a79e-bf378e4ee44d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-js45z" Feb 16 23:09:06 crc kubenswrapper[4865]: I0216 23:09:06.302147 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4zbk\" (UniqueName: \"kubernetes.io/projected/24da9b19-2d45-4f18-a79e-bf378e4ee44d-kube-api-access-s4zbk\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-js45z\" (UID: \"24da9b19-2d45-4f18-a79e-bf378e4ee44d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-js45z" Feb 16 23:09:06 crc kubenswrapper[4865]: I0216 23:09:06.403863 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24da9b19-2d45-4f18-a79e-bf378e4ee44d-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-js45z\" (UID: \"24da9b19-2d45-4f18-a79e-bf378e4ee44d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-js45z" Feb 16 23:09:06 crc kubenswrapper[4865]: I0216 23:09:06.404367 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24da9b19-2d45-4f18-a79e-bf378e4ee44d-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-js45z\" (UID: \"24da9b19-2d45-4f18-a79e-bf378e4ee44d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-js45z" Feb 16 23:09:06 crc kubenswrapper[4865]: I0216 23:09:06.405199 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/24da9b19-2d45-4f18-a79e-bf378e4ee44d-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-js45z\" (UID: \"24da9b19-2d45-4f18-a79e-bf378e4ee44d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-js45z" Feb 16 23:09:06 crc kubenswrapper[4865]: I0216 23:09:06.405617 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4zbk\" (UniqueName: \"kubernetes.io/projected/24da9b19-2d45-4f18-a79e-bf378e4ee44d-kube-api-access-s4zbk\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-js45z\" (UID: \"24da9b19-2d45-4f18-a79e-bf378e4ee44d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-js45z" Feb 16 23:09:06 crc kubenswrapper[4865]: I0216 23:09:06.411417 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24da9b19-2d45-4f18-a79e-bf378e4ee44d-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-js45z\" (UID: \"24da9b19-2d45-4f18-a79e-bf378e4ee44d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-js45z" Feb 16 23:09:06 crc kubenswrapper[4865]: I0216 23:09:06.411482 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/24da9b19-2d45-4f18-a79e-bf378e4ee44d-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-js45z\" (UID: \"24da9b19-2d45-4f18-a79e-bf378e4ee44d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-js45z" Feb 16 23:09:06 crc kubenswrapper[4865]: I0216 23:09:06.411940 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24da9b19-2d45-4f18-a79e-bf378e4ee44d-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-js45z\" (UID: \"24da9b19-2d45-4f18-a79e-bf378e4ee44d\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-js45z" Feb 16 23:09:06 crc kubenswrapper[4865]: I0216 23:09:06.445765 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4zbk\" (UniqueName: \"kubernetes.io/projected/24da9b19-2d45-4f18-a79e-bf378e4ee44d-kube-api-access-s4zbk\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-js45z\" (UID: \"24da9b19-2d45-4f18-a79e-bf378e4ee44d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-js45z" Feb 16 23:09:06 crc kubenswrapper[4865]: I0216 23:09:06.722934 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-js45z" Feb 16 23:09:07 crc kubenswrapper[4865]: I0216 23:09:07.413220 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-js45z"] Feb 16 23:09:08 crc kubenswrapper[4865]: I0216 23:09:08.038092 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-js45z" event={"ID":"24da9b19-2d45-4f18-a79e-bf378e4ee44d","Type":"ContainerStarted","Data":"fb7f5aeb730e7d474b18e6e5f9c972b33800973f580ead0380937cf57858772d"} Feb 16 23:09:09 crc kubenswrapper[4865]: I0216 23:09:09.058133 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-js45z" event={"ID":"24da9b19-2d45-4f18-a79e-bf378e4ee44d","Type":"ContainerStarted","Data":"04c6f7be7c4ff55f39542591e1cf3186a9f8a174f6d2272126287fb8bd60101b"} Feb 16 23:09:09 crc kubenswrapper[4865]: I0216 23:09:09.090722 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-js45z" podStartSLOduration=2.641252772 podStartE2EDuration="3.090689953s" podCreationTimestamp="2026-02-16 23:09:06 +0000 UTC" firstStartedPulling="2026-02-16 23:09:07.429450607 +0000 UTC m=+1387.753157578" 
lastFinishedPulling="2026-02-16 23:09:07.878887768 +0000 UTC m=+1388.202594759" observedRunningTime="2026-02-16 23:09:09.087318908 +0000 UTC m=+1389.411025879" watchObservedRunningTime="2026-02-16 23:09:09.090689953 +0000 UTC m=+1389.414396964" Feb 16 23:10:02 crc kubenswrapper[4865]: I0216 23:10:02.879131 4865 scope.go:117] "RemoveContainer" containerID="a592aa0cf0c71c42c2b4d3333825dd8933b2029c24fad790c4368d99f18627b7" Feb 16 23:10:02 crc kubenswrapper[4865]: I0216 23:10:02.917303 4865 scope.go:117] "RemoveContainer" containerID="8c65117d8ae27e4e3bee9b5d12360857b6b599279c1359b964481e9cfbb96d61" Feb 16 23:10:02 crc kubenswrapper[4865]: I0216 23:10:02.964771 4865 scope.go:117] "RemoveContainer" containerID="bdc0f90485d9a58e592b824d00e990259d3bcdc4f55d2a284a090556ce125509" Feb 16 23:10:03 crc kubenswrapper[4865]: I0216 23:10:03.010308 4865 scope.go:117] "RemoveContainer" containerID="2f12214fa3dbb0a0eb40c49f66e2714d74b0a8cc82e85fb91dc638a5e51aec89" Feb 16 23:10:03 crc kubenswrapper[4865]: I0216 23:10:03.066192 4865 scope.go:117] "RemoveContainer" containerID="723cf77b5d28b547b58bce73488a13ab6681ba903e51af345e10635b9efe79c0" Feb 16 23:10:03 crc kubenswrapper[4865]: I0216 23:10:03.099504 4865 scope.go:117] "RemoveContainer" containerID="e9f09b0b3de8fdb0ab95a86c9f603cfea5beaa2631946168091dc94774f6c73b" Feb 16 23:10:20 crc kubenswrapper[4865]: I0216 23:10:20.968520 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-r8ffm"] Feb 16 23:10:20 crc kubenswrapper[4865]: I0216 23:10:20.972979 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-r8ffm" Feb 16 23:10:20 crc kubenswrapper[4865]: I0216 23:10:20.981207 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-r8ffm"] Feb 16 23:10:21 crc kubenswrapper[4865]: I0216 23:10:21.120527 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecf2cbed-2083-4035-a6a4-ab3ef2a91114-catalog-content\") pod \"community-operators-r8ffm\" (UID: \"ecf2cbed-2083-4035-a6a4-ab3ef2a91114\") " pod="openshift-marketplace/community-operators-r8ffm" Feb 16 23:10:21 crc kubenswrapper[4865]: I0216 23:10:21.120616 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x57jr\" (UniqueName: \"kubernetes.io/projected/ecf2cbed-2083-4035-a6a4-ab3ef2a91114-kube-api-access-x57jr\") pod \"community-operators-r8ffm\" (UID: \"ecf2cbed-2083-4035-a6a4-ab3ef2a91114\") " pod="openshift-marketplace/community-operators-r8ffm" Feb 16 23:10:21 crc kubenswrapper[4865]: I0216 23:10:21.120692 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecf2cbed-2083-4035-a6a4-ab3ef2a91114-utilities\") pod \"community-operators-r8ffm\" (UID: \"ecf2cbed-2083-4035-a6a4-ab3ef2a91114\") " pod="openshift-marketplace/community-operators-r8ffm" Feb 16 23:10:21 crc kubenswrapper[4865]: I0216 23:10:21.222645 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecf2cbed-2083-4035-a6a4-ab3ef2a91114-utilities\") pod \"community-operators-r8ffm\" (UID: \"ecf2cbed-2083-4035-a6a4-ab3ef2a91114\") " pod="openshift-marketplace/community-operators-r8ffm" Feb 16 23:10:21 crc kubenswrapper[4865]: I0216 23:10:21.222827 4865 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecf2cbed-2083-4035-a6a4-ab3ef2a91114-catalog-content\") pod \"community-operators-r8ffm\" (UID: \"ecf2cbed-2083-4035-a6a4-ab3ef2a91114\") " pod="openshift-marketplace/community-operators-r8ffm" Feb 16 23:10:21 crc kubenswrapper[4865]: I0216 23:10:21.222871 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x57jr\" (UniqueName: \"kubernetes.io/projected/ecf2cbed-2083-4035-a6a4-ab3ef2a91114-kube-api-access-x57jr\") pod \"community-operators-r8ffm\" (UID: \"ecf2cbed-2083-4035-a6a4-ab3ef2a91114\") " pod="openshift-marketplace/community-operators-r8ffm" Feb 16 23:10:21 crc kubenswrapper[4865]: I0216 23:10:21.223567 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecf2cbed-2083-4035-a6a4-ab3ef2a91114-catalog-content\") pod \"community-operators-r8ffm\" (UID: \"ecf2cbed-2083-4035-a6a4-ab3ef2a91114\") " pod="openshift-marketplace/community-operators-r8ffm" Feb 16 23:10:21 crc kubenswrapper[4865]: I0216 23:10:21.223589 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecf2cbed-2083-4035-a6a4-ab3ef2a91114-utilities\") pod \"community-operators-r8ffm\" (UID: \"ecf2cbed-2083-4035-a6a4-ab3ef2a91114\") " pod="openshift-marketplace/community-operators-r8ffm" Feb 16 23:10:21 crc kubenswrapper[4865]: I0216 23:10:21.247974 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x57jr\" (UniqueName: \"kubernetes.io/projected/ecf2cbed-2083-4035-a6a4-ab3ef2a91114-kube-api-access-x57jr\") pod \"community-operators-r8ffm\" (UID: \"ecf2cbed-2083-4035-a6a4-ab3ef2a91114\") " pod="openshift-marketplace/community-operators-r8ffm" Feb 16 23:10:21 crc kubenswrapper[4865]: I0216 23:10:21.297059 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-r8ffm" Feb 16 23:10:21 crc kubenswrapper[4865]: I0216 23:10:21.713646 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-r8ffm"] Feb 16 23:10:21 crc kubenswrapper[4865]: I0216 23:10:21.927355 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r8ffm" event={"ID":"ecf2cbed-2083-4035-a6a4-ab3ef2a91114","Type":"ContainerStarted","Data":"151f13dd9e89cfae08c1cd84e2bc2fa262c36c0c9eae88bb928af7af19d28340"} Feb 16 23:10:22 crc kubenswrapper[4865]: I0216 23:10:22.938187 4865 generic.go:334] "Generic (PLEG): container finished" podID="ecf2cbed-2083-4035-a6a4-ab3ef2a91114" containerID="6599879ceabdbbcad32efe09c3bf5a10122a39fb16259f5713048a7d7854b364" exitCode=0 Feb 16 23:10:22 crc kubenswrapper[4865]: I0216 23:10:22.938318 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r8ffm" event={"ID":"ecf2cbed-2083-4035-a6a4-ab3ef2a91114","Type":"ContainerDied","Data":"6599879ceabdbbcad32efe09c3bf5a10122a39fb16259f5713048a7d7854b364"} Feb 16 23:10:23 crc kubenswrapper[4865]: I0216 23:10:23.953766 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r8ffm" event={"ID":"ecf2cbed-2083-4035-a6a4-ab3ef2a91114","Type":"ContainerStarted","Data":"f6227b2c41083582360866f2b2748439f1cce9e51c4e71c95e05daeb61bb2e15"} Feb 16 23:10:24 crc kubenswrapper[4865]: I0216 23:10:24.970614 4865 generic.go:334] "Generic (PLEG): container finished" podID="ecf2cbed-2083-4035-a6a4-ab3ef2a91114" containerID="f6227b2c41083582360866f2b2748439f1cce9e51c4e71c95e05daeb61bb2e15" exitCode=0 Feb 16 23:10:24 crc kubenswrapper[4865]: I0216 23:10:24.970755 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r8ffm" 
event={"ID":"ecf2cbed-2083-4035-a6a4-ab3ef2a91114","Type":"ContainerDied","Data":"f6227b2c41083582360866f2b2748439f1cce9e51c4e71c95e05daeb61bb2e15"} Feb 16 23:10:25 crc kubenswrapper[4865]: I0216 23:10:25.988488 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r8ffm" event={"ID":"ecf2cbed-2083-4035-a6a4-ab3ef2a91114","Type":"ContainerStarted","Data":"36ccbcfab803c43cd2fb720110f9aec043366be5ad8d0a2ddff1a391b9b773de"} Feb 16 23:10:26 crc kubenswrapper[4865]: I0216 23:10:26.021597 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-r8ffm" podStartSLOduration=3.577131328 podStartE2EDuration="6.021563428s" podCreationTimestamp="2026-02-16 23:10:20 +0000 UTC" firstStartedPulling="2026-02-16 23:10:22.941392143 +0000 UTC m=+1463.265099124" lastFinishedPulling="2026-02-16 23:10:25.385824223 +0000 UTC m=+1465.709531224" observedRunningTime="2026-02-16 23:10:26.012175942 +0000 UTC m=+1466.335882943" watchObservedRunningTime="2026-02-16 23:10:26.021563428 +0000 UTC m=+1466.345270419" Feb 16 23:10:31 crc kubenswrapper[4865]: I0216 23:10:31.302750 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-r8ffm" Feb 16 23:10:31 crc kubenswrapper[4865]: I0216 23:10:31.303368 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-r8ffm" Feb 16 23:10:31 crc kubenswrapper[4865]: I0216 23:10:31.372783 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-r8ffm" Feb 16 23:10:32 crc kubenswrapper[4865]: I0216 23:10:32.104723 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-r8ffm" Feb 16 23:10:32 crc kubenswrapper[4865]: I0216 23:10:32.153584 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-r8ffm"] Feb 16 23:10:34 crc kubenswrapper[4865]: I0216 23:10:34.072819 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-r8ffm" podUID="ecf2cbed-2083-4035-a6a4-ab3ef2a91114" containerName="registry-server" containerID="cri-o://36ccbcfab803c43cd2fb720110f9aec043366be5ad8d0a2ddff1a391b9b773de" gracePeriod=2 Feb 16 23:10:34 crc kubenswrapper[4865]: I0216 23:10:34.612270 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r8ffm" Feb 16 23:10:34 crc kubenswrapper[4865]: I0216 23:10:34.644452 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecf2cbed-2083-4035-a6a4-ab3ef2a91114-utilities\") pod \"ecf2cbed-2083-4035-a6a4-ab3ef2a91114\" (UID: \"ecf2cbed-2083-4035-a6a4-ab3ef2a91114\") " Feb 16 23:10:34 crc kubenswrapper[4865]: I0216 23:10:34.644664 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x57jr\" (UniqueName: \"kubernetes.io/projected/ecf2cbed-2083-4035-a6a4-ab3ef2a91114-kube-api-access-x57jr\") pod \"ecf2cbed-2083-4035-a6a4-ab3ef2a91114\" (UID: \"ecf2cbed-2083-4035-a6a4-ab3ef2a91114\") " Feb 16 23:10:34 crc kubenswrapper[4865]: I0216 23:10:34.644703 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecf2cbed-2083-4035-a6a4-ab3ef2a91114-catalog-content\") pod \"ecf2cbed-2083-4035-a6a4-ab3ef2a91114\" (UID: \"ecf2cbed-2083-4035-a6a4-ab3ef2a91114\") " Feb 16 23:10:34 crc kubenswrapper[4865]: I0216 23:10:34.649081 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ecf2cbed-2083-4035-a6a4-ab3ef2a91114-utilities" (OuterVolumeSpecName: "utilities") pod "ecf2cbed-2083-4035-a6a4-ab3ef2a91114" (UID: 
"ecf2cbed-2083-4035-a6a4-ab3ef2a91114"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:10:34 crc kubenswrapper[4865]: I0216 23:10:34.663124 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecf2cbed-2083-4035-a6a4-ab3ef2a91114-kube-api-access-x57jr" (OuterVolumeSpecName: "kube-api-access-x57jr") pod "ecf2cbed-2083-4035-a6a4-ab3ef2a91114" (UID: "ecf2cbed-2083-4035-a6a4-ab3ef2a91114"). InnerVolumeSpecName "kube-api-access-x57jr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:10:34 crc kubenswrapper[4865]: I0216 23:10:34.710501 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ecf2cbed-2083-4035-a6a4-ab3ef2a91114-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ecf2cbed-2083-4035-a6a4-ab3ef2a91114" (UID: "ecf2cbed-2083-4035-a6a4-ab3ef2a91114"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:10:34 crc kubenswrapper[4865]: I0216 23:10:34.748825 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecf2cbed-2083-4035-a6a4-ab3ef2a91114-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 23:10:34 crc kubenswrapper[4865]: I0216 23:10:34.748881 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x57jr\" (UniqueName: \"kubernetes.io/projected/ecf2cbed-2083-4035-a6a4-ab3ef2a91114-kube-api-access-x57jr\") on node \"crc\" DevicePath \"\"" Feb 16 23:10:34 crc kubenswrapper[4865]: I0216 23:10:34.748902 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecf2cbed-2083-4035-a6a4-ab3ef2a91114-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 23:10:35 crc kubenswrapper[4865]: I0216 23:10:35.087232 4865 generic.go:334] "Generic (PLEG): container finished" 
podID="ecf2cbed-2083-4035-a6a4-ab3ef2a91114" containerID="36ccbcfab803c43cd2fb720110f9aec043366be5ad8d0a2ddff1a391b9b773de" exitCode=0 Feb 16 23:10:35 crc kubenswrapper[4865]: I0216 23:10:35.087346 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r8ffm" event={"ID":"ecf2cbed-2083-4035-a6a4-ab3ef2a91114","Type":"ContainerDied","Data":"36ccbcfab803c43cd2fb720110f9aec043366be5ad8d0a2ddff1a391b9b773de"} Feb 16 23:10:35 crc kubenswrapper[4865]: I0216 23:10:35.087365 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r8ffm" Feb 16 23:10:35 crc kubenswrapper[4865]: I0216 23:10:35.087405 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r8ffm" event={"ID":"ecf2cbed-2083-4035-a6a4-ab3ef2a91114","Type":"ContainerDied","Data":"151f13dd9e89cfae08c1cd84e2bc2fa262c36c0c9eae88bb928af7af19d28340"} Feb 16 23:10:35 crc kubenswrapper[4865]: I0216 23:10:35.087443 4865 scope.go:117] "RemoveContainer" containerID="36ccbcfab803c43cd2fb720110f9aec043366be5ad8d0a2ddff1a391b9b773de" Feb 16 23:10:35 crc kubenswrapper[4865]: I0216 23:10:35.134745 4865 scope.go:117] "RemoveContainer" containerID="f6227b2c41083582360866f2b2748439f1cce9e51c4e71c95e05daeb61bb2e15" Feb 16 23:10:35 crc kubenswrapper[4865]: I0216 23:10:35.150158 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-r8ffm"] Feb 16 23:10:35 crc kubenswrapper[4865]: I0216 23:10:35.165240 4865 scope.go:117] "RemoveContainer" containerID="6599879ceabdbbcad32efe09c3bf5a10122a39fb16259f5713048a7d7854b364" Feb 16 23:10:35 crc kubenswrapper[4865]: I0216 23:10:35.167512 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-r8ffm"] Feb 16 23:10:35 crc kubenswrapper[4865]: I0216 23:10:35.224189 4865 scope.go:117] "RemoveContainer" 
containerID="36ccbcfab803c43cd2fb720110f9aec043366be5ad8d0a2ddff1a391b9b773de" Feb 16 23:10:35 crc kubenswrapper[4865]: E0216 23:10:35.224775 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36ccbcfab803c43cd2fb720110f9aec043366be5ad8d0a2ddff1a391b9b773de\": container with ID starting with 36ccbcfab803c43cd2fb720110f9aec043366be5ad8d0a2ddff1a391b9b773de not found: ID does not exist" containerID="36ccbcfab803c43cd2fb720110f9aec043366be5ad8d0a2ddff1a391b9b773de" Feb 16 23:10:35 crc kubenswrapper[4865]: I0216 23:10:35.224862 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36ccbcfab803c43cd2fb720110f9aec043366be5ad8d0a2ddff1a391b9b773de"} err="failed to get container status \"36ccbcfab803c43cd2fb720110f9aec043366be5ad8d0a2ddff1a391b9b773de\": rpc error: code = NotFound desc = could not find container \"36ccbcfab803c43cd2fb720110f9aec043366be5ad8d0a2ddff1a391b9b773de\": container with ID starting with 36ccbcfab803c43cd2fb720110f9aec043366be5ad8d0a2ddff1a391b9b773de not found: ID does not exist" Feb 16 23:10:35 crc kubenswrapper[4865]: I0216 23:10:35.224932 4865 scope.go:117] "RemoveContainer" containerID="f6227b2c41083582360866f2b2748439f1cce9e51c4e71c95e05daeb61bb2e15" Feb 16 23:10:35 crc kubenswrapper[4865]: E0216 23:10:35.225496 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6227b2c41083582360866f2b2748439f1cce9e51c4e71c95e05daeb61bb2e15\": container with ID starting with f6227b2c41083582360866f2b2748439f1cce9e51c4e71c95e05daeb61bb2e15 not found: ID does not exist" containerID="f6227b2c41083582360866f2b2748439f1cce9e51c4e71c95e05daeb61bb2e15" Feb 16 23:10:35 crc kubenswrapper[4865]: I0216 23:10:35.225527 4865 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f6227b2c41083582360866f2b2748439f1cce9e51c4e71c95e05daeb61bb2e15"} err="failed to get container status \"f6227b2c41083582360866f2b2748439f1cce9e51c4e71c95e05daeb61bb2e15\": rpc error: code = NotFound desc = could not find container \"f6227b2c41083582360866f2b2748439f1cce9e51c4e71c95e05daeb61bb2e15\": container with ID starting with f6227b2c41083582360866f2b2748439f1cce9e51c4e71c95e05daeb61bb2e15 not found: ID does not exist" Feb 16 23:10:35 crc kubenswrapper[4865]: I0216 23:10:35.225547 4865 scope.go:117] "RemoveContainer" containerID="6599879ceabdbbcad32efe09c3bf5a10122a39fb16259f5713048a7d7854b364" Feb 16 23:10:35 crc kubenswrapper[4865]: E0216 23:10:35.226088 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6599879ceabdbbcad32efe09c3bf5a10122a39fb16259f5713048a7d7854b364\": container with ID starting with 6599879ceabdbbcad32efe09c3bf5a10122a39fb16259f5713048a7d7854b364 not found: ID does not exist" containerID="6599879ceabdbbcad32efe09c3bf5a10122a39fb16259f5713048a7d7854b364" Feb 16 23:10:35 crc kubenswrapper[4865]: I0216 23:10:35.226161 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6599879ceabdbbcad32efe09c3bf5a10122a39fb16259f5713048a7d7854b364"} err="failed to get container status \"6599879ceabdbbcad32efe09c3bf5a10122a39fb16259f5713048a7d7854b364\": rpc error: code = NotFound desc = could not find container \"6599879ceabdbbcad32efe09c3bf5a10122a39fb16259f5713048a7d7854b364\": container with ID starting with 6599879ceabdbbcad32efe09c3bf5a10122a39fb16259f5713048a7d7854b364 not found: ID does not exist" Feb 16 23:10:36 crc kubenswrapper[4865]: I0216 23:10:36.434419 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecf2cbed-2083-4035-a6a4-ab3ef2a91114" path="/var/lib/kubelet/pods/ecf2cbed-2083-4035-a6a4-ab3ef2a91114/volumes" Feb 16 23:10:42 crc kubenswrapper[4865]: I0216 
23:10:42.282465 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mssn9"] Feb 16 23:10:42 crc kubenswrapper[4865]: E0216 23:10:42.284252 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecf2cbed-2083-4035-a6a4-ab3ef2a91114" containerName="extract-content" Feb 16 23:10:42 crc kubenswrapper[4865]: I0216 23:10:42.284302 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecf2cbed-2083-4035-a6a4-ab3ef2a91114" containerName="extract-content" Feb 16 23:10:42 crc kubenswrapper[4865]: E0216 23:10:42.284345 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecf2cbed-2083-4035-a6a4-ab3ef2a91114" containerName="extract-utilities" Feb 16 23:10:42 crc kubenswrapper[4865]: I0216 23:10:42.284360 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecf2cbed-2083-4035-a6a4-ab3ef2a91114" containerName="extract-utilities" Feb 16 23:10:42 crc kubenswrapper[4865]: E0216 23:10:42.284397 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecf2cbed-2083-4035-a6a4-ab3ef2a91114" containerName="registry-server" Feb 16 23:10:42 crc kubenswrapper[4865]: I0216 23:10:42.284411 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecf2cbed-2083-4035-a6a4-ab3ef2a91114" containerName="registry-server" Feb 16 23:10:42 crc kubenswrapper[4865]: I0216 23:10:42.285100 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecf2cbed-2083-4035-a6a4-ab3ef2a91114" containerName="registry-server" Feb 16 23:10:42 crc kubenswrapper[4865]: I0216 23:10:42.289339 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mssn9" Feb 16 23:10:42 crc kubenswrapper[4865]: I0216 23:10:42.311203 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mssn9"] Feb 16 23:10:42 crc kubenswrapper[4865]: I0216 23:10:42.331100 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4llr\" (UniqueName: \"kubernetes.io/projected/a830bc39-064f-4a96-a6ee-722d32dd8ed9-kube-api-access-j4llr\") pod \"redhat-marketplace-mssn9\" (UID: \"a830bc39-064f-4a96-a6ee-722d32dd8ed9\") " pod="openshift-marketplace/redhat-marketplace-mssn9" Feb 16 23:10:42 crc kubenswrapper[4865]: I0216 23:10:42.331189 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a830bc39-064f-4a96-a6ee-722d32dd8ed9-utilities\") pod \"redhat-marketplace-mssn9\" (UID: \"a830bc39-064f-4a96-a6ee-722d32dd8ed9\") " pod="openshift-marketplace/redhat-marketplace-mssn9" Feb 16 23:10:42 crc kubenswrapper[4865]: I0216 23:10:42.331228 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a830bc39-064f-4a96-a6ee-722d32dd8ed9-catalog-content\") pod \"redhat-marketplace-mssn9\" (UID: \"a830bc39-064f-4a96-a6ee-722d32dd8ed9\") " pod="openshift-marketplace/redhat-marketplace-mssn9" Feb 16 23:10:42 crc kubenswrapper[4865]: I0216 23:10:42.433709 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4llr\" (UniqueName: \"kubernetes.io/projected/a830bc39-064f-4a96-a6ee-722d32dd8ed9-kube-api-access-j4llr\") pod \"redhat-marketplace-mssn9\" (UID: \"a830bc39-064f-4a96-a6ee-722d32dd8ed9\") " pod="openshift-marketplace/redhat-marketplace-mssn9" Feb 16 23:10:42 crc kubenswrapper[4865]: I0216 23:10:42.433795 4865 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a830bc39-064f-4a96-a6ee-722d32dd8ed9-utilities\") pod \"redhat-marketplace-mssn9\" (UID: \"a830bc39-064f-4a96-a6ee-722d32dd8ed9\") " pod="openshift-marketplace/redhat-marketplace-mssn9" Feb 16 23:10:42 crc kubenswrapper[4865]: I0216 23:10:42.433833 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a830bc39-064f-4a96-a6ee-722d32dd8ed9-catalog-content\") pod \"redhat-marketplace-mssn9\" (UID: \"a830bc39-064f-4a96-a6ee-722d32dd8ed9\") " pod="openshift-marketplace/redhat-marketplace-mssn9" Feb 16 23:10:42 crc kubenswrapper[4865]: I0216 23:10:42.434553 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a830bc39-064f-4a96-a6ee-722d32dd8ed9-utilities\") pod \"redhat-marketplace-mssn9\" (UID: \"a830bc39-064f-4a96-a6ee-722d32dd8ed9\") " pod="openshift-marketplace/redhat-marketplace-mssn9" Feb 16 23:10:42 crc kubenswrapper[4865]: I0216 23:10:42.434580 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a830bc39-064f-4a96-a6ee-722d32dd8ed9-catalog-content\") pod \"redhat-marketplace-mssn9\" (UID: \"a830bc39-064f-4a96-a6ee-722d32dd8ed9\") " pod="openshift-marketplace/redhat-marketplace-mssn9" Feb 16 23:10:42 crc kubenswrapper[4865]: I0216 23:10:42.461652 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4llr\" (UniqueName: \"kubernetes.io/projected/a830bc39-064f-4a96-a6ee-722d32dd8ed9-kube-api-access-j4llr\") pod \"redhat-marketplace-mssn9\" (UID: \"a830bc39-064f-4a96-a6ee-722d32dd8ed9\") " pod="openshift-marketplace/redhat-marketplace-mssn9" Feb 16 23:10:42 crc kubenswrapper[4865]: I0216 23:10:42.617040 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mssn9" Feb 16 23:10:43 crc kubenswrapper[4865]: I0216 23:10:43.151671 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mssn9"] Feb 16 23:10:43 crc kubenswrapper[4865]: I0216 23:10:43.190908 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mssn9" event={"ID":"a830bc39-064f-4a96-a6ee-722d32dd8ed9","Type":"ContainerStarted","Data":"8071fbbcae5877ec44b23f2847735d0602cc4a61f365aee5f14b2c3a14574257"} Feb 16 23:10:44 crc kubenswrapper[4865]: I0216 23:10:44.205015 4865 generic.go:334] "Generic (PLEG): container finished" podID="a830bc39-064f-4a96-a6ee-722d32dd8ed9" containerID="805229bdee0eb91672ec8ebc087e5bf0d8a4bb3cc6971d38e9084c5a9d69de3d" exitCode=0 Feb 16 23:10:44 crc kubenswrapper[4865]: I0216 23:10:44.205268 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mssn9" event={"ID":"a830bc39-064f-4a96-a6ee-722d32dd8ed9","Type":"ContainerDied","Data":"805229bdee0eb91672ec8ebc087e5bf0d8a4bb3cc6971d38e9084c5a9d69de3d"} Feb 16 23:10:45 crc kubenswrapper[4865]: I0216 23:10:45.222693 4865 generic.go:334] "Generic (PLEG): container finished" podID="a830bc39-064f-4a96-a6ee-722d32dd8ed9" containerID="81ecac6c9d8354e6eb7b230a6c46a925002642178ac9aec6c61d853b8e1db931" exitCode=0 Feb 16 23:10:45 crc kubenswrapper[4865]: I0216 23:10:45.222774 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mssn9" event={"ID":"a830bc39-064f-4a96-a6ee-722d32dd8ed9","Type":"ContainerDied","Data":"81ecac6c9d8354e6eb7b230a6c46a925002642178ac9aec6c61d853b8e1db931"} Feb 16 23:10:45 crc kubenswrapper[4865]: I0216 23:10:45.664427 4865 patch_prober.go:28] interesting pod/machine-config-daemon-7sl6f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 23:10:45 crc kubenswrapper[4865]: I0216 23:10:45.664501 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 23:10:46 crc kubenswrapper[4865]: I0216 23:10:46.247252 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mssn9" event={"ID":"a830bc39-064f-4a96-a6ee-722d32dd8ed9","Type":"ContainerStarted","Data":"0021ad5485151a52a59170f9e830ba4e2a6350c6727517b5be760350c7775e49"} Feb 16 23:10:46 crc kubenswrapper[4865]: I0216 23:10:46.277833 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mssn9" podStartSLOduration=2.848307926 podStartE2EDuration="4.277811073s" podCreationTimestamp="2026-02-16 23:10:42 +0000 UTC" firstStartedPulling="2026-02-16 23:10:44.209223775 +0000 UTC m=+1484.532930776" lastFinishedPulling="2026-02-16 23:10:45.638726922 +0000 UTC m=+1485.962433923" observedRunningTime="2026-02-16 23:10:46.275531089 +0000 UTC m=+1486.599238120" watchObservedRunningTime="2026-02-16 23:10:46.277811073 +0000 UTC m=+1486.601518034" Feb 16 23:10:52 crc kubenswrapper[4865]: I0216 23:10:52.617326 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mssn9" Feb 16 23:10:52 crc kubenswrapper[4865]: I0216 23:10:52.618034 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mssn9" Feb 16 23:10:52 crc kubenswrapper[4865]: I0216 23:10:52.672549 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-marketplace-mssn9" Feb 16 23:10:53 crc kubenswrapper[4865]: I0216 23:10:53.399236 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mssn9" Feb 16 23:10:53 crc kubenswrapper[4865]: I0216 23:10:53.467237 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mssn9"] Feb 16 23:10:55 crc kubenswrapper[4865]: I0216 23:10:55.348689 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mssn9" podUID="a830bc39-064f-4a96-a6ee-722d32dd8ed9" containerName="registry-server" containerID="cri-o://0021ad5485151a52a59170f9e830ba4e2a6350c6727517b5be760350c7775e49" gracePeriod=2 Feb 16 23:10:55 crc kubenswrapper[4865]: I0216 23:10:55.855983 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mssn9" Feb 16 23:10:55 crc kubenswrapper[4865]: I0216 23:10:55.939210 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a830bc39-064f-4a96-a6ee-722d32dd8ed9-catalog-content\") pod \"a830bc39-064f-4a96-a6ee-722d32dd8ed9\" (UID: \"a830bc39-064f-4a96-a6ee-722d32dd8ed9\") " Feb 16 23:10:55 crc kubenswrapper[4865]: I0216 23:10:55.939297 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4llr\" (UniqueName: \"kubernetes.io/projected/a830bc39-064f-4a96-a6ee-722d32dd8ed9-kube-api-access-j4llr\") pod \"a830bc39-064f-4a96-a6ee-722d32dd8ed9\" (UID: \"a830bc39-064f-4a96-a6ee-722d32dd8ed9\") " Feb 16 23:10:55 crc kubenswrapper[4865]: I0216 23:10:55.939528 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a830bc39-064f-4a96-a6ee-722d32dd8ed9-utilities\") pod 
\"a830bc39-064f-4a96-a6ee-722d32dd8ed9\" (UID: \"a830bc39-064f-4a96-a6ee-722d32dd8ed9\") " Feb 16 23:10:55 crc kubenswrapper[4865]: I0216 23:10:55.941200 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a830bc39-064f-4a96-a6ee-722d32dd8ed9-utilities" (OuterVolumeSpecName: "utilities") pod "a830bc39-064f-4a96-a6ee-722d32dd8ed9" (UID: "a830bc39-064f-4a96-a6ee-722d32dd8ed9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:10:55 crc kubenswrapper[4865]: I0216 23:10:55.945984 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a830bc39-064f-4a96-a6ee-722d32dd8ed9-kube-api-access-j4llr" (OuterVolumeSpecName: "kube-api-access-j4llr") pod "a830bc39-064f-4a96-a6ee-722d32dd8ed9" (UID: "a830bc39-064f-4a96-a6ee-722d32dd8ed9"). InnerVolumeSpecName "kube-api-access-j4llr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:10:55 crc kubenswrapper[4865]: I0216 23:10:55.966515 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a830bc39-064f-4a96-a6ee-722d32dd8ed9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a830bc39-064f-4a96-a6ee-722d32dd8ed9" (UID: "a830bc39-064f-4a96-a6ee-722d32dd8ed9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:10:56 crc kubenswrapper[4865]: I0216 23:10:56.041879 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a830bc39-064f-4a96-a6ee-722d32dd8ed9-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 23:10:56 crc kubenswrapper[4865]: I0216 23:10:56.041915 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a830bc39-064f-4a96-a6ee-722d32dd8ed9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 23:10:56 crc kubenswrapper[4865]: I0216 23:10:56.041932 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4llr\" (UniqueName: \"kubernetes.io/projected/a830bc39-064f-4a96-a6ee-722d32dd8ed9-kube-api-access-j4llr\") on node \"crc\" DevicePath \"\"" Feb 16 23:10:56 crc kubenswrapper[4865]: I0216 23:10:56.366561 4865 generic.go:334] "Generic (PLEG): container finished" podID="a830bc39-064f-4a96-a6ee-722d32dd8ed9" containerID="0021ad5485151a52a59170f9e830ba4e2a6350c6727517b5be760350c7775e49" exitCode=0 Feb 16 23:10:56 crc kubenswrapper[4865]: I0216 23:10:56.366627 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mssn9" event={"ID":"a830bc39-064f-4a96-a6ee-722d32dd8ed9","Type":"ContainerDied","Data":"0021ad5485151a52a59170f9e830ba4e2a6350c6727517b5be760350c7775e49"} Feb 16 23:10:56 crc kubenswrapper[4865]: I0216 23:10:56.366663 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mssn9" event={"ID":"a830bc39-064f-4a96-a6ee-722d32dd8ed9","Type":"ContainerDied","Data":"8071fbbcae5877ec44b23f2847735d0602cc4a61f365aee5f14b2c3a14574257"} Feb 16 23:10:56 crc kubenswrapper[4865]: I0216 23:10:56.366677 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mssn9" Feb 16 23:10:56 crc kubenswrapper[4865]: I0216 23:10:56.366687 4865 scope.go:117] "RemoveContainer" containerID="0021ad5485151a52a59170f9e830ba4e2a6350c6727517b5be760350c7775e49" Feb 16 23:10:56 crc kubenswrapper[4865]: I0216 23:10:56.393084 4865 scope.go:117] "RemoveContainer" containerID="81ecac6c9d8354e6eb7b230a6c46a925002642178ac9aec6c61d853b8e1db931" Feb 16 23:10:56 crc kubenswrapper[4865]: I0216 23:10:56.444643 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mssn9"] Feb 16 23:10:56 crc kubenswrapper[4865]: I0216 23:10:56.445501 4865 scope.go:117] "RemoveContainer" containerID="805229bdee0eb91672ec8ebc087e5bf0d8a4bb3cc6971d38e9084c5a9d69de3d" Feb 16 23:10:56 crc kubenswrapper[4865]: I0216 23:10:56.459378 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mssn9"] Feb 16 23:10:56 crc kubenswrapper[4865]: I0216 23:10:56.500785 4865 scope.go:117] "RemoveContainer" containerID="0021ad5485151a52a59170f9e830ba4e2a6350c6727517b5be760350c7775e49" Feb 16 23:10:56 crc kubenswrapper[4865]: E0216 23:10:56.502105 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0021ad5485151a52a59170f9e830ba4e2a6350c6727517b5be760350c7775e49\": container with ID starting with 0021ad5485151a52a59170f9e830ba4e2a6350c6727517b5be760350c7775e49 not found: ID does not exist" containerID="0021ad5485151a52a59170f9e830ba4e2a6350c6727517b5be760350c7775e49" Feb 16 23:10:56 crc kubenswrapper[4865]: I0216 23:10:56.502155 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0021ad5485151a52a59170f9e830ba4e2a6350c6727517b5be760350c7775e49"} err="failed to get container status \"0021ad5485151a52a59170f9e830ba4e2a6350c6727517b5be760350c7775e49\": rpc error: code = NotFound desc = could not find container 
\"0021ad5485151a52a59170f9e830ba4e2a6350c6727517b5be760350c7775e49\": container with ID starting with 0021ad5485151a52a59170f9e830ba4e2a6350c6727517b5be760350c7775e49 not found: ID does not exist" Feb 16 23:10:56 crc kubenswrapper[4865]: I0216 23:10:56.502179 4865 scope.go:117] "RemoveContainer" containerID="81ecac6c9d8354e6eb7b230a6c46a925002642178ac9aec6c61d853b8e1db931" Feb 16 23:10:56 crc kubenswrapper[4865]: E0216 23:10:56.502945 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81ecac6c9d8354e6eb7b230a6c46a925002642178ac9aec6c61d853b8e1db931\": container with ID starting with 81ecac6c9d8354e6eb7b230a6c46a925002642178ac9aec6c61d853b8e1db931 not found: ID does not exist" containerID="81ecac6c9d8354e6eb7b230a6c46a925002642178ac9aec6c61d853b8e1db931" Feb 16 23:10:56 crc kubenswrapper[4865]: I0216 23:10:56.502969 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81ecac6c9d8354e6eb7b230a6c46a925002642178ac9aec6c61d853b8e1db931"} err="failed to get container status \"81ecac6c9d8354e6eb7b230a6c46a925002642178ac9aec6c61d853b8e1db931\": rpc error: code = NotFound desc = could not find container \"81ecac6c9d8354e6eb7b230a6c46a925002642178ac9aec6c61d853b8e1db931\": container with ID starting with 81ecac6c9d8354e6eb7b230a6c46a925002642178ac9aec6c61d853b8e1db931 not found: ID does not exist" Feb 16 23:10:56 crc kubenswrapper[4865]: I0216 23:10:56.504168 4865 scope.go:117] "RemoveContainer" containerID="805229bdee0eb91672ec8ebc087e5bf0d8a4bb3cc6971d38e9084c5a9d69de3d" Feb 16 23:10:56 crc kubenswrapper[4865]: E0216 23:10:56.504504 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"805229bdee0eb91672ec8ebc087e5bf0d8a4bb3cc6971d38e9084c5a9d69de3d\": container with ID starting with 805229bdee0eb91672ec8ebc087e5bf0d8a4bb3cc6971d38e9084c5a9d69de3d not found: ID does not exist" 
containerID="805229bdee0eb91672ec8ebc087e5bf0d8a4bb3cc6971d38e9084c5a9d69de3d" Feb 16 23:10:56 crc kubenswrapper[4865]: I0216 23:10:56.504542 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"805229bdee0eb91672ec8ebc087e5bf0d8a4bb3cc6971d38e9084c5a9d69de3d"} err="failed to get container status \"805229bdee0eb91672ec8ebc087e5bf0d8a4bb3cc6971d38e9084c5a9d69de3d\": rpc error: code = NotFound desc = could not find container \"805229bdee0eb91672ec8ebc087e5bf0d8a4bb3cc6971d38e9084c5a9d69de3d\": container with ID starting with 805229bdee0eb91672ec8ebc087e5bf0d8a4bb3cc6971d38e9084c5a9d69de3d not found: ID does not exist" Feb 16 23:10:58 crc kubenswrapper[4865]: I0216 23:10:58.425571 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a830bc39-064f-4a96-a6ee-722d32dd8ed9" path="/var/lib/kubelet/pods/a830bc39-064f-4a96-a6ee-722d32dd8ed9/volumes" Feb 16 23:11:03 crc kubenswrapper[4865]: I0216 23:11:03.210140 4865 scope.go:117] "RemoveContainer" containerID="2814a696863ed65e87220948af64bcc32663881cdf6e71652298aea112be109e" Feb 16 23:11:03 crc kubenswrapper[4865]: I0216 23:11:03.244960 4865 scope.go:117] "RemoveContainer" containerID="dcfde899eb7c86d6c47684aa3e3841f201dc92ccc23dbc31377bd673066f16a0" Feb 16 23:11:03 crc kubenswrapper[4865]: I0216 23:11:03.310706 4865 scope.go:117] "RemoveContainer" containerID="7144389fd30208655c18a14f80449c10bc0a17723877ead9264f1674d6b56851" Feb 16 23:11:03 crc kubenswrapper[4865]: I0216 23:11:03.349332 4865 scope.go:117] "RemoveContainer" containerID="5d9059253f4544b49c6fb0f695b380e1cce4e73273b4bf3d2eb40e7d7c17dbce" Feb 16 23:11:03 crc kubenswrapper[4865]: I0216 23:11:03.396853 4865 scope.go:117] "RemoveContainer" containerID="2e6e93dc6167c7be44fdb8310b68b3451d0044e07b5d4fc87aa189a5bf306c48" Feb 16 23:11:15 crc kubenswrapper[4865]: I0216 23:11:15.664879 4865 patch_prober.go:28] interesting pod/machine-config-daemon-7sl6f container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 23:11:15 crc kubenswrapper[4865]: I0216 23:11:15.665644 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 23:11:45 crc kubenswrapper[4865]: I0216 23:11:45.664742 4865 patch_prober.go:28] interesting pod/machine-config-daemon-7sl6f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 23:11:45 crc kubenswrapper[4865]: I0216 23:11:45.665212 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 23:11:45 crc kubenswrapper[4865]: I0216 23:11:45.665251 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" Feb 16 23:11:45 crc kubenswrapper[4865]: I0216 23:11:45.665971 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5cd852104406ee38cbc84897bbc06d76f939393ce558cd88a802e395e3a4e39b"} pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 23:11:45 crc 
kubenswrapper[4865]: I0216 23:11:45.666022 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" containerName="machine-config-daemon" containerID="cri-o://5cd852104406ee38cbc84897bbc06d76f939393ce558cd88a802e395e3a4e39b" gracePeriod=600 Feb 16 23:11:45 crc kubenswrapper[4865]: E0216 23:11:45.798922 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:11:45 crc kubenswrapper[4865]: I0216 23:11:45.974418 4865 generic.go:334] "Generic (PLEG): container finished" podID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" containerID="5cd852104406ee38cbc84897bbc06d76f939393ce558cd88a802e395e3a4e39b" exitCode=0 Feb 16 23:11:45 crc kubenswrapper[4865]: I0216 23:11:45.974490 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" event={"ID":"af5ee041-5763-4a28-9d12-7ba21bbb9dbc","Type":"ContainerDied","Data":"5cd852104406ee38cbc84897bbc06d76f939393ce558cd88a802e395e3a4e39b"} Feb 16 23:11:45 crc kubenswrapper[4865]: I0216 23:11:45.974527 4865 scope.go:117] "RemoveContainer" containerID="32daf57e4fb7661dfc4ca72f088e0b8d88b3c260d4b2b6cc44cc118921a811c2" Feb 16 23:11:45 crc kubenswrapper[4865]: I0216 23:11:45.975686 4865 scope.go:117] "RemoveContainer" containerID="5cd852104406ee38cbc84897bbc06d76f939393ce558cd88a802e395e3a4e39b" Feb 16 23:11:45 crc kubenswrapper[4865]: E0216 23:11:45.976363 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:11:59 crc kubenswrapper[4865]: I0216 23:11:59.135146 4865 generic.go:334] "Generic (PLEG): container finished" podID="24da9b19-2d45-4f18-a79e-bf378e4ee44d" containerID="04c6f7be7c4ff55f39542591e1cf3186a9f8a174f6d2272126287fb8bd60101b" exitCode=0 Feb 16 23:11:59 crc kubenswrapper[4865]: I0216 23:11:59.135518 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-js45z" event={"ID":"24da9b19-2d45-4f18-a79e-bf378e4ee44d","Type":"ContainerDied","Data":"04c6f7be7c4ff55f39542591e1cf3186a9f8a174f6d2272126287fb8bd60101b"} Feb 16 23:11:59 crc kubenswrapper[4865]: I0216 23:11:59.414603 4865 scope.go:117] "RemoveContainer" containerID="5cd852104406ee38cbc84897bbc06d76f939393ce558cd88a802e395e3a4e39b" Feb 16 23:11:59 crc kubenswrapper[4865]: E0216 23:11:59.414987 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:12:00 crc kubenswrapper[4865]: I0216 23:12:00.699646 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-js45z" Feb 16 23:12:00 crc kubenswrapper[4865]: I0216 23:12:00.880067 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4zbk\" (UniqueName: \"kubernetes.io/projected/24da9b19-2d45-4f18-a79e-bf378e4ee44d-kube-api-access-s4zbk\") pod \"24da9b19-2d45-4f18-a79e-bf378e4ee44d\" (UID: \"24da9b19-2d45-4f18-a79e-bf378e4ee44d\") " Feb 16 23:12:00 crc kubenswrapper[4865]: I0216 23:12:00.880153 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24da9b19-2d45-4f18-a79e-bf378e4ee44d-bootstrap-combined-ca-bundle\") pod \"24da9b19-2d45-4f18-a79e-bf378e4ee44d\" (UID: \"24da9b19-2d45-4f18-a79e-bf378e4ee44d\") " Feb 16 23:12:00 crc kubenswrapper[4865]: I0216 23:12:00.880226 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24da9b19-2d45-4f18-a79e-bf378e4ee44d-inventory\") pod \"24da9b19-2d45-4f18-a79e-bf378e4ee44d\" (UID: \"24da9b19-2d45-4f18-a79e-bf378e4ee44d\") " Feb 16 23:12:00 crc kubenswrapper[4865]: I0216 23:12:00.880371 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/24da9b19-2d45-4f18-a79e-bf378e4ee44d-ssh-key-openstack-edpm-ipam\") pod \"24da9b19-2d45-4f18-a79e-bf378e4ee44d\" (UID: \"24da9b19-2d45-4f18-a79e-bf378e4ee44d\") " Feb 16 23:12:00 crc kubenswrapper[4865]: I0216 23:12:00.886071 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24da9b19-2d45-4f18-a79e-bf378e4ee44d-kube-api-access-s4zbk" (OuterVolumeSpecName: "kube-api-access-s4zbk") pod "24da9b19-2d45-4f18-a79e-bf378e4ee44d" (UID: "24da9b19-2d45-4f18-a79e-bf378e4ee44d"). InnerVolumeSpecName "kube-api-access-s4zbk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:12:00 crc kubenswrapper[4865]: I0216 23:12:00.896551 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24da9b19-2d45-4f18-a79e-bf378e4ee44d-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "24da9b19-2d45-4f18-a79e-bf378e4ee44d" (UID: "24da9b19-2d45-4f18-a79e-bf378e4ee44d"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:12:00 crc kubenswrapper[4865]: I0216 23:12:00.908713 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24da9b19-2d45-4f18-a79e-bf378e4ee44d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "24da9b19-2d45-4f18-a79e-bf378e4ee44d" (UID: "24da9b19-2d45-4f18-a79e-bf378e4ee44d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:12:00 crc kubenswrapper[4865]: I0216 23:12:00.909191 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24da9b19-2d45-4f18-a79e-bf378e4ee44d-inventory" (OuterVolumeSpecName: "inventory") pod "24da9b19-2d45-4f18-a79e-bf378e4ee44d" (UID: "24da9b19-2d45-4f18-a79e-bf378e4ee44d"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:12:00 crc kubenswrapper[4865]: I0216 23:12:00.983204 4865 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24da9b19-2d45-4f18-a79e-bf378e4ee44d-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 23:12:00 crc kubenswrapper[4865]: I0216 23:12:00.983250 4865 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24da9b19-2d45-4f18-a79e-bf378e4ee44d-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 23:12:00 crc kubenswrapper[4865]: I0216 23:12:00.983263 4865 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/24da9b19-2d45-4f18-a79e-bf378e4ee44d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 23:12:00 crc kubenswrapper[4865]: I0216 23:12:00.983322 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4zbk\" (UniqueName: \"kubernetes.io/projected/24da9b19-2d45-4f18-a79e-bf378e4ee44d-kube-api-access-s4zbk\") on node \"crc\" DevicePath \"\"" Feb 16 23:12:01 crc kubenswrapper[4865]: I0216 23:12:01.154549 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-js45z" event={"ID":"24da9b19-2d45-4f18-a79e-bf378e4ee44d","Type":"ContainerDied","Data":"fb7f5aeb730e7d474b18e6e5f9c972b33800973f580ead0380937cf57858772d"} Feb 16 23:12:01 crc kubenswrapper[4865]: I0216 23:12:01.154596 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb7f5aeb730e7d474b18e6e5f9c972b33800973f580ead0380937cf57858772d" Feb 16 23:12:01 crc kubenswrapper[4865]: I0216 23:12:01.154615 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-js45z" Feb 16 23:12:01 crc kubenswrapper[4865]: I0216 23:12:01.286153 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rwzxd"] Feb 16 23:12:01 crc kubenswrapper[4865]: E0216 23:12:01.287184 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a830bc39-064f-4a96-a6ee-722d32dd8ed9" containerName="registry-server" Feb 16 23:12:01 crc kubenswrapper[4865]: I0216 23:12:01.287233 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="a830bc39-064f-4a96-a6ee-722d32dd8ed9" containerName="registry-server" Feb 16 23:12:01 crc kubenswrapper[4865]: E0216 23:12:01.287368 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a830bc39-064f-4a96-a6ee-722d32dd8ed9" containerName="extract-content" Feb 16 23:12:01 crc kubenswrapper[4865]: I0216 23:12:01.287381 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="a830bc39-064f-4a96-a6ee-722d32dd8ed9" containerName="extract-content" Feb 16 23:12:01 crc kubenswrapper[4865]: E0216 23:12:01.287438 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24da9b19-2d45-4f18-a79e-bf378e4ee44d" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 16 23:12:01 crc kubenswrapper[4865]: I0216 23:12:01.287448 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="24da9b19-2d45-4f18-a79e-bf378e4ee44d" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 16 23:12:01 crc kubenswrapper[4865]: E0216 23:12:01.287462 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a830bc39-064f-4a96-a6ee-722d32dd8ed9" containerName="extract-utilities" Feb 16 23:12:01 crc kubenswrapper[4865]: I0216 23:12:01.287472 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="a830bc39-064f-4a96-a6ee-722d32dd8ed9" containerName="extract-utilities" Feb 16 23:12:01 crc kubenswrapper[4865]: I0216 23:12:01.287845 
4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="24da9b19-2d45-4f18-a79e-bf378e4ee44d" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 16 23:12:01 crc kubenswrapper[4865]: I0216 23:12:01.287868 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="a830bc39-064f-4a96-a6ee-722d32dd8ed9" containerName="registry-server" Feb 16 23:12:01 crc kubenswrapper[4865]: I0216 23:12:01.289216 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rwzxd" Feb 16 23:12:01 crc kubenswrapper[4865]: I0216 23:12:01.291542 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 23:12:01 crc kubenswrapper[4865]: I0216 23:12:01.291825 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 23:12:01 crc kubenswrapper[4865]: I0216 23:12:01.292042 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rpk98" Feb 16 23:12:01 crc kubenswrapper[4865]: I0216 23:12:01.294202 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 23:12:01 crc kubenswrapper[4865]: I0216 23:12:01.297917 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rwzxd"] Feb 16 23:12:01 crc kubenswrapper[4865]: I0216 23:12:01.389806 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f588g\" (UniqueName: \"kubernetes.io/projected/45600784-63ad-4273-ab6d-5732fc0988e6-kube-api-access-f588g\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-rwzxd\" (UID: \"45600784-63ad-4273-ab6d-5732fc0988e6\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rwzxd" Feb 16 23:12:01 crc 
kubenswrapper[4865]: I0216 23:12:01.390457 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/45600784-63ad-4273-ab6d-5732fc0988e6-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-rwzxd\" (UID: \"45600784-63ad-4273-ab6d-5732fc0988e6\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rwzxd" Feb 16 23:12:01 crc kubenswrapper[4865]: I0216 23:12:01.390746 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/45600784-63ad-4273-ab6d-5732fc0988e6-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-rwzxd\" (UID: \"45600784-63ad-4273-ab6d-5732fc0988e6\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rwzxd" Feb 16 23:12:01 crc kubenswrapper[4865]: I0216 23:12:01.492642 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/45600784-63ad-4273-ab6d-5732fc0988e6-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-rwzxd\" (UID: \"45600784-63ad-4273-ab6d-5732fc0988e6\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rwzxd" Feb 16 23:12:01 crc kubenswrapper[4865]: I0216 23:12:01.493174 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/45600784-63ad-4273-ab6d-5732fc0988e6-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-rwzxd\" (UID: \"45600784-63ad-4273-ab6d-5732fc0988e6\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rwzxd" Feb 16 23:12:01 crc kubenswrapper[4865]: I0216 23:12:01.493426 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f588g\" 
(UniqueName: \"kubernetes.io/projected/45600784-63ad-4273-ab6d-5732fc0988e6-kube-api-access-f588g\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-rwzxd\" (UID: \"45600784-63ad-4273-ab6d-5732fc0988e6\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rwzxd" Feb 16 23:12:01 crc kubenswrapper[4865]: I0216 23:12:01.499520 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/45600784-63ad-4273-ab6d-5732fc0988e6-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-rwzxd\" (UID: \"45600784-63ad-4273-ab6d-5732fc0988e6\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rwzxd" Feb 16 23:12:01 crc kubenswrapper[4865]: I0216 23:12:01.515566 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/45600784-63ad-4273-ab6d-5732fc0988e6-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-rwzxd\" (UID: \"45600784-63ad-4273-ab6d-5732fc0988e6\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rwzxd" Feb 16 23:12:01 crc kubenswrapper[4865]: I0216 23:12:01.521115 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f588g\" (UniqueName: \"kubernetes.io/projected/45600784-63ad-4273-ab6d-5732fc0988e6-kube-api-access-f588g\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-rwzxd\" (UID: \"45600784-63ad-4273-ab6d-5732fc0988e6\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rwzxd" Feb 16 23:12:01 crc kubenswrapper[4865]: I0216 23:12:01.615577 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rwzxd" Feb 16 23:12:02 crc kubenswrapper[4865]: I0216 23:12:02.216038 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rwzxd"] Feb 16 23:12:03 crc kubenswrapper[4865]: I0216 23:12:03.176369 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rwzxd" event={"ID":"45600784-63ad-4273-ab6d-5732fc0988e6","Type":"ContainerStarted","Data":"65903d26af565bcee2fca7d78c77eb02f6b6984cec87efb67e1eab1f38fa5fdc"} Feb 16 23:12:03 crc kubenswrapper[4865]: I0216 23:12:03.177177 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rwzxd" event={"ID":"45600784-63ad-4273-ab6d-5732fc0988e6","Type":"ContainerStarted","Data":"28be2c10c47bcbf1f95040196612faef987294cb4dd966659da35c451d47ec10"} Feb 16 23:12:03 crc kubenswrapper[4865]: I0216 23:12:03.204998 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rwzxd" podStartSLOduration=1.779212139 podStartE2EDuration="2.204968035s" podCreationTimestamp="2026-02-16 23:12:01 +0000 UTC" firstStartedPulling="2026-02-16 23:12:02.213368654 +0000 UTC m=+1562.537075655" lastFinishedPulling="2026-02-16 23:12:02.63912459 +0000 UTC m=+1562.962831551" observedRunningTime="2026-02-16 23:12:03.200686805 +0000 UTC m=+1563.524393806" watchObservedRunningTime="2026-02-16 23:12:03.204968035 +0000 UTC m=+1563.528675036" Feb 16 23:12:11 crc kubenswrapper[4865]: I0216 23:12:11.414649 4865 scope.go:117] "RemoveContainer" containerID="5cd852104406ee38cbc84897bbc06d76f939393ce558cd88a802e395e3a4e39b" Feb 16 23:12:11 crc kubenswrapper[4865]: E0216 23:12:11.415652 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:12:23 crc kubenswrapper[4865]: I0216 23:12:23.416076 4865 scope.go:117] "RemoveContainer" containerID="5cd852104406ee38cbc84897bbc06d76f939393ce558cd88a802e395e3a4e39b" Feb 16 23:12:23 crc kubenswrapper[4865]: E0216 23:12:23.417424 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:12:37 crc kubenswrapper[4865]: I0216 23:12:37.415756 4865 scope.go:117] "RemoveContainer" containerID="5cd852104406ee38cbc84897bbc06d76f939393ce558cd88a802e395e3a4e39b" Feb 16 23:12:37 crc kubenswrapper[4865]: E0216 23:12:37.416820 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:12:51 crc kubenswrapper[4865]: I0216 23:12:51.414656 4865 scope.go:117] "RemoveContainer" containerID="5cd852104406ee38cbc84897bbc06d76f939393ce558cd88a802e395e3a4e39b" Feb 16 23:12:51 crc kubenswrapper[4865]: E0216 23:12:51.415368 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:13:02 crc kubenswrapper[4865]: I0216 23:13:02.417939 4865 scope.go:117] "RemoveContainer" containerID="5cd852104406ee38cbc84897bbc06d76f939393ce558cd88a802e395e3a4e39b" Feb 16 23:13:02 crc kubenswrapper[4865]: E0216 23:13:02.419199 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:13:08 crc kubenswrapper[4865]: I0216 23:13:08.043573 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-bvlkv"] Feb 16 23:13:08 crc kubenswrapper[4865]: I0216 23:13:08.052393 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-c32f-account-create-update-rf5cg"] Feb 16 23:13:08 crc kubenswrapper[4865]: I0216 23:13:08.060520 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-c32f-account-create-update-rf5cg"] Feb 16 23:13:08 crc kubenswrapper[4865]: I0216 23:13:08.068403 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-bvlkv"] Feb 16 23:13:08 crc kubenswrapper[4865]: I0216 23:13:08.426088 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e4a6413-5a03-450d-8bb2-abf70fdead46" path="/var/lib/kubelet/pods/0e4a6413-5a03-450d-8bb2-abf70fdead46/volumes" Feb 16 23:13:08 crc kubenswrapper[4865]: I0216 23:13:08.426796 4865 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="effc5df0-2f7c-4d99-bf39-db1aa0c22c24" path="/var/lib/kubelet/pods/effc5df0-2f7c-4d99-bf39-db1aa0c22c24/volumes" Feb 16 23:13:13 crc kubenswrapper[4865]: I0216 23:13:13.034190 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-mzw96"] Feb 16 23:13:13 crc kubenswrapper[4865]: I0216 23:13:13.041745 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-mzw96"] Feb 16 23:13:13 crc kubenswrapper[4865]: I0216 23:13:13.415639 4865 scope.go:117] "RemoveContainer" containerID="5cd852104406ee38cbc84897bbc06d76f939393ce558cd88a802e395e3a4e39b" Feb 16 23:13:13 crc kubenswrapper[4865]: E0216 23:13:13.415954 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:13:14 crc kubenswrapper[4865]: I0216 23:13:14.031466 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-82d6-account-create-update-2499b"] Feb 16 23:13:14 crc kubenswrapper[4865]: I0216 23:13:14.041082 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-82d6-account-create-update-2499b"] Feb 16 23:13:14 crc kubenswrapper[4865]: I0216 23:13:14.450458 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05a77c1b-59aa-4d46-8bb1-2bd3027ea3e9" path="/var/lib/kubelet/pods/05a77c1b-59aa-4d46-8bb1-2bd3027ea3e9/volumes" Feb 16 23:13:14 crc kubenswrapper[4865]: I0216 23:13:14.451150 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3421511-356b-4a61-ac23-fa0915c8a6df" 
path="/var/lib/kubelet/pods/b3421511-356b-4a61-ac23-fa0915c8a6df/volumes" Feb 16 23:13:15 crc kubenswrapper[4865]: I0216 23:13:15.046474 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-2287x"] Feb 16 23:13:15 crc kubenswrapper[4865]: I0216 23:13:15.060037 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-93e7-account-create-update-2v9fc"] Feb 16 23:13:15 crc kubenswrapper[4865]: I0216 23:13:15.073665 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-2287x"] Feb 16 23:13:15 crc kubenswrapper[4865]: I0216 23:13:15.088327 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-93e7-account-create-update-2v9fc"] Feb 16 23:13:16 crc kubenswrapper[4865]: I0216 23:13:16.426729 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49da7ade-9fd4-4cb1-a47a-4a07a038a7e8" path="/var/lib/kubelet/pods/49da7ade-9fd4-4cb1-a47a-4a07a038a7e8/volumes" Feb 16 23:13:16 crc kubenswrapper[4865]: I0216 23:13:16.427914 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c7cbf3b-ff30-4ffa-a6ce-1df0143bdade" path="/var/lib/kubelet/pods/6c7cbf3b-ff30-4ffa-a6ce-1df0143bdade/volumes" Feb 16 23:13:24 crc kubenswrapper[4865]: I0216 23:13:24.415581 4865 scope.go:117] "RemoveContainer" containerID="5cd852104406ee38cbc84897bbc06d76f939393ce558cd88a802e395e3a4e39b" Feb 16 23:13:24 crc kubenswrapper[4865]: E0216 23:13:24.416649 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:13:28 crc kubenswrapper[4865]: I0216 23:13:28.140312 4865 
generic.go:334] "Generic (PLEG): container finished" podID="45600784-63ad-4273-ab6d-5732fc0988e6" containerID="65903d26af565bcee2fca7d78c77eb02f6b6984cec87efb67e1eab1f38fa5fdc" exitCode=0 Feb 16 23:13:28 crc kubenswrapper[4865]: I0216 23:13:28.140578 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rwzxd" event={"ID":"45600784-63ad-4273-ab6d-5732fc0988e6","Type":"ContainerDied","Data":"65903d26af565bcee2fca7d78c77eb02f6b6984cec87efb67e1eab1f38fa5fdc"} Feb 16 23:13:29 crc kubenswrapper[4865]: I0216 23:13:29.560646 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rwzxd" Feb 16 23:13:29 crc kubenswrapper[4865]: I0216 23:13:29.599655 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/45600784-63ad-4273-ab6d-5732fc0988e6-inventory\") pod \"45600784-63ad-4273-ab6d-5732fc0988e6\" (UID: \"45600784-63ad-4273-ab6d-5732fc0988e6\") " Feb 16 23:13:29 crc kubenswrapper[4865]: I0216 23:13:29.599970 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/45600784-63ad-4273-ab6d-5732fc0988e6-ssh-key-openstack-edpm-ipam\") pod \"45600784-63ad-4273-ab6d-5732fc0988e6\" (UID: \"45600784-63ad-4273-ab6d-5732fc0988e6\") " Feb 16 23:13:29 crc kubenswrapper[4865]: I0216 23:13:29.600047 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f588g\" (UniqueName: \"kubernetes.io/projected/45600784-63ad-4273-ab6d-5732fc0988e6-kube-api-access-f588g\") pod \"45600784-63ad-4273-ab6d-5732fc0988e6\" (UID: \"45600784-63ad-4273-ab6d-5732fc0988e6\") " Feb 16 23:13:29 crc kubenswrapper[4865]: I0216 23:13:29.615972 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/45600784-63ad-4273-ab6d-5732fc0988e6-kube-api-access-f588g" (OuterVolumeSpecName: "kube-api-access-f588g") pod "45600784-63ad-4273-ab6d-5732fc0988e6" (UID: "45600784-63ad-4273-ab6d-5732fc0988e6"). InnerVolumeSpecName "kube-api-access-f588g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:13:29 crc kubenswrapper[4865]: I0216 23:13:29.626953 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45600784-63ad-4273-ab6d-5732fc0988e6-inventory" (OuterVolumeSpecName: "inventory") pod "45600784-63ad-4273-ab6d-5732fc0988e6" (UID: "45600784-63ad-4273-ab6d-5732fc0988e6"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:13:29 crc kubenswrapper[4865]: I0216 23:13:29.632924 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45600784-63ad-4273-ab6d-5732fc0988e6-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "45600784-63ad-4273-ab6d-5732fc0988e6" (UID: "45600784-63ad-4273-ab6d-5732fc0988e6"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:13:29 crc kubenswrapper[4865]: I0216 23:13:29.703027 4865 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/45600784-63ad-4273-ab6d-5732fc0988e6-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 23:13:29 crc kubenswrapper[4865]: I0216 23:13:29.703112 4865 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/45600784-63ad-4273-ab6d-5732fc0988e6-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 23:13:29 crc kubenswrapper[4865]: I0216 23:13:29.703132 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f588g\" (UniqueName: \"kubernetes.io/projected/45600784-63ad-4273-ab6d-5732fc0988e6-kube-api-access-f588g\") on node \"crc\" DevicePath \"\"" Feb 16 23:13:30 crc kubenswrapper[4865]: I0216 23:13:30.052787 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-j2d77"] Feb 16 23:13:30 crc kubenswrapper[4865]: I0216 23:13:30.066120 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-j2d77"] Feb 16 23:13:30 crc kubenswrapper[4865]: I0216 23:13:30.158848 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rwzxd" event={"ID":"45600784-63ad-4273-ab6d-5732fc0988e6","Type":"ContainerDied","Data":"28be2c10c47bcbf1f95040196612faef987294cb4dd966659da35c451d47ec10"} Feb 16 23:13:30 crc kubenswrapper[4865]: I0216 23:13:30.159109 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28be2c10c47bcbf1f95040196612faef987294cb4dd966659da35c451d47ec10" Feb 16 23:13:30 crc kubenswrapper[4865]: I0216 23:13:30.158887 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rwzxd" Feb 16 23:13:30 crc kubenswrapper[4865]: I0216 23:13:30.269004 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-p5tqh"] Feb 16 23:13:30 crc kubenswrapper[4865]: E0216 23:13:30.269444 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45600784-63ad-4273-ab6d-5732fc0988e6" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 23:13:30 crc kubenswrapper[4865]: I0216 23:13:30.269468 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="45600784-63ad-4273-ab6d-5732fc0988e6" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 23:13:30 crc kubenswrapper[4865]: I0216 23:13:30.269738 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="45600784-63ad-4273-ab6d-5732fc0988e6" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 16 23:13:30 crc kubenswrapper[4865]: I0216 23:13:30.270464 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-p5tqh" Feb 16 23:13:30 crc kubenswrapper[4865]: I0216 23:13:30.272495 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rpk98" Feb 16 23:13:30 crc kubenswrapper[4865]: I0216 23:13:30.272581 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 23:13:30 crc kubenswrapper[4865]: I0216 23:13:30.272729 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 23:13:30 crc kubenswrapper[4865]: I0216 23:13:30.272797 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 23:13:30 crc kubenswrapper[4865]: I0216 23:13:30.279808 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-p5tqh"] Feb 16 23:13:30 crc kubenswrapper[4865]: I0216 23:13:30.313311 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6b8r7\" (UniqueName: \"kubernetes.io/projected/4e39dd59-456f-42dd-bc53-254730e44297-kube-api-access-6b8r7\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-p5tqh\" (UID: \"4e39dd59-456f-42dd-bc53-254730e44297\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-p5tqh" Feb 16 23:13:30 crc kubenswrapper[4865]: I0216 23:13:30.313566 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4e39dd59-456f-42dd-bc53-254730e44297-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-p5tqh\" (UID: \"4e39dd59-456f-42dd-bc53-254730e44297\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-p5tqh" Feb 16 23:13:30 crc kubenswrapper[4865]: 
I0216 23:13:30.313666 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4e39dd59-456f-42dd-bc53-254730e44297-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-p5tqh\" (UID: \"4e39dd59-456f-42dd-bc53-254730e44297\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-p5tqh" Feb 16 23:13:30 crc kubenswrapper[4865]: I0216 23:13:30.415478 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6b8r7\" (UniqueName: \"kubernetes.io/projected/4e39dd59-456f-42dd-bc53-254730e44297-kube-api-access-6b8r7\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-p5tqh\" (UID: \"4e39dd59-456f-42dd-bc53-254730e44297\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-p5tqh" Feb 16 23:13:30 crc kubenswrapper[4865]: I0216 23:13:30.415609 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4e39dd59-456f-42dd-bc53-254730e44297-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-p5tqh\" (UID: \"4e39dd59-456f-42dd-bc53-254730e44297\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-p5tqh" Feb 16 23:13:30 crc kubenswrapper[4865]: I0216 23:13:30.415656 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4e39dd59-456f-42dd-bc53-254730e44297-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-p5tqh\" (UID: \"4e39dd59-456f-42dd-bc53-254730e44297\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-p5tqh" Feb 16 23:13:30 crc kubenswrapper[4865]: I0216 23:13:30.423607 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" 
(UniqueName: \"kubernetes.io/secret/4e39dd59-456f-42dd-bc53-254730e44297-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-p5tqh\" (UID: \"4e39dd59-456f-42dd-bc53-254730e44297\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-p5tqh" Feb 16 23:13:30 crc kubenswrapper[4865]: I0216 23:13:30.424168 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4e39dd59-456f-42dd-bc53-254730e44297-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-p5tqh\" (UID: \"4e39dd59-456f-42dd-bc53-254730e44297\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-p5tqh" Feb 16 23:13:30 crc kubenswrapper[4865]: I0216 23:13:30.434134 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6b8r7\" (UniqueName: \"kubernetes.io/projected/4e39dd59-456f-42dd-bc53-254730e44297-kube-api-access-6b8r7\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-p5tqh\" (UID: \"4e39dd59-456f-42dd-bc53-254730e44297\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-p5tqh" Feb 16 23:13:30 crc kubenswrapper[4865]: I0216 23:13:30.439299 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce6284b9-07ef-4e41-b832-48c1addaf092" path="/var/lib/kubelet/pods/ce6284b9-07ef-4e41-b832-48c1addaf092/volumes" Feb 16 23:13:30 crc kubenswrapper[4865]: I0216 23:13:30.585164 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-p5tqh" Feb 16 23:13:30 crc kubenswrapper[4865]: I0216 23:13:30.951148 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-p5tqh"] Feb 16 23:13:30 crc kubenswrapper[4865]: I0216 23:13:30.953246 4865 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 23:13:31 crc kubenswrapper[4865]: I0216 23:13:31.169088 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-p5tqh" event={"ID":"4e39dd59-456f-42dd-bc53-254730e44297","Type":"ContainerStarted","Data":"5898480f9c2538171e254554ef0cc84ef2501db1d651b1436cf7fa879bda8fbb"} Feb 16 23:13:32 crc kubenswrapper[4865]: I0216 23:13:32.184446 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-p5tqh" event={"ID":"4e39dd59-456f-42dd-bc53-254730e44297","Type":"ContainerStarted","Data":"cb43b8decebae3a3dda6ed50c67b802dfe3948edde4635e6fe2df49a9756d9c2"} Feb 16 23:13:32 crc kubenswrapper[4865]: I0216 23:13:32.216903 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-p5tqh" podStartSLOduration=1.761857718 podStartE2EDuration="2.216875655s" podCreationTimestamp="2026-02-16 23:13:30 +0000 UTC" firstStartedPulling="2026-02-16 23:13:30.952837312 +0000 UTC m=+1651.276544313" lastFinishedPulling="2026-02-16 23:13:31.407855289 +0000 UTC m=+1651.731562250" observedRunningTime="2026-02-16 23:13:32.203405716 +0000 UTC m=+1652.527112717" watchObservedRunningTime="2026-02-16 23:13:32.216875655 +0000 UTC m=+1652.540582616" Feb 16 23:13:35 crc kubenswrapper[4865]: I0216 23:13:35.059520 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-2dd9q"] Feb 16 23:13:35 crc 
kubenswrapper[4865]: I0216 23:13:35.080660 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-2dd9q"] Feb 16 23:13:35 crc kubenswrapper[4865]: I0216 23:13:35.414851 4865 scope.go:117] "RemoveContainer" containerID="5cd852104406ee38cbc84897bbc06d76f939393ce558cd88a802e395e3a4e39b" Feb 16 23:13:35 crc kubenswrapper[4865]: E0216 23:13:35.415399 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:13:36 crc kubenswrapper[4865]: I0216 23:13:36.426187 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38c44ae1-9f36-4df8-ba88-443ca78fe47a" path="/var/lib/kubelet/pods/38c44ae1-9f36-4df8-ba88-443ca78fe47a/volumes" Feb 16 23:13:48 crc kubenswrapper[4865]: I0216 23:13:48.414707 4865 scope.go:117] "RemoveContainer" containerID="5cd852104406ee38cbc84897bbc06d76f939393ce558cd88a802e395e3a4e39b" Feb 16 23:13:48 crc kubenswrapper[4865]: E0216 23:13:48.415459 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:13:55 crc kubenswrapper[4865]: I0216 23:13:55.069399 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-z9hq9"] Feb 16 23:13:55 crc kubenswrapper[4865]: I0216 23:13:55.086170 4865 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/cinder-d54b-account-create-update-zmm8f"] Feb 16 23:13:55 crc kubenswrapper[4865]: I0216 23:13:55.096370 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-fd20-account-create-update-glcp6"] Feb 16 23:13:55 crc kubenswrapper[4865]: I0216 23:13:55.106113 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-z9hq9"] Feb 16 23:13:55 crc kubenswrapper[4865]: I0216 23:13:55.115507 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-bb7dg"] Feb 16 23:13:55 crc kubenswrapper[4865]: I0216 23:13:55.122251 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-d54b-account-create-update-zmm8f"] Feb 16 23:13:55 crc kubenswrapper[4865]: I0216 23:13:55.128331 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-fd20-account-create-update-glcp6"] Feb 16 23:13:55 crc kubenswrapper[4865]: I0216 23:13:55.134684 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-a307-account-create-update-fccrh"] Feb 16 23:13:55 crc kubenswrapper[4865]: I0216 23:13:55.158650 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-h7whk"] Feb 16 23:13:55 crc kubenswrapper[4865]: I0216 23:13:55.170710 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-h7whk"] Feb 16 23:13:55 crc kubenswrapper[4865]: I0216 23:13:55.183076 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-a307-account-create-update-fccrh"] Feb 16 23:13:55 crc kubenswrapper[4865]: I0216 23:13:55.192646 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-bb7dg"] Feb 16 23:13:56 crc kubenswrapper[4865]: I0216 23:13:56.430253 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1df01bc8-2180-4799-b5bc-786690440fca" path="/var/lib/kubelet/pods/1df01bc8-2180-4799-b5bc-786690440fca/volumes" 
Feb 16 23:13:56 crc kubenswrapper[4865]: I0216 23:13:56.430892 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="277094e2-0b6c-42e2-bcc6-d6afebb1bec1" path="/var/lib/kubelet/pods/277094e2-0b6c-42e2-bcc6-d6afebb1bec1/volumes" Feb 16 23:13:56 crc kubenswrapper[4865]: I0216 23:13:56.431463 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7bfd6f5-8048-45ee-a942-dd66d72bcf0e" path="/var/lib/kubelet/pods/d7bfd6f5-8048-45ee-a942-dd66d72bcf0e/volumes" Feb 16 23:13:56 crc kubenswrapper[4865]: I0216 23:13:56.431976 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9bf5a97-df75-436c-b2bf-6b64b55a071e" path="/var/lib/kubelet/pods/d9bf5a97-df75-436c-b2bf-6b64b55a071e/volumes" Feb 16 23:13:56 crc kubenswrapper[4865]: I0216 23:13:56.433194 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5828317-93f7-47f3-9769-8dae9b438530" path="/var/lib/kubelet/pods/f5828317-93f7-47f3-9769-8dae9b438530/volumes" Feb 16 23:13:56 crc kubenswrapper[4865]: I0216 23:13:56.433723 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f977bdae-bdb8-4a49-83e1-55e7264f274b" path="/var/lib/kubelet/pods/f977bdae-bdb8-4a49-83e1-55e7264f274b/volumes" Feb 16 23:14:01 crc kubenswrapper[4865]: I0216 23:14:01.064739 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-4mdbk"] Feb 16 23:14:01 crc kubenswrapper[4865]: I0216 23:14:01.080203 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-4mdbk"] Feb 16 23:14:01 crc kubenswrapper[4865]: I0216 23:14:01.414892 4865 scope.go:117] "RemoveContainer" containerID="5cd852104406ee38cbc84897bbc06d76f939393ce558cd88a802e395e3a4e39b" Feb 16 23:14:01 crc kubenswrapper[4865]: E0216 23:14:01.415520 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:14:02 crc kubenswrapper[4865]: I0216 23:14:02.432929 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9477b3a0-b4e2-4315-ba8a-37d389880da9" path="/var/lib/kubelet/pods/9477b3a0-b4e2-4315-ba8a-37d389880da9/volumes" Feb 16 23:14:03 crc kubenswrapper[4865]: I0216 23:14:03.566114 4865 scope.go:117] "RemoveContainer" containerID="d1795669094c5dddb6f15dff9d5695e6bdf3dc959b6848bde525cbd6406b7037" Feb 16 23:14:03 crc kubenswrapper[4865]: I0216 23:14:03.603183 4865 scope.go:117] "RemoveContainer" containerID="cae98d5541f321e40ea17149133353d092fe67ab34a2fec1e45f3053bc089a81" Feb 16 23:14:03 crc kubenswrapper[4865]: I0216 23:14:03.667545 4865 scope.go:117] "RemoveContainer" containerID="a9c4de3fb21556c2aaa1d161168bfe835b1b22b7a92fbd9783a629a2de02b2f7" Feb 16 23:14:03 crc kubenswrapper[4865]: I0216 23:14:03.715447 4865 scope.go:117] "RemoveContainer" containerID="eddcaedee1dd4f4cf42b4f6c00dcce90f2a9ee23321de00874638723e79f0fc0" Feb 16 23:14:03 crc kubenswrapper[4865]: I0216 23:14:03.757454 4865 scope.go:117] "RemoveContainer" containerID="1fbbce23d24ee24a5d38655a3a93db576cd95ad4bb64e1fb1eadb6a57f62e920" Feb 16 23:14:03 crc kubenswrapper[4865]: I0216 23:14:03.794636 4865 scope.go:117] "RemoveContainer" containerID="041b610c14d73a77e0e73b4c24896939b5d7abc09da5dafb68e4ce47448e797e" Feb 16 23:14:03 crc kubenswrapper[4865]: I0216 23:14:03.852547 4865 scope.go:117] "RemoveContainer" containerID="e068f2a49367efebfcb339f4649dd487ab7cfb5f3dacc179a05e14c59a051bbd" Feb 16 23:14:03 crc kubenswrapper[4865]: I0216 23:14:03.877834 4865 scope.go:117] "RemoveContainer" containerID="d984368f85c6b1a069862f46133da4addc17253548488e07fb5f6f11a31d3021" Feb 16 23:14:03 crc kubenswrapper[4865]: I0216 23:14:03.915961 
4865 scope.go:117] "RemoveContainer" containerID="3562625e809c35f46d69a71d749fa03e03709ac25ea9b3b159440c97dbb824ff" Feb 16 23:14:03 crc kubenswrapper[4865]: I0216 23:14:03.944598 4865 scope.go:117] "RemoveContainer" containerID="33fbbf14f436417711a6ace16dae3386e12d3e281feb2cc7fe189550261e1121" Feb 16 23:14:03 crc kubenswrapper[4865]: I0216 23:14:03.977423 4865 scope.go:117] "RemoveContainer" containerID="eb2e9d72b9438742c436514ace7803be9fce59aefe308d0d4fe76fa6bac731a2" Feb 16 23:14:04 crc kubenswrapper[4865]: I0216 23:14:04.020102 4865 scope.go:117] "RemoveContainer" containerID="4cae800704f62e9c08d6da536231ab752df90fa63f0ca29785323930658b5fa6" Feb 16 23:14:04 crc kubenswrapper[4865]: I0216 23:14:04.051468 4865 scope.go:117] "RemoveContainer" containerID="f8a285d945dc4cf31031cf9a42178f0042c375a65b7829b914be4fe5b2fefa5e" Feb 16 23:14:04 crc kubenswrapper[4865]: I0216 23:14:04.085077 4865 scope.go:117] "RemoveContainer" containerID="52ba73fd47f97c391b846beb424304dafec4565dd8010251cba005c761d0644d" Feb 16 23:14:04 crc kubenswrapper[4865]: I0216 23:14:04.122632 4865 scope.go:117] "RemoveContainer" containerID="b7e42c485150ac43fbed280d237151298f364036252a06e541f4198b79fdf37f" Feb 16 23:14:16 crc kubenswrapper[4865]: I0216 23:14:16.416429 4865 scope.go:117] "RemoveContainer" containerID="5cd852104406ee38cbc84897bbc06d76f939393ce558cd88a802e395e3a4e39b" Feb 16 23:14:16 crc kubenswrapper[4865]: E0216 23:14:16.417456 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:14:29 crc kubenswrapper[4865]: I0216 23:14:29.414804 4865 scope.go:117] "RemoveContainer" 
containerID="5cd852104406ee38cbc84897bbc06d76f939393ce558cd88a802e395e3a4e39b" Feb 16 23:14:29 crc kubenswrapper[4865]: E0216 23:14:29.415711 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:14:32 crc kubenswrapper[4865]: I0216 23:14:32.057072 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-zqq74"] Feb 16 23:14:32 crc kubenswrapper[4865]: I0216 23:14:32.063863 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-zqq74"] Feb 16 23:14:32 crc kubenswrapper[4865]: I0216 23:14:32.428575 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6e46137-244f-44c7-ac8d-450c4e8e2fff" path="/var/lib/kubelet/pods/a6e46137-244f-44c7-ac8d-450c4e8e2fff/volumes" Feb 16 23:14:41 crc kubenswrapper[4865]: I0216 23:14:41.136805 4865 generic.go:334] "Generic (PLEG): container finished" podID="4e39dd59-456f-42dd-bc53-254730e44297" containerID="cb43b8decebae3a3dda6ed50c67b802dfe3948edde4635e6fe2df49a9756d9c2" exitCode=0 Feb 16 23:14:41 crc kubenswrapper[4865]: I0216 23:14:41.137043 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-p5tqh" event={"ID":"4e39dd59-456f-42dd-bc53-254730e44297","Type":"ContainerDied","Data":"cb43b8decebae3a3dda6ed50c67b802dfe3948edde4635e6fe2df49a9756d9c2"} Feb 16 23:14:42 crc kubenswrapper[4865]: I0216 23:14:42.706547 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-p5tqh" Feb 16 23:14:42 crc kubenswrapper[4865]: I0216 23:14:42.756449 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4e39dd59-456f-42dd-bc53-254730e44297-ssh-key-openstack-edpm-ipam\") pod \"4e39dd59-456f-42dd-bc53-254730e44297\" (UID: \"4e39dd59-456f-42dd-bc53-254730e44297\") " Feb 16 23:14:42 crc kubenswrapper[4865]: I0216 23:14:42.756698 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6b8r7\" (UniqueName: \"kubernetes.io/projected/4e39dd59-456f-42dd-bc53-254730e44297-kube-api-access-6b8r7\") pod \"4e39dd59-456f-42dd-bc53-254730e44297\" (UID: \"4e39dd59-456f-42dd-bc53-254730e44297\") " Feb 16 23:14:42 crc kubenswrapper[4865]: I0216 23:14:42.756774 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4e39dd59-456f-42dd-bc53-254730e44297-inventory\") pod \"4e39dd59-456f-42dd-bc53-254730e44297\" (UID: \"4e39dd59-456f-42dd-bc53-254730e44297\") " Feb 16 23:14:42 crc kubenswrapper[4865]: I0216 23:14:42.765585 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e39dd59-456f-42dd-bc53-254730e44297-kube-api-access-6b8r7" (OuterVolumeSpecName: "kube-api-access-6b8r7") pod "4e39dd59-456f-42dd-bc53-254730e44297" (UID: "4e39dd59-456f-42dd-bc53-254730e44297"). InnerVolumeSpecName "kube-api-access-6b8r7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:14:42 crc kubenswrapper[4865]: I0216 23:14:42.793517 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e39dd59-456f-42dd-bc53-254730e44297-inventory" (OuterVolumeSpecName: "inventory") pod "4e39dd59-456f-42dd-bc53-254730e44297" (UID: "4e39dd59-456f-42dd-bc53-254730e44297"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:14:42 crc kubenswrapper[4865]: I0216 23:14:42.806071 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e39dd59-456f-42dd-bc53-254730e44297-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4e39dd59-456f-42dd-bc53-254730e44297" (UID: "4e39dd59-456f-42dd-bc53-254730e44297"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:14:42 crc kubenswrapper[4865]: I0216 23:14:42.859528 4865 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4e39dd59-456f-42dd-bc53-254730e44297-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 23:14:42 crc kubenswrapper[4865]: I0216 23:14:42.859578 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6b8r7\" (UniqueName: \"kubernetes.io/projected/4e39dd59-456f-42dd-bc53-254730e44297-kube-api-access-6b8r7\") on node \"crc\" DevicePath \"\"" Feb 16 23:14:42 crc kubenswrapper[4865]: I0216 23:14:42.859601 4865 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4e39dd59-456f-42dd-bc53-254730e44297-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 23:14:43 crc kubenswrapper[4865]: I0216 23:14:43.164150 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-p5tqh" event={"ID":"4e39dd59-456f-42dd-bc53-254730e44297","Type":"ContainerDied","Data":"5898480f9c2538171e254554ef0cc84ef2501db1d651b1436cf7fa879bda8fbb"} Feb 16 23:14:43 crc kubenswrapper[4865]: I0216 23:14:43.164199 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5898480f9c2538171e254554ef0cc84ef2501db1d651b1436cf7fa879bda8fbb" Feb 16 23:14:43 crc kubenswrapper[4865]: I0216 
23:14:43.164377 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-p5tqh" Feb 16 23:14:43 crc kubenswrapper[4865]: I0216 23:14:43.319159 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bplzx"] Feb 16 23:14:43 crc kubenswrapper[4865]: E0216 23:14:43.319766 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e39dd59-456f-42dd-bc53-254730e44297" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 16 23:14:43 crc kubenswrapper[4865]: I0216 23:14:43.319795 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e39dd59-456f-42dd-bc53-254730e44297" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 16 23:14:43 crc kubenswrapper[4865]: I0216 23:14:43.320192 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e39dd59-456f-42dd-bc53-254730e44297" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 16 23:14:43 crc kubenswrapper[4865]: I0216 23:14:43.321162 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bplzx" Feb 16 23:14:43 crc kubenswrapper[4865]: I0216 23:14:43.323524 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 23:14:43 crc kubenswrapper[4865]: I0216 23:14:43.323542 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rpk98" Feb 16 23:14:43 crc kubenswrapper[4865]: I0216 23:14:43.323996 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 23:14:43 crc kubenswrapper[4865]: I0216 23:14:43.328199 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 23:14:43 crc kubenswrapper[4865]: I0216 23:14:43.329353 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bplzx"] Feb 16 23:14:43 crc kubenswrapper[4865]: I0216 23:14:43.369457 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9t7ls\" (UniqueName: \"kubernetes.io/projected/9cb0e39e-0d5d-4758-a44e-06867bdf08da-kube-api-access-9t7ls\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-bplzx\" (UID: \"9cb0e39e-0d5d-4758-a44e-06867bdf08da\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bplzx" Feb 16 23:14:43 crc kubenswrapper[4865]: I0216 23:14:43.369947 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9cb0e39e-0d5d-4758-a44e-06867bdf08da-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-bplzx\" (UID: \"9cb0e39e-0d5d-4758-a44e-06867bdf08da\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bplzx" Feb 16 
23:14:43 crc kubenswrapper[4865]: I0216 23:14:43.370136 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9cb0e39e-0d5d-4758-a44e-06867bdf08da-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-bplzx\" (UID: \"9cb0e39e-0d5d-4758-a44e-06867bdf08da\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bplzx" Feb 16 23:14:43 crc kubenswrapper[4865]: I0216 23:14:43.472677 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9cb0e39e-0d5d-4758-a44e-06867bdf08da-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-bplzx\" (UID: \"9cb0e39e-0d5d-4758-a44e-06867bdf08da\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bplzx" Feb 16 23:14:43 crc kubenswrapper[4865]: I0216 23:14:43.472745 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9cb0e39e-0d5d-4758-a44e-06867bdf08da-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-bplzx\" (UID: \"9cb0e39e-0d5d-4758-a44e-06867bdf08da\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bplzx" Feb 16 23:14:43 crc kubenswrapper[4865]: I0216 23:14:43.472812 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9t7ls\" (UniqueName: \"kubernetes.io/projected/9cb0e39e-0d5d-4758-a44e-06867bdf08da-kube-api-access-9t7ls\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-bplzx\" (UID: \"9cb0e39e-0d5d-4758-a44e-06867bdf08da\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bplzx" Feb 16 23:14:43 crc kubenswrapper[4865]: I0216 23:14:43.479252 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/9cb0e39e-0d5d-4758-a44e-06867bdf08da-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-bplzx\" (UID: \"9cb0e39e-0d5d-4758-a44e-06867bdf08da\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bplzx" Feb 16 23:14:43 crc kubenswrapper[4865]: I0216 23:14:43.485028 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9cb0e39e-0d5d-4758-a44e-06867bdf08da-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-bplzx\" (UID: \"9cb0e39e-0d5d-4758-a44e-06867bdf08da\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bplzx" Feb 16 23:14:43 crc kubenswrapper[4865]: I0216 23:14:43.511794 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9t7ls\" (UniqueName: \"kubernetes.io/projected/9cb0e39e-0d5d-4758-a44e-06867bdf08da-kube-api-access-9t7ls\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-bplzx\" (UID: \"9cb0e39e-0d5d-4758-a44e-06867bdf08da\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bplzx" Feb 16 23:14:43 crc kubenswrapper[4865]: I0216 23:14:43.651196 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bplzx" Feb 16 23:14:44 crc kubenswrapper[4865]: I0216 23:14:44.222955 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bplzx"] Feb 16 23:14:44 crc kubenswrapper[4865]: I0216 23:14:44.415687 4865 scope.go:117] "RemoveContainer" containerID="5cd852104406ee38cbc84897bbc06d76f939393ce558cd88a802e395e3a4e39b" Feb 16 23:14:44 crc kubenswrapper[4865]: E0216 23:14:44.416364 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:14:45 crc kubenswrapper[4865]: I0216 23:14:45.185799 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bplzx" event={"ID":"9cb0e39e-0d5d-4758-a44e-06867bdf08da","Type":"ContainerStarted","Data":"3eaa229bca92df50000a4c074f9f2a0123a5cecba57d45e1bd29f4fa4d2e0e76"} Feb 16 23:14:45 crc kubenswrapper[4865]: I0216 23:14:45.186256 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bplzx" event={"ID":"9cb0e39e-0d5d-4758-a44e-06867bdf08da","Type":"ContainerStarted","Data":"648f495af9c6a6b882f4e884aac3a0d0b95b68b8aaaaa2c2bf73d51e1dc805d8"} Feb 16 23:14:45 crc kubenswrapper[4865]: I0216 23:14:45.206443 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bplzx" podStartSLOduration=1.709682849 podStartE2EDuration="2.206416377s" podCreationTimestamp="2026-02-16 23:14:43 +0000 UTC" 
firstStartedPulling="2026-02-16 23:14:44.217884726 +0000 UTC m=+1724.541591717" lastFinishedPulling="2026-02-16 23:14:44.714618274 +0000 UTC m=+1725.038325245" observedRunningTime="2026-02-16 23:14:45.202549352 +0000 UTC m=+1725.526256323" watchObservedRunningTime="2026-02-16 23:14:45.206416377 +0000 UTC m=+1725.530123338" Feb 16 23:14:47 crc kubenswrapper[4865]: I0216 23:14:47.054851 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-t75c2"] Feb 16 23:14:47 crc kubenswrapper[4865]: I0216 23:14:47.074726 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-ww67n"] Feb 16 23:14:47 crc kubenswrapper[4865]: I0216 23:14:47.087773 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-t75c2"] Feb 16 23:14:47 crc kubenswrapper[4865]: I0216 23:14:47.098551 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-ww67n"] Feb 16 23:14:48 crc kubenswrapper[4865]: I0216 23:14:48.441210 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d0a70da-4482-4ab9-8503-c324267212fa" path="/var/lib/kubelet/pods/6d0a70da-4482-4ab9-8503-c324267212fa/volumes" Feb 16 23:14:48 crc kubenswrapper[4865]: I0216 23:14:48.443470 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c180ab1a-1202-492c-ab2e-57c2232d8b64" path="/var/lib/kubelet/pods/c180ab1a-1202-492c-ab2e-57c2232d8b64/volumes" Feb 16 23:14:50 crc kubenswrapper[4865]: I0216 23:14:50.244907 4865 generic.go:334] "Generic (PLEG): container finished" podID="9cb0e39e-0d5d-4758-a44e-06867bdf08da" containerID="3eaa229bca92df50000a4c074f9f2a0123a5cecba57d45e1bd29f4fa4d2e0e76" exitCode=0 Feb 16 23:14:50 crc kubenswrapper[4865]: I0216 23:14:50.244963 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bplzx" 
event={"ID":"9cb0e39e-0d5d-4758-a44e-06867bdf08da","Type":"ContainerDied","Data":"3eaa229bca92df50000a4c074f9f2a0123a5cecba57d45e1bd29f4fa4d2e0e76"} Feb 16 23:14:51 crc kubenswrapper[4865]: I0216 23:14:51.764740 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bplzx" Feb 16 23:14:51 crc kubenswrapper[4865]: I0216 23:14:51.967202 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9cb0e39e-0d5d-4758-a44e-06867bdf08da-inventory\") pod \"9cb0e39e-0d5d-4758-a44e-06867bdf08da\" (UID: \"9cb0e39e-0d5d-4758-a44e-06867bdf08da\") " Feb 16 23:14:51 crc kubenswrapper[4865]: I0216 23:14:51.967259 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9cb0e39e-0d5d-4758-a44e-06867bdf08da-ssh-key-openstack-edpm-ipam\") pod \"9cb0e39e-0d5d-4758-a44e-06867bdf08da\" (UID: \"9cb0e39e-0d5d-4758-a44e-06867bdf08da\") " Feb 16 23:14:51 crc kubenswrapper[4865]: I0216 23:14:51.967471 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9t7ls\" (UniqueName: \"kubernetes.io/projected/9cb0e39e-0d5d-4758-a44e-06867bdf08da-kube-api-access-9t7ls\") pod \"9cb0e39e-0d5d-4758-a44e-06867bdf08da\" (UID: \"9cb0e39e-0d5d-4758-a44e-06867bdf08da\") " Feb 16 23:14:51 crc kubenswrapper[4865]: I0216 23:14:51.972576 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cb0e39e-0d5d-4758-a44e-06867bdf08da-kube-api-access-9t7ls" (OuterVolumeSpecName: "kube-api-access-9t7ls") pod "9cb0e39e-0d5d-4758-a44e-06867bdf08da" (UID: "9cb0e39e-0d5d-4758-a44e-06867bdf08da"). InnerVolumeSpecName "kube-api-access-9t7ls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:14:52 crc kubenswrapper[4865]: I0216 23:14:52.012436 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cb0e39e-0d5d-4758-a44e-06867bdf08da-inventory" (OuterVolumeSpecName: "inventory") pod "9cb0e39e-0d5d-4758-a44e-06867bdf08da" (UID: "9cb0e39e-0d5d-4758-a44e-06867bdf08da"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:14:52 crc kubenswrapper[4865]: I0216 23:14:52.020095 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cb0e39e-0d5d-4758-a44e-06867bdf08da-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9cb0e39e-0d5d-4758-a44e-06867bdf08da" (UID: "9cb0e39e-0d5d-4758-a44e-06867bdf08da"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:14:52 crc kubenswrapper[4865]: I0216 23:14:52.069651 4865 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9cb0e39e-0d5d-4758-a44e-06867bdf08da-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 23:14:52 crc kubenswrapper[4865]: I0216 23:14:52.069844 4865 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9cb0e39e-0d5d-4758-a44e-06867bdf08da-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 23:14:52 crc kubenswrapper[4865]: I0216 23:14:52.070073 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9t7ls\" (UniqueName: \"kubernetes.io/projected/9cb0e39e-0d5d-4758-a44e-06867bdf08da-kube-api-access-9t7ls\") on node \"crc\" DevicePath \"\"" Feb 16 23:14:52 crc kubenswrapper[4865]: I0216 23:14:52.270450 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bplzx" 
event={"ID":"9cb0e39e-0d5d-4758-a44e-06867bdf08da","Type":"ContainerDied","Data":"648f495af9c6a6b882f4e884aac3a0d0b95b68b8aaaaa2c2bf73d51e1dc805d8"} Feb 16 23:14:52 crc kubenswrapper[4865]: I0216 23:14:52.270519 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bplzx" Feb 16 23:14:52 crc kubenswrapper[4865]: I0216 23:14:52.270548 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="648f495af9c6a6b882f4e884aac3a0d0b95b68b8aaaaa2c2bf73d51e1dc805d8" Feb 16 23:14:52 crc kubenswrapper[4865]: I0216 23:14:52.351403 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-blwr8"] Feb 16 23:14:52 crc kubenswrapper[4865]: E0216 23:14:52.351922 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cb0e39e-0d5d-4758-a44e-06867bdf08da" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 16 23:14:52 crc kubenswrapper[4865]: I0216 23:14:52.351947 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cb0e39e-0d5d-4758-a44e-06867bdf08da" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 16 23:14:52 crc kubenswrapper[4865]: I0216 23:14:52.352149 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cb0e39e-0d5d-4758-a44e-06867bdf08da" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 16 23:14:52 crc kubenswrapper[4865]: I0216 23:14:52.352870 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-blwr8" Feb 16 23:14:52 crc kubenswrapper[4865]: I0216 23:14:52.354793 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 23:14:52 crc kubenswrapper[4865]: I0216 23:14:52.354861 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rpk98" Feb 16 23:14:52 crc kubenswrapper[4865]: I0216 23:14:52.354794 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 23:14:52 crc kubenswrapper[4865]: I0216 23:14:52.355108 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 23:14:52 crc kubenswrapper[4865]: I0216 23:14:52.367212 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-blwr8"] Feb 16 23:14:52 crc kubenswrapper[4865]: I0216 23:14:52.481907 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d3a477d8-8710-4da3-b229-8787e3787f46-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-blwr8\" (UID: \"d3a477d8-8710-4da3-b229-8787e3787f46\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-blwr8" Feb 16 23:14:52 crc kubenswrapper[4865]: I0216 23:14:52.482358 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d3a477d8-8710-4da3-b229-8787e3787f46-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-blwr8\" (UID: \"d3a477d8-8710-4da3-b229-8787e3787f46\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-blwr8" Feb 16 23:14:52 crc kubenswrapper[4865]: I0216 23:14:52.482533 4865 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k74lc\" (UniqueName: \"kubernetes.io/projected/d3a477d8-8710-4da3-b229-8787e3787f46-kube-api-access-k74lc\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-blwr8\" (UID: \"d3a477d8-8710-4da3-b229-8787e3787f46\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-blwr8" Feb 16 23:14:52 crc kubenswrapper[4865]: I0216 23:14:52.584915 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d3a477d8-8710-4da3-b229-8787e3787f46-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-blwr8\" (UID: \"d3a477d8-8710-4da3-b229-8787e3787f46\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-blwr8" Feb 16 23:14:52 crc kubenswrapper[4865]: I0216 23:14:52.585134 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d3a477d8-8710-4da3-b229-8787e3787f46-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-blwr8\" (UID: \"d3a477d8-8710-4da3-b229-8787e3787f46\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-blwr8" Feb 16 23:14:52 crc kubenswrapper[4865]: I0216 23:14:52.585232 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k74lc\" (UniqueName: \"kubernetes.io/projected/d3a477d8-8710-4da3-b229-8787e3787f46-kube-api-access-k74lc\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-blwr8\" (UID: \"d3a477d8-8710-4da3-b229-8787e3787f46\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-blwr8" Feb 16 23:14:52 crc kubenswrapper[4865]: I0216 23:14:52.593904 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/d3a477d8-8710-4da3-b229-8787e3787f46-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-blwr8\" (UID: \"d3a477d8-8710-4da3-b229-8787e3787f46\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-blwr8" Feb 16 23:14:52 crc kubenswrapper[4865]: I0216 23:14:52.595577 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d3a477d8-8710-4da3-b229-8787e3787f46-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-blwr8\" (UID: \"d3a477d8-8710-4da3-b229-8787e3787f46\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-blwr8" Feb 16 23:14:52 crc kubenswrapper[4865]: I0216 23:14:52.617639 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k74lc\" (UniqueName: \"kubernetes.io/projected/d3a477d8-8710-4da3-b229-8787e3787f46-kube-api-access-k74lc\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-blwr8\" (UID: \"d3a477d8-8710-4da3-b229-8787e3787f46\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-blwr8" Feb 16 23:14:52 crc kubenswrapper[4865]: I0216 23:14:52.678906 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-blwr8" Feb 16 23:14:53 crc kubenswrapper[4865]: I0216 23:14:53.222520 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-blwr8"] Feb 16 23:14:53 crc kubenswrapper[4865]: I0216 23:14:53.280258 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-blwr8" event={"ID":"d3a477d8-8710-4da3-b229-8787e3787f46","Type":"ContainerStarted","Data":"2ec2c9bafd750d094cd06c184becd11ca59626b2aa931fdf4680ef652714f4d0"} Feb 16 23:14:54 crc kubenswrapper[4865]: I0216 23:14:54.294904 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-blwr8" event={"ID":"d3a477d8-8710-4da3-b229-8787e3787f46","Type":"ContainerStarted","Data":"93c2e36bc841eee5c1db2326e02dc884f949be4e5d8a12259e1b68e3c0ba9c93"} Feb 16 23:14:54 crc kubenswrapper[4865]: I0216 23:14:54.321430 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-blwr8" podStartSLOduration=1.950942588 podStartE2EDuration="2.321389659s" podCreationTimestamp="2026-02-16 23:14:52 +0000 UTC" firstStartedPulling="2026-02-16 23:14:53.224190113 +0000 UTC m=+1733.547897074" lastFinishedPulling="2026-02-16 23:14:53.594637184 +0000 UTC m=+1733.918344145" observedRunningTime="2026-02-16 23:14:54.315313733 +0000 UTC m=+1734.639020734" watchObservedRunningTime="2026-02-16 23:14:54.321389659 +0000 UTC m=+1734.645096670" Feb 16 23:14:57 crc kubenswrapper[4865]: I0216 23:14:57.414889 4865 scope.go:117] "RemoveContainer" containerID="5cd852104406ee38cbc84897bbc06d76f939393ce558cd88a802e395e3a4e39b" Feb 16 23:14:57 crc kubenswrapper[4865]: E0216 23:14:57.415485 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:14:58 crc kubenswrapper[4865]: I0216 23:14:58.034234 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-g9mg8"] Feb 16 23:14:58 crc kubenswrapper[4865]: I0216 23:14:58.043321 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-g9mg8"] Feb 16 23:14:58 crc kubenswrapper[4865]: I0216 23:14:58.431503 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c43aca4e-9612-43a8-8af2-5f32e4378af7" path="/var/lib/kubelet/pods/c43aca4e-9612-43a8-8af2-5f32e4378af7/volumes" Feb 16 23:14:59 crc kubenswrapper[4865]: I0216 23:14:59.048603 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-94wth"] Feb 16 23:14:59 crc kubenswrapper[4865]: I0216 23:14:59.058311 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-94wth"] Feb 16 23:15:00 crc kubenswrapper[4865]: I0216 23:15:00.138895 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521395-8wjrz"] Feb 16 23:15:00 crc kubenswrapper[4865]: I0216 23:15:00.142671 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521395-8wjrz" Feb 16 23:15:00 crc kubenswrapper[4865]: I0216 23:15:00.146029 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 23:15:00 crc kubenswrapper[4865]: I0216 23:15:00.146441 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 23:15:00 crc kubenswrapper[4865]: I0216 23:15:00.155888 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521395-8wjrz"] Feb 16 23:15:00 crc kubenswrapper[4865]: I0216 23:15:00.260687 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/89334696-02d5-4418-9636-bd35a8581ab8-config-volume\") pod \"collect-profiles-29521395-8wjrz\" (UID: \"89334696-02d5-4418-9636-bd35a8581ab8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521395-8wjrz" Feb 16 23:15:00 crc kubenswrapper[4865]: I0216 23:15:00.260836 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z85qv\" (UniqueName: \"kubernetes.io/projected/89334696-02d5-4418-9636-bd35a8581ab8-kube-api-access-z85qv\") pod \"collect-profiles-29521395-8wjrz\" (UID: \"89334696-02d5-4418-9636-bd35a8581ab8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521395-8wjrz" Feb 16 23:15:00 crc kubenswrapper[4865]: I0216 23:15:00.261421 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/89334696-02d5-4418-9636-bd35a8581ab8-secret-volume\") pod \"collect-profiles-29521395-8wjrz\" (UID: \"89334696-02d5-4418-9636-bd35a8581ab8\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29521395-8wjrz" Feb 16 23:15:00 crc kubenswrapper[4865]: I0216 23:15:00.365339 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/89334696-02d5-4418-9636-bd35a8581ab8-config-volume\") pod \"collect-profiles-29521395-8wjrz\" (UID: \"89334696-02d5-4418-9636-bd35a8581ab8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521395-8wjrz" Feb 16 23:15:00 crc kubenswrapper[4865]: I0216 23:15:00.365471 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z85qv\" (UniqueName: \"kubernetes.io/projected/89334696-02d5-4418-9636-bd35a8581ab8-kube-api-access-z85qv\") pod \"collect-profiles-29521395-8wjrz\" (UID: \"89334696-02d5-4418-9636-bd35a8581ab8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521395-8wjrz" Feb 16 23:15:00 crc kubenswrapper[4865]: I0216 23:15:00.365645 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/89334696-02d5-4418-9636-bd35a8581ab8-secret-volume\") pod \"collect-profiles-29521395-8wjrz\" (UID: \"89334696-02d5-4418-9636-bd35a8581ab8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521395-8wjrz" Feb 16 23:15:00 crc kubenswrapper[4865]: I0216 23:15:00.368780 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 23:15:00 crc kubenswrapper[4865]: I0216 23:15:00.371869 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/89334696-02d5-4418-9636-bd35a8581ab8-secret-volume\") pod \"collect-profiles-29521395-8wjrz\" (UID: \"89334696-02d5-4418-9636-bd35a8581ab8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521395-8wjrz" Feb 16 23:15:00 crc kubenswrapper[4865]: 
I0216 23:15:00.377375 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/89334696-02d5-4418-9636-bd35a8581ab8-config-volume\") pod \"collect-profiles-29521395-8wjrz\" (UID: \"89334696-02d5-4418-9636-bd35a8581ab8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521395-8wjrz" Feb 16 23:15:00 crc kubenswrapper[4865]: I0216 23:15:00.390244 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z85qv\" (UniqueName: \"kubernetes.io/projected/89334696-02d5-4418-9636-bd35a8581ab8-kube-api-access-z85qv\") pod \"collect-profiles-29521395-8wjrz\" (UID: \"89334696-02d5-4418-9636-bd35a8581ab8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521395-8wjrz" Feb 16 23:15:00 crc kubenswrapper[4865]: I0216 23:15:00.435670 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68d6bce0-a0b1-485b-b3fc-6c47cd966129" path="/var/lib/kubelet/pods/68d6bce0-a0b1-485b-b3fc-6c47cd966129/volumes" Feb 16 23:15:00 crc kubenswrapper[4865]: I0216 23:15:00.468530 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 23:15:00 crc kubenswrapper[4865]: I0216 23:15:00.476424 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521395-8wjrz" Feb 16 23:15:00 crc kubenswrapper[4865]: I0216 23:15:00.971886 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521395-8wjrz"] Feb 16 23:15:00 crc kubenswrapper[4865]: W0216 23:15:00.975483 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89334696_02d5_4418_9636_bd35a8581ab8.slice/crio-8d7d81c17c680e3cb4c0052b210cd2324ef5c8aff6ff8404a615af4375def0ec WatchSource:0}: Error finding container 8d7d81c17c680e3cb4c0052b210cd2324ef5c8aff6ff8404a615af4375def0ec: Status 404 returned error can't find the container with id 8d7d81c17c680e3cb4c0052b210cd2324ef5c8aff6ff8404a615af4375def0ec Feb 16 23:15:01 crc kubenswrapper[4865]: I0216 23:15:01.368275 4865 generic.go:334] "Generic (PLEG): container finished" podID="89334696-02d5-4418-9636-bd35a8581ab8" containerID="bf286e2c33332f2ab4309df9b20e22022f7ccd518170bd2a0014a36b6c1e4271" exitCode=0 Feb 16 23:15:01 crc kubenswrapper[4865]: I0216 23:15:01.368521 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521395-8wjrz" event={"ID":"89334696-02d5-4418-9636-bd35a8581ab8","Type":"ContainerDied","Data":"bf286e2c33332f2ab4309df9b20e22022f7ccd518170bd2a0014a36b6c1e4271"} Feb 16 23:15:01 crc kubenswrapper[4865]: I0216 23:15:01.368625 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521395-8wjrz" event={"ID":"89334696-02d5-4418-9636-bd35a8581ab8","Type":"ContainerStarted","Data":"8d7d81c17c680e3cb4c0052b210cd2324ef5c8aff6ff8404a615af4375def0ec"} Feb 16 23:15:02 crc kubenswrapper[4865]: I0216 23:15:02.730783 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521395-8wjrz" Feb 16 23:15:02 crc kubenswrapper[4865]: I0216 23:15:02.814236 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/89334696-02d5-4418-9636-bd35a8581ab8-secret-volume\") pod \"89334696-02d5-4418-9636-bd35a8581ab8\" (UID: \"89334696-02d5-4418-9636-bd35a8581ab8\") " Feb 16 23:15:02 crc kubenswrapper[4865]: I0216 23:15:02.822118 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89334696-02d5-4418-9636-bd35a8581ab8-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "89334696-02d5-4418-9636-bd35a8581ab8" (UID: "89334696-02d5-4418-9636-bd35a8581ab8"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:15:02 crc kubenswrapper[4865]: I0216 23:15:02.916511 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/89334696-02d5-4418-9636-bd35a8581ab8-config-volume\") pod \"89334696-02d5-4418-9636-bd35a8581ab8\" (UID: \"89334696-02d5-4418-9636-bd35a8581ab8\") " Feb 16 23:15:02 crc kubenswrapper[4865]: I0216 23:15:02.916783 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z85qv\" (UniqueName: \"kubernetes.io/projected/89334696-02d5-4418-9636-bd35a8581ab8-kube-api-access-z85qv\") pod \"89334696-02d5-4418-9636-bd35a8581ab8\" (UID: \"89334696-02d5-4418-9636-bd35a8581ab8\") " Feb 16 23:15:02 crc kubenswrapper[4865]: I0216 23:15:02.917381 4865 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/89334696-02d5-4418-9636-bd35a8581ab8-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 23:15:02 crc kubenswrapper[4865]: I0216 23:15:02.917863 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/configmap/89334696-02d5-4418-9636-bd35a8581ab8-config-volume" (OuterVolumeSpecName: "config-volume") pod "89334696-02d5-4418-9636-bd35a8581ab8" (UID: "89334696-02d5-4418-9636-bd35a8581ab8"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:15:02 crc kubenswrapper[4865]: I0216 23:15:02.921542 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89334696-02d5-4418-9636-bd35a8581ab8-kube-api-access-z85qv" (OuterVolumeSpecName: "kube-api-access-z85qv") pod "89334696-02d5-4418-9636-bd35a8581ab8" (UID: "89334696-02d5-4418-9636-bd35a8581ab8"). InnerVolumeSpecName "kube-api-access-z85qv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:15:03 crc kubenswrapper[4865]: I0216 23:15:03.019803 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z85qv\" (UniqueName: \"kubernetes.io/projected/89334696-02d5-4418-9636-bd35a8581ab8-kube-api-access-z85qv\") on node \"crc\" DevicePath \"\"" Feb 16 23:15:03 crc kubenswrapper[4865]: I0216 23:15:03.019846 4865 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/89334696-02d5-4418-9636-bd35a8581ab8-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 23:15:03 crc kubenswrapper[4865]: I0216 23:15:03.392908 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521395-8wjrz" event={"ID":"89334696-02d5-4418-9636-bd35a8581ab8","Type":"ContainerDied","Data":"8d7d81c17c680e3cb4c0052b210cd2324ef5c8aff6ff8404a615af4375def0ec"} Feb 16 23:15:03 crc kubenswrapper[4865]: I0216 23:15:03.393395 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d7d81c17c680e3cb4c0052b210cd2324ef5c8aff6ff8404a615af4375def0ec" Feb 16 23:15:03 crc kubenswrapper[4865]: I0216 23:15:03.392967 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521395-8wjrz" Feb 16 23:15:04 crc kubenswrapper[4865]: I0216 23:15:04.481019 4865 scope.go:117] "RemoveContainer" containerID="4b751e9d2f95652c660fe978c63733e22fb371d03c9aa3da519badc182b53af5" Feb 16 23:15:04 crc kubenswrapper[4865]: I0216 23:15:04.540029 4865 scope.go:117] "RemoveContainer" containerID="8b89cf994607ccbbc6f108879c3934e6c009b9745b40ebbd220533f5c1652bc5" Feb 16 23:15:04 crc kubenswrapper[4865]: I0216 23:15:04.585825 4865 scope.go:117] "RemoveContainer" containerID="e1b05654316e4786cb81e24e1b0dde7418ed9441a4a91edb86ac5c3868d92144" Feb 16 23:15:04 crc kubenswrapper[4865]: I0216 23:15:04.627931 4865 scope.go:117] "RemoveContainer" containerID="e1a190f3fe17f7742d386d08d221ee5860fa36adae67f45314b13277ccb337a3" Feb 16 23:15:04 crc kubenswrapper[4865]: I0216 23:15:04.669768 4865 scope.go:117] "RemoveContainer" containerID="32a78a1a3f9db9c4a052781a3e0e4b04aa7ed029f695352d2b528b1e59891fa5" Feb 16 23:15:12 crc kubenswrapper[4865]: I0216 23:15:12.414975 4865 scope.go:117] "RemoveContainer" containerID="5cd852104406ee38cbc84897bbc06d76f939393ce558cd88a802e395e3a4e39b" Feb 16 23:15:12 crc kubenswrapper[4865]: E0216 23:15:12.415819 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:15:25 crc kubenswrapper[4865]: I0216 23:15:25.415635 4865 scope.go:117] "RemoveContainer" containerID="5cd852104406ee38cbc84897bbc06d76f939393ce558cd88a802e395e3a4e39b" Feb 16 23:15:25 crc kubenswrapper[4865]: E0216 23:15:25.416583 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:15:30 crc kubenswrapper[4865]: I0216 23:15:30.720691 4865 generic.go:334] "Generic (PLEG): container finished" podID="d3a477d8-8710-4da3-b229-8787e3787f46" containerID="93c2e36bc841eee5c1db2326e02dc884f949be4e5d8a12259e1b68e3c0ba9c93" exitCode=0 Feb 16 23:15:30 crc kubenswrapper[4865]: I0216 23:15:30.720902 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-blwr8" event={"ID":"d3a477d8-8710-4da3-b229-8787e3787f46","Type":"ContainerDied","Data":"93c2e36bc841eee5c1db2326e02dc884f949be4e5d8a12259e1b68e3c0ba9c93"} Feb 16 23:15:32 crc kubenswrapper[4865]: I0216 23:15:32.226112 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-blwr8" Feb 16 23:15:32 crc kubenswrapper[4865]: I0216 23:15:32.362330 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d3a477d8-8710-4da3-b229-8787e3787f46-inventory\") pod \"d3a477d8-8710-4da3-b229-8787e3787f46\" (UID: \"d3a477d8-8710-4da3-b229-8787e3787f46\") " Feb 16 23:15:32 crc kubenswrapper[4865]: I0216 23:15:32.362604 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d3a477d8-8710-4da3-b229-8787e3787f46-ssh-key-openstack-edpm-ipam\") pod \"d3a477d8-8710-4da3-b229-8787e3787f46\" (UID: \"d3a477d8-8710-4da3-b229-8787e3787f46\") " Feb 16 23:15:32 crc kubenswrapper[4865]: I0216 23:15:32.362692 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k74lc\" (UniqueName: \"kubernetes.io/projected/d3a477d8-8710-4da3-b229-8787e3787f46-kube-api-access-k74lc\") pod \"d3a477d8-8710-4da3-b229-8787e3787f46\" (UID: \"d3a477d8-8710-4da3-b229-8787e3787f46\") " Feb 16 23:15:32 crc kubenswrapper[4865]: I0216 23:15:32.368880 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3a477d8-8710-4da3-b229-8787e3787f46-kube-api-access-k74lc" (OuterVolumeSpecName: "kube-api-access-k74lc") pod "d3a477d8-8710-4da3-b229-8787e3787f46" (UID: "d3a477d8-8710-4da3-b229-8787e3787f46"). InnerVolumeSpecName "kube-api-access-k74lc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:15:32 crc kubenswrapper[4865]: I0216 23:15:32.393942 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3a477d8-8710-4da3-b229-8787e3787f46-inventory" (OuterVolumeSpecName: "inventory") pod "d3a477d8-8710-4da3-b229-8787e3787f46" (UID: "d3a477d8-8710-4da3-b229-8787e3787f46"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:15:32 crc kubenswrapper[4865]: I0216 23:15:32.396419 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3a477d8-8710-4da3-b229-8787e3787f46-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d3a477d8-8710-4da3-b229-8787e3787f46" (UID: "d3a477d8-8710-4da3-b229-8787e3787f46"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:15:32 crc kubenswrapper[4865]: I0216 23:15:32.466320 4865 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d3a477d8-8710-4da3-b229-8787e3787f46-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 23:15:32 crc kubenswrapper[4865]: I0216 23:15:32.466560 4865 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d3a477d8-8710-4da3-b229-8787e3787f46-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 23:15:32 crc kubenswrapper[4865]: I0216 23:15:32.466714 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k74lc\" (UniqueName: \"kubernetes.io/projected/d3a477d8-8710-4da3-b229-8787e3787f46-kube-api-access-k74lc\") on node \"crc\" DevicePath \"\"" Feb 16 23:15:32 crc kubenswrapper[4865]: I0216 23:15:32.747447 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-blwr8" event={"ID":"d3a477d8-8710-4da3-b229-8787e3787f46","Type":"ContainerDied","Data":"2ec2c9bafd750d094cd06c184becd11ca59626b2aa931fdf4680ef652714f4d0"} Feb 16 23:15:32 crc kubenswrapper[4865]: I0216 23:15:32.747497 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ec2c9bafd750d094cd06c184becd11ca59626b2aa931fdf4680ef652714f4d0" Feb 16 23:15:32 crc kubenswrapper[4865]: I0216 
23:15:32.747583 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-blwr8" Feb 16 23:15:32 crc kubenswrapper[4865]: I0216 23:15:32.884480 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-d4hv6"] Feb 16 23:15:32 crc kubenswrapper[4865]: E0216 23:15:32.885155 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3a477d8-8710-4da3-b229-8787e3787f46" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 16 23:15:32 crc kubenswrapper[4865]: I0216 23:15:32.885186 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3a477d8-8710-4da3-b229-8787e3787f46" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 16 23:15:32 crc kubenswrapper[4865]: E0216 23:15:32.885220 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89334696-02d5-4418-9636-bd35a8581ab8" containerName="collect-profiles" Feb 16 23:15:32 crc kubenswrapper[4865]: I0216 23:15:32.885233 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="89334696-02d5-4418-9636-bd35a8581ab8" containerName="collect-profiles" Feb 16 23:15:32 crc kubenswrapper[4865]: I0216 23:15:32.885653 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3a477d8-8710-4da3-b229-8787e3787f46" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 16 23:15:32 crc kubenswrapper[4865]: I0216 23:15:32.885681 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="89334696-02d5-4418-9636-bd35a8581ab8" containerName="collect-profiles" Feb 16 23:15:32 crc kubenswrapper[4865]: I0216 23:15:32.886838 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-d4hv6" Feb 16 23:15:32 crc kubenswrapper[4865]: I0216 23:15:32.892716 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 23:15:32 crc kubenswrapper[4865]: I0216 23:15:32.892880 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 23:15:32 crc kubenswrapper[4865]: I0216 23:15:32.892728 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rpk98" Feb 16 23:15:32 crc kubenswrapper[4865]: I0216 23:15:32.895217 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 23:15:32 crc kubenswrapper[4865]: I0216 23:15:32.903915 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-d4hv6"] Feb 16 23:15:32 crc kubenswrapper[4865]: I0216 23:15:32.977214 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72ln8\" (UniqueName: \"kubernetes.io/projected/058417d9-13ea-48ba-8bf8-2cdf141c94b6-kube-api-access-72ln8\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-d4hv6\" (UID: \"058417d9-13ea-48ba-8bf8-2cdf141c94b6\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-d4hv6" Feb 16 23:15:32 crc kubenswrapper[4865]: I0216 23:15:32.977575 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/058417d9-13ea-48ba-8bf8-2cdf141c94b6-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-d4hv6\" (UID: \"058417d9-13ea-48ba-8bf8-2cdf141c94b6\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-d4hv6" Feb 16 23:15:32 crc 
kubenswrapper[4865]: I0216 23:15:32.977787 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/058417d9-13ea-48ba-8bf8-2cdf141c94b6-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-d4hv6\" (UID: \"058417d9-13ea-48ba-8bf8-2cdf141c94b6\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-d4hv6" Feb 16 23:15:33 crc kubenswrapper[4865]: I0216 23:15:33.080177 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/058417d9-13ea-48ba-8bf8-2cdf141c94b6-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-d4hv6\" (UID: \"058417d9-13ea-48ba-8bf8-2cdf141c94b6\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-d4hv6" Feb 16 23:15:33 crc kubenswrapper[4865]: I0216 23:15:33.080630 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72ln8\" (UniqueName: \"kubernetes.io/projected/058417d9-13ea-48ba-8bf8-2cdf141c94b6-kube-api-access-72ln8\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-d4hv6\" (UID: \"058417d9-13ea-48ba-8bf8-2cdf141c94b6\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-d4hv6" Feb 16 23:15:33 crc kubenswrapper[4865]: I0216 23:15:33.080866 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/058417d9-13ea-48ba-8bf8-2cdf141c94b6-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-d4hv6\" (UID: \"058417d9-13ea-48ba-8bf8-2cdf141c94b6\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-d4hv6" Feb 16 23:15:33 crc kubenswrapper[4865]: I0216 23:15:33.085914 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/058417d9-13ea-48ba-8bf8-2cdf141c94b6-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-d4hv6\" (UID: \"058417d9-13ea-48ba-8bf8-2cdf141c94b6\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-d4hv6" Feb 16 23:15:33 crc kubenswrapper[4865]: I0216 23:15:33.086882 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/058417d9-13ea-48ba-8bf8-2cdf141c94b6-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-d4hv6\" (UID: \"058417d9-13ea-48ba-8bf8-2cdf141c94b6\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-d4hv6" Feb 16 23:15:33 crc kubenswrapper[4865]: I0216 23:15:33.098826 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72ln8\" (UniqueName: \"kubernetes.io/projected/058417d9-13ea-48ba-8bf8-2cdf141c94b6-kube-api-access-72ln8\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-d4hv6\" (UID: \"058417d9-13ea-48ba-8bf8-2cdf141c94b6\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-d4hv6" Feb 16 23:15:33 crc kubenswrapper[4865]: I0216 23:15:33.207241 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-d4hv6" Feb 16 23:15:33 crc kubenswrapper[4865]: I0216 23:15:33.791713 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-d4hv6"] Feb 16 23:15:33 crc kubenswrapper[4865]: W0216 23:15:33.797417 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod058417d9_13ea_48ba_8bf8_2cdf141c94b6.slice/crio-590458afea33460e6bf09836f8483c6432e2ab80eb6a5c403b4d2b15cbe130d3 WatchSource:0}: Error finding container 590458afea33460e6bf09836f8483c6432e2ab80eb6a5c403b4d2b15cbe130d3: Status 404 returned error can't find the container with id 590458afea33460e6bf09836f8483c6432e2ab80eb6a5c403b4d2b15cbe130d3 Feb 16 23:15:34 crc kubenswrapper[4865]: I0216 23:15:34.770540 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-d4hv6" event={"ID":"058417d9-13ea-48ba-8bf8-2cdf141c94b6","Type":"ContainerStarted","Data":"8e602c996065cb02f1c0fd7b6a8b25df765e1f827f62a0b0e34e13fc850fad1d"} Feb 16 23:15:34 crc kubenswrapper[4865]: I0216 23:15:34.770894 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-d4hv6" event={"ID":"058417d9-13ea-48ba-8bf8-2cdf141c94b6","Type":"ContainerStarted","Data":"590458afea33460e6bf09836f8483c6432e2ab80eb6a5c403b4d2b15cbe130d3"} Feb 16 23:15:34 crc kubenswrapper[4865]: I0216 23:15:34.803495 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-d4hv6" podStartSLOduration=2.395810254 podStartE2EDuration="2.803466104s" podCreationTimestamp="2026-02-16 23:15:32 +0000 UTC" firstStartedPulling="2026-02-16 23:15:33.803324605 +0000 UTC m=+1774.127031566" lastFinishedPulling="2026-02-16 23:15:34.210980425 +0000 UTC m=+1774.534687416" 
observedRunningTime="2026-02-16 23:15:34.797387728 +0000 UTC m=+1775.121094699" watchObservedRunningTime="2026-02-16 23:15:34.803466104 +0000 UTC m=+1775.127173065" Feb 16 23:15:36 crc kubenswrapper[4865]: I0216 23:15:36.060084 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-hzhxn"] Feb 16 23:15:36 crc kubenswrapper[4865]: I0216 23:15:36.072226 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-5de9-account-create-update-q7nmp"] Feb 16 23:15:36 crc kubenswrapper[4865]: I0216 23:15:36.080972 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-hzhxn"] Feb 16 23:15:36 crc kubenswrapper[4865]: I0216 23:15:36.089395 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-5de9-account-create-update-q7nmp"] Feb 16 23:15:36 crc kubenswrapper[4865]: I0216 23:15:36.425233 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35227f0d-ff5a-4c4c-a160-35a7743d4ca2" path="/var/lib/kubelet/pods/35227f0d-ff5a-4c4c-a160-35a7743d4ca2/volumes" Feb 16 23:15:36 crc kubenswrapper[4865]: I0216 23:15:36.426017 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3649fa7-8af6-4f1a-9bc1-b5c10d874d4a" path="/var/lib/kubelet/pods/f3649fa7-8af6-4f1a-9bc1-b5c10d874d4a/volumes" Feb 16 23:15:37 crc kubenswrapper[4865]: I0216 23:15:37.040728 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-989f-account-create-update-l8qhk"] Feb 16 23:15:37 crc kubenswrapper[4865]: I0216 23:15:37.055632 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-szrzm"] Feb 16 23:15:37 crc kubenswrapper[4865]: I0216 23:15:37.070144 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-ss76d"] Feb 16 23:15:37 crc kubenswrapper[4865]: I0216 23:15:37.082242 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-api-993e-account-create-update-dfp4n"] Feb 16 23:15:37 crc kubenswrapper[4865]: I0216 23:15:37.092952 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-989f-account-create-update-l8qhk"] Feb 16 23:15:37 crc kubenswrapper[4865]: I0216 23:15:37.103124 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-szrzm"] Feb 16 23:15:37 crc kubenswrapper[4865]: I0216 23:15:37.111607 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-ss76d"] Feb 16 23:15:37 crc kubenswrapper[4865]: I0216 23:15:37.118722 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-993e-account-create-update-dfp4n"] Feb 16 23:15:38 crc kubenswrapper[4865]: I0216 23:15:38.415068 4865 scope.go:117] "RemoveContainer" containerID="5cd852104406ee38cbc84897bbc06d76f939393ce558cd88a802e395e3a4e39b" Feb 16 23:15:38 crc kubenswrapper[4865]: E0216 23:15:38.415855 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:15:38 crc kubenswrapper[4865]: I0216 23:15:38.438322 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e310dc7-ddd2-4a29-97a2-b071095d9966" path="/var/lib/kubelet/pods/0e310dc7-ddd2-4a29-97a2-b071095d9966/volumes" Feb 16 23:15:38 crc kubenswrapper[4865]: I0216 23:15:38.439550 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ca4e99c-e5c8-49f6-bf1b-dd6b73576b54" path="/var/lib/kubelet/pods/5ca4e99c-e5c8-49f6-bf1b-dd6b73576b54/volumes" Feb 16 23:15:38 crc kubenswrapper[4865]: I0216 23:15:38.440461 4865 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adff2e0d-89a3-423c-8e83-a16b64c67a82" path="/var/lib/kubelet/pods/adff2e0d-89a3-423c-8e83-a16b64c67a82/volumes" Feb 16 23:15:38 crc kubenswrapper[4865]: I0216 23:15:38.441422 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d03aa28b-05a9-4123-a616-c1713e81c63c" path="/var/lib/kubelet/pods/d03aa28b-05a9-4123-a616-c1713e81c63c/volumes" Feb 16 23:15:51 crc kubenswrapper[4865]: I0216 23:15:51.415058 4865 scope.go:117] "RemoveContainer" containerID="5cd852104406ee38cbc84897bbc06d76f939393ce558cd88a802e395e3a4e39b" Feb 16 23:15:51 crc kubenswrapper[4865]: E0216 23:15:51.416047 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:16:04 crc kubenswrapper[4865]: I0216 23:16:04.415042 4865 scope.go:117] "RemoveContainer" containerID="5cd852104406ee38cbc84897bbc06d76f939393ce558cd88a802e395e3a4e39b" Feb 16 23:16:04 crc kubenswrapper[4865]: E0216 23:16:04.415933 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:16:04 crc kubenswrapper[4865]: I0216 23:16:04.860813 4865 scope.go:117] "RemoveContainer" containerID="731001ba5fc6d780f7537d9c40d65cd0e160502dce37324e02c0aa3b30036b57" Feb 16 23:16:04 crc kubenswrapper[4865]: 
I0216 23:16:04.896313 4865 scope.go:117] "RemoveContainer" containerID="8c1b8e6a1d1f0a28fbae98382c6971880cf44620c0e3fc79dde5ca84de2d728b" Feb 16 23:16:04 crc kubenswrapper[4865]: I0216 23:16:04.978595 4865 scope.go:117] "RemoveContainer" containerID="19453e668301688ebe1519413da8fe166ac42e2b4ed73371e826413d7617e86c" Feb 16 23:16:05 crc kubenswrapper[4865]: I0216 23:16:05.014493 4865 scope.go:117] "RemoveContainer" containerID="94db81abead0c9d2365141696b1ef28cd4daba518c77da221a90a97fd98fd3df" Feb 16 23:16:05 crc kubenswrapper[4865]: I0216 23:16:05.050467 4865 scope.go:117] "RemoveContainer" containerID="bc1d544410058ad3864770b8bb36697871ea409cb09ca09067eb2b1e5b265782" Feb 16 23:16:05 crc kubenswrapper[4865]: I0216 23:16:05.101836 4865 scope.go:117] "RemoveContainer" containerID="fc3c3b541452931c7597b37d3c344fc52e8df0db452a421e657423d6cb9a1682" Feb 16 23:16:06 crc kubenswrapper[4865]: I0216 23:16:06.070104 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-rm49l"] Feb 16 23:16:06 crc kubenswrapper[4865]: I0216 23:16:06.086585 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-rm49l"] Feb 16 23:16:06 crc kubenswrapper[4865]: I0216 23:16:06.432364 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79d1006d-8c29-497a-8957-91fc74d71fe8" path="/var/lib/kubelet/pods/79d1006d-8c29-497a-8957-91fc74d71fe8/volumes" Feb 16 23:16:16 crc kubenswrapper[4865]: I0216 23:16:16.415216 4865 scope.go:117] "RemoveContainer" containerID="5cd852104406ee38cbc84897bbc06d76f939393ce558cd88a802e395e3a4e39b" Feb 16 23:16:16 crc kubenswrapper[4865]: E0216 23:16:16.416701 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:16:25 crc kubenswrapper[4865]: I0216 23:16:25.319673 4865 generic.go:334] "Generic (PLEG): container finished" podID="058417d9-13ea-48ba-8bf8-2cdf141c94b6" containerID="8e602c996065cb02f1c0fd7b6a8b25df765e1f827f62a0b0e34e13fc850fad1d" exitCode=0 Feb 16 23:16:25 crc kubenswrapper[4865]: I0216 23:16:25.319761 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-d4hv6" event={"ID":"058417d9-13ea-48ba-8bf8-2cdf141c94b6","Type":"ContainerDied","Data":"8e602c996065cb02f1c0fd7b6a8b25df765e1f827f62a0b0e34e13fc850fad1d"} Feb 16 23:16:26 crc kubenswrapper[4865]: I0216 23:16:26.825895 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-d4hv6" Feb 16 23:16:26 crc kubenswrapper[4865]: I0216 23:16:26.922299 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/058417d9-13ea-48ba-8bf8-2cdf141c94b6-inventory\") pod \"058417d9-13ea-48ba-8bf8-2cdf141c94b6\" (UID: \"058417d9-13ea-48ba-8bf8-2cdf141c94b6\") " Feb 16 23:16:26 crc kubenswrapper[4865]: I0216 23:16:26.922552 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72ln8\" (UniqueName: \"kubernetes.io/projected/058417d9-13ea-48ba-8bf8-2cdf141c94b6-kube-api-access-72ln8\") pod \"058417d9-13ea-48ba-8bf8-2cdf141c94b6\" (UID: \"058417d9-13ea-48ba-8bf8-2cdf141c94b6\") " Feb 16 23:16:26 crc kubenswrapper[4865]: I0216 23:16:26.922631 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/058417d9-13ea-48ba-8bf8-2cdf141c94b6-ssh-key-openstack-edpm-ipam\") pod \"058417d9-13ea-48ba-8bf8-2cdf141c94b6\" (UID: \"058417d9-13ea-48ba-8bf8-2cdf141c94b6\") " Feb 16 23:16:26 crc kubenswrapper[4865]: I0216 23:16:26.928832 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/058417d9-13ea-48ba-8bf8-2cdf141c94b6-kube-api-access-72ln8" (OuterVolumeSpecName: "kube-api-access-72ln8") pod "058417d9-13ea-48ba-8bf8-2cdf141c94b6" (UID: "058417d9-13ea-48ba-8bf8-2cdf141c94b6"). InnerVolumeSpecName "kube-api-access-72ln8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:16:26 crc kubenswrapper[4865]: I0216 23:16:26.949800 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/058417d9-13ea-48ba-8bf8-2cdf141c94b6-inventory" (OuterVolumeSpecName: "inventory") pod "058417d9-13ea-48ba-8bf8-2cdf141c94b6" (UID: "058417d9-13ea-48ba-8bf8-2cdf141c94b6"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:16:26 crc kubenswrapper[4865]: I0216 23:16:26.962980 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/058417d9-13ea-48ba-8bf8-2cdf141c94b6-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "058417d9-13ea-48ba-8bf8-2cdf141c94b6" (UID: "058417d9-13ea-48ba-8bf8-2cdf141c94b6"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:16:27 crc kubenswrapper[4865]: I0216 23:16:27.025113 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72ln8\" (UniqueName: \"kubernetes.io/projected/058417d9-13ea-48ba-8bf8-2cdf141c94b6-kube-api-access-72ln8\") on node \"crc\" DevicePath \"\"" Feb 16 23:16:27 crc kubenswrapper[4865]: I0216 23:16:27.025167 4865 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/058417d9-13ea-48ba-8bf8-2cdf141c94b6-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 23:16:27 crc kubenswrapper[4865]: I0216 23:16:27.025186 4865 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/058417d9-13ea-48ba-8bf8-2cdf141c94b6-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 23:16:27 crc kubenswrapper[4865]: I0216 23:16:27.346726 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-d4hv6" event={"ID":"058417d9-13ea-48ba-8bf8-2cdf141c94b6","Type":"ContainerDied","Data":"590458afea33460e6bf09836f8483c6432e2ab80eb6a5c403b4d2b15cbe130d3"} Feb 16 23:16:27 crc kubenswrapper[4865]: I0216 23:16:27.346791 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="590458afea33460e6bf09836f8483c6432e2ab80eb6a5c403b4d2b15cbe130d3" Feb 16 23:16:27 crc kubenswrapper[4865]: I0216 23:16:27.346823 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-d4hv6" Feb 16 23:16:27 crc kubenswrapper[4865]: I0216 23:16:27.443766 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-wdfb7"] Feb 16 23:16:27 crc kubenswrapper[4865]: E0216 23:16:27.444264 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="058417d9-13ea-48ba-8bf8-2cdf141c94b6" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 16 23:16:27 crc kubenswrapper[4865]: I0216 23:16:27.444305 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="058417d9-13ea-48ba-8bf8-2cdf141c94b6" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 16 23:16:27 crc kubenswrapper[4865]: I0216 23:16:27.444599 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="058417d9-13ea-48ba-8bf8-2cdf141c94b6" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 16 23:16:27 crc kubenswrapper[4865]: I0216 23:16:27.445422 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-wdfb7" Feb 16 23:16:27 crc kubenswrapper[4865]: I0216 23:16:27.448501 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rpk98" Feb 16 23:16:27 crc kubenswrapper[4865]: I0216 23:16:27.449402 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 23:16:27 crc kubenswrapper[4865]: I0216 23:16:27.449844 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 23:16:27 crc kubenswrapper[4865]: I0216 23:16:27.450111 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 23:16:27 crc kubenswrapper[4865]: I0216 23:16:27.451863 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-wdfb7"] Feb 16 23:16:27 crc kubenswrapper[4865]: I0216 23:16:27.536078 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/15b21ee4-d297-4297-9752-c0642717510e-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-wdfb7\" (UID: \"15b21ee4-d297-4297-9752-c0642717510e\") " pod="openstack/ssh-known-hosts-edpm-deployment-wdfb7" Feb 16 23:16:27 crc kubenswrapper[4865]: I0216 23:16:27.536297 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgw7s\" (UniqueName: \"kubernetes.io/projected/15b21ee4-d297-4297-9752-c0642717510e-kube-api-access-jgw7s\") pod \"ssh-known-hosts-edpm-deployment-wdfb7\" (UID: \"15b21ee4-d297-4297-9752-c0642717510e\") " pod="openstack/ssh-known-hosts-edpm-deployment-wdfb7" Feb 16 23:16:27 crc kubenswrapper[4865]: I0216 23:16:27.536374 4865 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/15b21ee4-d297-4297-9752-c0642717510e-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-wdfb7\" (UID: \"15b21ee4-d297-4297-9752-c0642717510e\") " pod="openstack/ssh-known-hosts-edpm-deployment-wdfb7" Feb 16 23:16:27 crc kubenswrapper[4865]: I0216 23:16:27.638310 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgw7s\" (UniqueName: \"kubernetes.io/projected/15b21ee4-d297-4297-9752-c0642717510e-kube-api-access-jgw7s\") pod \"ssh-known-hosts-edpm-deployment-wdfb7\" (UID: \"15b21ee4-d297-4297-9752-c0642717510e\") " pod="openstack/ssh-known-hosts-edpm-deployment-wdfb7" Feb 16 23:16:27 crc kubenswrapper[4865]: I0216 23:16:27.638449 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/15b21ee4-d297-4297-9752-c0642717510e-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-wdfb7\" (UID: \"15b21ee4-d297-4297-9752-c0642717510e\") " pod="openstack/ssh-known-hosts-edpm-deployment-wdfb7" Feb 16 23:16:27 crc kubenswrapper[4865]: I0216 23:16:27.638578 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/15b21ee4-d297-4297-9752-c0642717510e-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-wdfb7\" (UID: \"15b21ee4-d297-4297-9752-c0642717510e\") " pod="openstack/ssh-known-hosts-edpm-deployment-wdfb7" Feb 16 23:16:27 crc kubenswrapper[4865]: I0216 23:16:27.642697 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/15b21ee4-d297-4297-9752-c0642717510e-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-wdfb7\" (UID: \"15b21ee4-d297-4297-9752-c0642717510e\") " pod="openstack/ssh-known-hosts-edpm-deployment-wdfb7" Feb 
16 23:16:27 crc kubenswrapper[4865]: I0216 23:16:27.648346 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/15b21ee4-d297-4297-9752-c0642717510e-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-wdfb7\" (UID: \"15b21ee4-d297-4297-9752-c0642717510e\") " pod="openstack/ssh-known-hosts-edpm-deployment-wdfb7" Feb 16 23:16:27 crc kubenswrapper[4865]: I0216 23:16:27.659267 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgw7s\" (UniqueName: \"kubernetes.io/projected/15b21ee4-d297-4297-9752-c0642717510e-kube-api-access-jgw7s\") pod \"ssh-known-hosts-edpm-deployment-wdfb7\" (UID: \"15b21ee4-d297-4297-9752-c0642717510e\") " pod="openstack/ssh-known-hosts-edpm-deployment-wdfb7" Feb 16 23:16:27 crc kubenswrapper[4865]: I0216 23:16:27.773990 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-wdfb7" Feb 16 23:16:28 crc kubenswrapper[4865]: I0216 23:16:28.370323 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-wdfb7"] Feb 16 23:16:29 crc kubenswrapper[4865]: I0216 23:16:29.086785 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-jzjtf"] Feb 16 23:16:29 crc kubenswrapper[4865]: I0216 23:16:29.097371 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-7lw7w"] Feb 16 23:16:29 crc kubenswrapper[4865]: I0216 23:16:29.107880 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-jzjtf"] Feb 16 23:16:29 crc kubenswrapper[4865]: I0216 23:16:29.117906 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-7lw7w"] Feb 16 23:16:29 crc kubenswrapper[4865]: I0216 23:16:29.372783 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ssh-known-hosts-edpm-deployment-wdfb7" event={"ID":"15b21ee4-d297-4297-9752-c0642717510e","Type":"ContainerStarted","Data":"698587e480e12a64717db2cc834b1fc8d7404d8976eed717bc3b1f0c82b95b1e"} Feb 16 23:16:29 crc kubenswrapper[4865]: I0216 23:16:29.372827 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-wdfb7" event={"ID":"15b21ee4-d297-4297-9752-c0642717510e","Type":"ContainerStarted","Data":"d53b9c88549ad85903f168e030f7c579e41d2f7d0cecd6571a6bda7c0e3dbcd5"} Feb 16 23:16:29 crc kubenswrapper[4865]: I0216 23:16:29.411291 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-wdfb7" podStartSLOduration=1.890845761 podStartE2EDuration="2.411249783s" podCreationTimestamp="2026-02-16 23:16:27 +0000 UTC" firstStartedPulling="2026-02-16 23:16:28.380031871 +0000 UTC m=+1828.703738842" lastFinishedPulling="2026-02-16 23:16:28.900435893 +0000 UTC m=+1829.224142864" observedRunningTime="2026-02-16 23:16:29.405543312 +0000 UTC m=+1829.729250283" watchObservedRunningTime="2026-02-16 23:16:29.411249783 +0000 UTC m=+1829.734956754" Feb 16 23:16:30 crc kubenswrapper[4865]: I0216 23:16:30.437070 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5a7bbbd-389f-49cb-b8b4-4a54280d034a" path="/var/lib/kubelet/pods/e5a7bbbd-389f-49cb-b8b4-4a54280d034a/volumes" Feb 16 23:16:30 crc kubenswrapper[4865]: I0216 23:16:30.438272 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee0920ef-18c3-4e01-b206-08b31472078a" path="/var/lib/kubelet/pods/ee0920ef-18c3-4e01-b206-08b31472078a/volumes" Feb 16 23:16:31 crc kubenswrapper[4865]: I0216 23:16:31.414233 4865 scope.go:117] "RemoveContainer" containerID="5cd852104406ee38cbc84897bbc06d76f939393ce558cd88a802e395e3a4e39b" Feb 16 23:16:31 crc kubenswrapper[4865]: E0216 23:16:31.414896 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:16:36 crc kubenswrapper[4865]: I0216 23:16:36.452519 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-wdfb7" event={"ID":"15b21ee4-d297-4297-9752-c0642717510e","Type":"ContainerDied","Data":"698587e480e12a64717db2cc834b1fc8d7404d8976eed717bc3b1f0c82b95b1e"} Feb 16 23:16:36 crc kubenswrapper[4865]: I0216 23:16:36.452459 4865 generic.go:334] "Generic (PLEG): container finished" podID="15b21ee4-d297-4297-9752-c0642717510e" containerID="698587e480e12a64717db2cc834b1fc8d7404d8976eed717bc3b1f0c82b95b1e" exitCode=0 Feb 16 23:16:37 crc kubenswrapper[4865]: I0216 23:16:37.945229 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-wdfb7" Feb 16 23:16:38 crc kubenswrapper[4865]: I0216 23:16:38.088071 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jgw7s\" (UniqueName: \"kubernetes.io/projected/15b21ee4-d297-4297-9752-c0642717510e-kube-api-access-jgw7s\") pod \"15b21ee4-d297-4297-9752-c0642717510e\" (UID: \"15b21ee4-d297-4297-9752-c0642717510e\") " Feb 16 23:16:38 crc kubenswrapper[4865]: I0216 23:16:38.088187 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/15b21ee4-d297-4297-9752-c0642717510e-inventory-0\") pod \"15b21ee4-d297-4297-9752-c0642717510e\" (UID: \"15b21ee4-d297-4297-9752-c0642717510e\") " Feb 16 23:16:38 crc kubenswrapper[4865]: I0216 23:16:38.088371 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/15b21ee4-d297-4297-9752-c0642717510e-ssh-key-openstack-edpm-ipam\") pod \"15b21ee4-d297-4297-9752-c0642717510e\" (UID: \"15b21ee4-d297-4297-9752-c0642717510e\") " Feb 16 23:16:38 crc kubenswrapper[4865]: I0216 23:16:38.097274 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15b21ee4-d297-4297-9752-c0642717510e-kube-api-access-jgw7s" (OuterVolumeSpecName: "kube-api-access-jgw7s") pod "15b21ee4-d297-4297-9752-c0642717510e" (UID: "15b21ee4-d297-4297-9752-c0642717510e"). InnerVolumeSpecName "kube-api-access-jgw7s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:16:38 crc kubenswrapper[4865]: I0216 23:16:38.139643 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15b21ee4-d297-4297-9752-c0642717510e-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "15b21ee4-d297-4297-9752-c0642717510e" (UID: "15b21ee4-d297-4297-9752-c0642717510e"). 
InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:16:38 crc kubenswrapper[4865]: I0216 23:16:38.139765 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15b21ee4-d297-4297-9752-c0642717510e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "15b21ee4-d297-4297-9752-c0642717510e" (UID: "15b21ee4-d297-4297-9752-c0642717510e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:16:38 crc kubenswrapper[4865]: I0216 23:16:38.190709 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jgw7s\" (UniqueName: \"kubernetes.io/projected/15b21ee4-d297-4297-9752-c0642717510e-kube-api-access-jgw7s\") on node \"crc\" DevicePath \"\"" Feb 16 23:16:38 crc kubenswrapper[4865]: I0216 23:16:38.190960 4865 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/15b21ee4-d297-4297-9752-c0642717510e-inventory-0\") on node \"crc\" DevicePath \"\"" Feb 16 23:16:38 crc kubenswrapper[4865]: I0216 23:16:38.191049 4865 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/15b21ee4-d297-4297-9752-c0642717510e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 23:16:38 crc kubenswrapper[4865]: I0216 23:16:38.478456 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-wdfb7" event={"ID":"15b21ee4-d297-4297-9752-c0642717510e","Type":"ContainerDied","Data":"d53b9c88549ad85903f168e030f7c579e41d2f7d0cecd6571a6bda7c0e3dbcd5"} Feb 16 23:16:38 crc kubenswrapper[4865]: I0216 23:16:38.478514 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d53b9c88549ad85903f168e030f7c579e41d2f7d0cecd6571a6bda7c0e3dbcd5" Feb 16 23:16:38 crc kubenswrapper[4865]: I0216 23:16:38.478513 
4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-wdfb7" Feb 16 23:16:38 crc kubenswrapper[4865]: I0216 23:16:38.567119 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-lwkkt"] Feb 16 23:16:38 crc kubenswrapper[4865]: E0216 23:16:38.567647 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15b21ee4-d297-4297-9752-c0642717510e" containerName="ssh-known-hosts-edpm-deployment" Feb 16 23:16:38 crc kubenswrapper[4865]: I0216 23:16:38.567670 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="15b21ee4-d297-4297-9752-c0642717510e" containerName="ssh-known-hosts-edpm-deployment" Feb 16 23:16:38 crc kubenswrapper[4865]: I0216 23:16:38.567950 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="15b21ee4-d297-4297-9752-c0642717510e" containerName="ssh-known-hosts-edpm-deployment" Feb 16 23:16:38 crc kubenswrapper[4865]: I0216 23:16:38.568729 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lwkkt" Feb 16 23:16:38 crc kubenswrapper[4865]: I0216 23:16:38.578900 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rpk98" Feb 16 23:16:38 crc kubenswrapper[4865]: I0216 23:16:38.579028 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 23:16:38 crc kubenswrapper[4865]: I0216 23:16:38.579135 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 23:16:38 crc kubenswrapper[4865]: I0216 23:16:38.579248 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 23:16:38 crc kubenswrapper[4865]: I0216 23:16:38.586210 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-lwkkt"] Feb 16 23:16:38 crc kubenswrapper[4865]: I0216 23:16:38.701589 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/86d1001f-6633-4b05-8a8f-cee820027d08-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lwkkt\" (UID: \"86d1001f-6633-4b05-8a8f-cee820027d08\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lwkkt" Feb 16 23:16:38 crc kubenswrapper[4865]: I0216 23:16:38.701982 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4pbs\" (UniqueName: \"kubernetes.io/projected/86d1001f-6633-4b05-8a8f-cee820027d08-kube-api-access-v4pbs\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lwkkt\" (UID: \"86d1001f-6633-4b05-8a8f-cee820027d08\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lwkkt" Feb 16 23:16:38 crc kubenswrapper[4865]: I0216 23:16:38.702090 4865 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/86d1001f-6633-4b05-8a8f-cee820027d08-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lwkkt\" (UID: \"86d1001f-6633-4b05-8a8f-cee820027d08\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lwkkt" Feb 16 23:16:38 crc kubenswrapper[4865]: I0216 23:16:38.804824 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/86d1001f-6633-4b05-8a8f-cee820027d08-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lwkkt\" (UID: \"86d1001f-6633-4b05-8a8f-cee820027d08\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lwkkt" Feb 16 23:16:38 crc kubenswrapper[4865]: I0216 23:16:38.804908 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4pbs\" (UniqueName: \"kubernetes.io/projected/86d1001f-6633-4b05-8a8f-cee820027d08-kube-api-access-v4pbs\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lwkkt\" (UID: \"86d1001f-6633-4b05-8a8f-cee820027d08\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lwkkt" Feb 16 23:16:38 crc kubenswrapper[4865]: I0216 23:16:38.804987 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/86d1001f-6633-4b05-8a8f-cee820027d08-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lwkkt\" (UID: \"86d1001f-6633-4b05-8a8f-cee820027d08\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lwkkt" Feb 16 23:16:38 crc kubenswrapper[4865]: I0216 23:16:38.811114 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/86d1001f-6633-4b05-8a8f-cee820027d08-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lwkkt\" (UID: 
\"86d1001f-6633-4b05-8a8f-cee820027d08\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lwkkt" Feb 16 23:16:38 crc kubenswrapper[4865]: I0216 23:16:38.811330 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/86d1001f-6633-4b05-8a8f-cee820027d08-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lwkkt\" (UID: \"86d1001f-6633-4b05-8a8f-cee820027d08\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lwkkt" Feb 16 23:16:38 crc kubenswrapper[4865]: I0216 23:16:38.821992 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4pbs\" (UniqueName: \"kubernetes.io/projected/86d1001f-6633-4b05-8a8f-cee820027d08-kube-api-access-v4pbs\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lwkkt\" (UID: \"86d1001f-6633-4b05-8a8f-cee820027d08\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lwkkt" Feb 16 23:16:38 crc kubenswrapper[4865]: I0216 23:16:38.898206 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lwkkt" Feb 16 23:16:39 crc kubenswrapper[4865]: I0216 23:16:39.472709 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-lwkkt"] Feb 16 23:16:40 crc kubenswrapper[4865]: I0216 23:16:40.495776 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lwkkt" event={"ID":"86d1001f-6633-4b05-8a8f-cee820027d08","Type":"ContainerStarted","Data":"40f324f7613a0757ba2ddacd9b92e5ba6fca3deb69e7fc604c73ab4d997c0221"} Feb 16 23:16:40 crc kubenswrapper[4865]: I0216 23:16:40.497045 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lwkkt" event={"ID":"86d1001f-6633-4b05-8a8f-cee820027d08","Type":"ContainerStarted","Data":"49e215a8de69e483ecc73f396427d60c708e2609ae815d83df3ea268873d6211"} Feb 16 23:16:40 crc kubenswrapper[4865]: I0216 23:16:40.518668 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lwkkt" podStartSLOduration=2.10409211 podStartE2EDuration="2.518644125s" podCreationTimestamp="2026-02-16 23:16:38 +0000 UTC" firstStartedPulling="2026-02-16 23:16:39.492372173 +0000 UTC m=+1839.816079134" lastFinishedPulling="2026-02-16 23:16:39.906924178 +0000 UTC m=+1840.230631149" observedRunningTime="2026-02-16 23:16:40.510478934 +0000 UTC m=+1840.834185915" watchObservedRunningTime="2026-02-16 23:16:40.518644125 +0000 UTC m=+1840.842351086" Feb 16 23:16:46 crc kubenswrapper[4865]: I0216 23:16:46.415211 4865 scope.go:117] "RemoveContainer" containerID="5cd852104406ee38cbc84897bbc06d76f939393ce558cd88a802e395e3a4e39b" Feb 16 23:16:47 crc kubenswrapper[4865]: I0216 23:16:47.567359 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" 
event={"ID":"af5ee041-5763-4a28-9d12-7ba21bbb9dbc","Type":"ContainerStarted","Data":"cfc7e0a224027a3ada4639b28cde285b197c36a977cd8811f5bce491cbea6a59"} Feb 16 23:16:48 crc kubenswrapper[4865]: I0216 23:16:48.594787 4865 generic.go:334] "Generic (PLEG): container finished" podID="86d1001f-6633-4b05-8a8f-cee820027d08" containerID="40f324f7613a0757ba2ddacd9b92e5ba6fca3deb69e7fc604c73ab4d997c0221" exitCode=0 Feb 16 23:16:48 crc kubenswrapper[4865]: I0216 23:16:48.595182 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lwkkt" event={"ID":"86d1001f-6633-4b05-8a8f-cee820027d08","Type":"ContainerDied","Data":"40f324f7613a0757ba2ddacd9b92e5ba6fca3deb69e7fc604c73ab4d997c0221"} Feb 16 23:16:50 crc kubenswrapper[4865]: I0216 23:16:50.067240 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lwkkt" Feb 16 23:16:50 crc kubenswrapper[4865]: I0216 23:16:50.167883 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4pbs\" (UniqueName: \"kubernetes.io/projected/86d1001f-6633-4b05-8a8f-cee820027d08-kube-api-access-v4pbs\") pod \"86d1001f-6633-4b05-8a8f-cee820027d08\" (UID: \"86d1001f-6633-4b05-8a8f-cee820027d08\") " Feb 16 23:16:50 crc kubenswrapper[4865]: I0216 23:16:50.167996 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/86d1001f-6633-4b05-8a8f-cee820027d08-ssh-key-openstack-edpm-ipam\") pod \"86d1001f-6633-4b05-8a8f-cee820027d08\" (UID: \"86d1001f-6633-4b05-8a8f-cee820027d08\") " Feb 16 23:16:50 crc kubenswrapper[4865]: I0216 23:16:50.168136 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/86d1001f-6633-4b05-8a8f-cee820027d08-inventory\") pod \"86d1001f-6633-4b05-8a8f-cee820027d08\" (UID: 
\"86d1001f-6633-4b05-8a8f-cee820027d08\") " Feb 16 23:16:50 crc kubenswrapper[4865]: I0216 23:16:50.174033 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86d1001f-6633-4b05-8a8f-cee820027d08-kube-api-access-v4pbs" (OuterVolumeSpecName: "kube-api-access-v4pbs") pod "86d1001f-6633-4b05-8a8f-cee820027d08" (UID: "86d1001f-6633-4b05-8a8f-cee820027d08"). InnerVolumeSpecName "kube-api-access-v4pbs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:16:50 crc kubenswrapper[4865]: I0216 23:16:50.200490 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86d1001f-6633-4b05-8a8f-cee820027d08-inventory" (OuterVolumeSpecName: "inventory") pod "86d1001f-6633-4b05-8a8f-cee820027d08" (UID: "86d1001f-6633-4b05-8a8f-cee820027d08"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:16:50 crc kubenswrapper[4865]: I0216 23:16:50.211377 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86d1001f-6633-4b05-8a8f-cee820027d08-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "86d1001f-6633-4b05-8a8f-cee820027d08" (UID: "86d1001f-6633-4b05-8a8f-cee820027d08"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:16:50 crc kubenswrapper[4865]: I0216 23:16:50.270658 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4pbs\" (UniqueName: \"kubernetes.io/projected/86d1001f-6633-4b05-8a8f-cee820027d08-kube-api-access-v4pbs\") on node \"crc\" DevicePath \"\"" Feb 16 23:16:50 crc kubenswrapper[4865]: I0216 23:16:50.270705 4865 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/86d1001f-6633-4b05-8a8f-cee820027d08-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 23:16:50 crc kubenswrapper[4865]: I0216 23:16:50.270720 4865 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/86d1001f-6633-4b05-8a8f-cee820027d08-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 23:16:50 crc kubenswrapper[4865]: I0216 23:16:50.618378 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lwkkt" event={"ID":"86d1001f-6633-4b05-8a8f-cee820027d08","Type":"ContainerDied","Data":"49e215a8de69e483ecc73f396427d60c708e2609ae815d83df3ea268873d6211"} Feb 16 23:16:50 crc kubenswrapper[4865]: I0216 23:16:50.618441 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49e215a8de69e483ecc73f396427d60c708e2609ae815d83df3ea268873d6211" Feb 16 23:16:50 crc kubenswrapper[4865]: I0216 23:16:50.618502 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lwkkt" Feb 16 23:16:50 crc kubenswrapper[4865]: I0216 23:16:50.706139 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-s867f"] Feb 16 23:16:50 crc kubenswrapper[4865]: E0216 23:16:50.706537 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86d1001f-6633-4b05-8a8f-cee820027d08" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 16 23:16:50 crc kubenswrapper[4865]: I0216 23:16:50.706558 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="86d1001f-6633-4b05-8a8f-cee820027d08" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 16 23:16:50 crc kubenswrapper[4865]: I0216 23:16:50.706751 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="86d1001f-6633-4b05-8a8f-cee820027d08" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 16 23:16:50 crc kubenswrapper[4865]: I0216 23:16:50.707359 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-s867f" Feb 16 23:16:50 crc kubenswrapper[4865]: I0216 23:16:50.712354 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 23:16:50 crc kubenswrapper[4865]: I0216 23:16:50.712626 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 23:16:50 crc kubenswrapper[4865]: I0216 23:16:50.712785 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 23:16:50 crc kubenswrapper[4865]: I0216 23:16:50.714427 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rpk98" Feb 16 23:16:50 crc kubenswrapper[4865]: I0216 23:16:50.733934 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-s867f"] Feb 16 23:16:50 crc kubenswrapper[4865]: I0216 23:16:50.781630 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9b4w\" (UniqueName: \"kubernetes.io/projected/c2f56f0d-1a38-4756-b13f-e961a66b7594-kube-api-access-h9b4w\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-s867f\" (UID: \"c2f56f0d-1a38-4756-b13f-e961a66b7594\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-s867f" Feb 16 23:16:50 crc kubenswrapper[4865]: I0216 23:16:50.781748 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c2f56f0d-1a38-4756-b13f-e961a66b7594-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-s867f\" (UID: \"c2f56f0d-1a38-4756-b13f-e961a66b7594\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-s867f" Feb 16 23:16:50 crc kubenswrapper[4865]: I0216 
23:16:50.781820 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c2f56f0d-1a38-4756-b13f-e961a66b7594-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-s867f\" (UID: \"c2f56f0d-1a38-4756-b13f-e961a66b7594\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-s867f" Feb 16 23:16:50 crc kubenswrapper[4865]: I0216 23:16:50.884591 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9b4w\" (UniqueName: \"kubernetes.io/projected/c2f56f0d-1a38-4756-b13f-e961a66b7594-kube-api-access-h9b4w\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-s867f\" (UID: \"c2f56f0d-1a38-4756-b13f-e961a66b7594\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-s867f" Feb 16 23:16:50 crc kubenswrapper[4865]: I0216 23:16:50.885195 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c2f56f0d-1a38-4756-b13f-e961a66b7594-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-s867f\" (UID: \"c2f56f0d-1a38-4756-b13f-e961a66b7594\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-s867f" Feb 16 23:16:50 crc kubenswrapper[4865]: I0216 23:16:50.885377 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c2f56f0d-1a38-4756-b13f-e961a66b7594-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-s867f\" (UID: \"c2f56f0d-1a38-4756-b13f-e961a66b7594\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-s867f" Feb 16 23:16:50 crc kubenswrapper[4865]: I0216 23:16:50.891041 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c2f56f0d-1a38-4756-b13f-e961a66b7594-inventory\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-s867f\" (UID: \"c2f56f0d-1a38-4756-b13f-e961a66b7594\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-s867f" Feb 16 23:16:50 crc kubenswrapper[4865]: I0216 23:16:50.892652 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c2f56f0d-1a38-4756-b13f-e961a66b7594-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-s867f\" (UID: \"c2f56f0d-1a38-4756-b13f-e961a66b7594\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-s867f" Feb 16 23:16:50 crc kubenswrapper[4865]: I0216 23:16:50.906164 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9b4w\" (UniqueName: \"kubernetes.io/projected/c2f56f0d-1a38-4756-b13f-e961a66b7594-kube-api-access-h9b4w\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-s867f\" (UID: \"c2f56f0d-1a38-4756-b13f-e961a66b7594\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-s867f" Feb 16 23:16:51 crc kubenswrapper[4865]: I0216 23:16:51.030589 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-s867f" Feb 16 23:16:51 crc kubenswrapper[4865]: I0216 23:16:51.586506 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-s867f"] Feb 16 23:16:51 crc kubenswrapper[4865]: I0216 23:16:51.634667 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-s867f" event={"ID":"c2f56f0d-1a38-4756-b13f-e961a66b7594","Type":"ContainerStarted","Data":"bd98e0b2179d9642581ce1ef7cd0718eee9c17ce89848b7ed11768e6e964c81a"} Feb 16 23:16:52 crc kubenswrapper[4865]: I0216 23:16:52.646424 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-s867f" event={"ID":"c2f56f0d-1a38-4756-b13f-e961a66b7594","Type":"ContainerStarted","Data":"5e4f9dc07830b9daa592eb09001fc92f435ec8beea16eab78b99ac9c66d18202"} Feb 16 23:16:52 crc kubenswrapper[4865]: I0216 23:16:52.667695 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-s867f" podStartSLOduration=2.186303347 podStartE2EDuration="2.667669174s" podCreationTimestamp="2026-02-16 23:16:50 +0000 UTC" firstStartedPulling="2026-02-16 23:16:51.595267867 +0000 UTC m=+1851.918974848" lastFinishedPulling="2026-02-16 23:16:52.076633704 +0000 UTC m=+1852.400340675" observedRunningTime="2026-02-16 23:16:52.665006019 +0000 UTC m=+1852.988713010" watchObservedRunningTime="2026-02-16 23:16:52.667669174 +0000 UTC m=+1852.991376145" Feb 16 23:17:02 crc kubenswrapper[4865]: I0216 23:17:02.764489 4865 generic.go:334] "Generic (PLEG): container finished" podID="c2f56f0d-1a38-4756-b13f-e961a66b7594" containerID="5e4f9dc07830b9daa592eb09001fc92f435ec8beea16eab78b99ac9c66d18202" exitCode=0 Feb 16 23:17:02 crc kubenswrapper[4865]: I0216 23:17:02.764555 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-s867f" event={"ID":"c2f56f0d-1a38-4756-b13f-e961a66b7594","Type":"ContainerDied","Data":"5e4f9dc07830b9daa592eb09001fc92f435ec8beea16eab78b99ac9c66d18202"} Feb 16 23:17:04 crc kubenswrapper[4865]: I0216 23:17:04.268467 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-s867f" Feb 16 23:17:04 crc kubenswrapper[4865]: I0216 23:17:04.393987 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c2f56f0d-1a38-4756-b13f-e961a66b7594-ssh-key-openstack-edpm-ipam\") pod \"c2f56f0d-1a38-4756-b13f-e961a66b7594\" (UID: \"c2f56f0d-1a38-4756-b13f-e961a66b7594\") " Feb 16 23:17:04 crc kubenswrapper[4865]: I0216 23:17:04.394090 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9b4w\" (UniqueName: \"kubernetes.io/projected/c2f56f0d-1a38-4756-b13f-e961a66b7594-kube-api-access-h9b4w\") pod \"c2f56f0d-1a38-4756-b13f-e961a66b7594\" (UID: \"c2f56f0d-1a38-4756-b13f-e961a66b7594\") " Feb 16 23:17:04 crc kubenswrapper[4865]: I0216 23:17:04.394176 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c2f56f0d-1a38-4756-b13f-e961a66b7594-inventory\") pod \"c2f56f0d-1a38-4756-b13f-e961a66b7594\" (UID: \"c2f56f0d-1a38-4756-b13f-e961a66b7594\") " Feb 16 23:17:04 crc kubenswrapper[4865]: I0216 23:17:04.402079 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2f56f0d-1a38-4756-b13f-e961a66b7594-kube-api-access-h9b4w" (OuterVolumeSpecName: "kube-api-access-h9b4w") pod "c2f56f0d-1a38-4756-b13f-e961a66b7594" (UID: "c2f56f0d-1a38-4756-b13f-e961a66b7594"). InnerVolumeSpecName "kube-api-access-h9b4w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:17:04 crc kubenswrapper[4865]: I0216 23:17:04.437355 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2f56f0d-1a38-4756-b13f-e961a66b7594-inventory" (OuterVolumeSpecName: "inventory") pod "c2f56f0d-1a38-4756-b13f-e961a66b7594" (UID: "c2f56f0d-1a38-4756-b13f-e961a66b7594"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:17:04 crc kubenswrapper[4865]: I0216 23:17:04.465198 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2f56f0d-1a38-4756-b13f-e961a66b7594-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c2f56f0d-1a38-4756-b13f-e961a66b7594" (UID: "c2f56f0d-1a38-4756-b13f-e961a66b7594"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:17:04 crc kubenswrapper[4865]: I0216 23:17:04.497608 4865 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c2f56f0d-1a38-4756-b13f-e961a66b7594-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 23:17:04 crc kubenswrapper[4865]: I0216 23:17:04.497644 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9b4w\" (UniqueName: \"kubernetes.io/projected/c2f56f0d-1a38-4756-b13f-e961a66b7594-kube-api-access-h9b4w\") on node \"crc\" DevicePath \"\"" Feb 16 23:17:04 crc kubenswrapper[4865]: I0216 23:17:04.497658 4865 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c2f56f0d-1a38-4756-b13f-e961a66b7594-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 23:17:04 crc kubenswrapper[4865]: I0216 23:17:04.790444 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-s867f" 
event={"ID":"c2f56f0d-1a38-4756-b13f-e961a66b7594","Type":"ContainerDied","Data":"bd98e0b2179d9642581ce1ef7cd0718eee9c17ce89848b7ed11768e6e964c81a"} Feb 16 23:17:04 crc kubenswrapper[4865]: I0216 23:17:04.790505 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd98e0b2179d9642581ce1ef7cd0718eee9c17ce89848b7ed11768e6e964c81a" Feb 16 23:17:04 crc kubenswrapper[4865]: I0216 23:17:04.790534 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-s867f" Feb 16 23:17:04 crc kubenswrapper[4865]: I0216 23:17:04.936818 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgskk"] Feb 16 23:17:04 crc kubenswrapper[4865]: E0216 23:17:04.937465 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2f56f0d-1a38-4756-b13f-e961a66b7594" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 16 23:17:04 crc kubenswrapper[4865]: I0216 23:17:04.937483 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2f56f0d-1a38-4756-b13f-e961a66b7594" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 16 23:17:04 crc kubenswrapper[4865]: I0216 23:17:04.937682 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2f56f0d-1a38-4756-b13f-e961a66b7594" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 16 23:17:04 crc kubenswrapper[4865]: I0216 23:17:04.938329 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgskk" Feb 16 23:17:04 crc kubenswrapper[4865]: I0216 23:17:04.942190 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 23:17:04 crc kubenswrapper[4865]: I0216 23:17:04.942668 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Feb 16 23:17:04 crc kubenswrapper[4865]: I0216 23:17:04.943229 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Feb 16 23:17:04 crc kubenswrapper[4865]: I0216 23:17:04.943425 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Feb 16 23:17:04 crc kubenswrapper[4865]: I0216 23:17:04.943607 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rpk98" Feb 16 23:17:04 crc kubenswrapper[4865]: I0216 23:17:04.947646 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Feb 16 23:17:04 crc kubenswrapper[4865]: I0216 23:17:04.947945 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 23:17:04 crc kubenswrapper[4865]: I0216 23:17:04.948537 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 23:17:04 crc kubenswrapper[4865]: I0216 23:17:04.964673 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgskk"] Feb 16 23:17:05 crc kubenswrapper[4865]: I0216 23:17:05.111623 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kgskk\" (UID: \"9bd41f0a-9736-4ede-8d1f-5c39bda1db42\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgskk" Feb 16 23:17:05 crc kubenswrapper[4865]: I0216 23:17:05.111663 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kgskk\" (UID: \"9bd41f0a-9736-4ede-8d1f-5c39bda1db42\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgskk" Feb 16 23:17:05 crc kubenswrapper[4865]: I0216 23:17:05.111700 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kgskk\" (UID: \"9bd41f0a-9736-4ede-8d1f-5c39bda1db42\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgskk" Feb 16 23:17:05 crc kubenswrapper[4865]: I0216 23:17:05.111759 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkqgd\" (UniqueName: \"kubernetes.io/projected/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-kube-api-access-jkqgd\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kgskk\" (UID: \"9bd41f0a-9736-4ede-8d1f-5c39bda1db42\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgskk" Feb 16 23:17:05 crc kubenswrapper[4865]: I0216 23:17:05.111788 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kgskk\" (UID: \"9bd41f0a-9736-4ede-8d1f-5c39bda1db42\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgskk" Feb 16 23:17:05 crc kubenswrapper[4865]: I0216 23:17:05.111807 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kgskk\" (UID: \"9bd41f0a-9736-4ede-8d1f-5c39bda1db42\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgskk" Feb 16 23:17:05 crc kubenswrapper[4865]: I0216 23:17:05.111843 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kgskk\" (UID: \"9bd41f0a-9736-4ede-8d1f-5c39bda1db42\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgskk" Feb 16 23:17:05 crc kubenswrapper[4865]: I0216 23:17:05.111865 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kgskk\" (UID: \"9bd41f0a-9736-4ede-8d1f-5c39bda1db42\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgskk" Feb 16 23:17:05 crc kubenswrapper[4865]: I0216 23:17:05.111909 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" 
(UniqueName: \"kubernetes.io/projected/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kgskk\" (UID: \"9bd41f0a-9736-4ede-8d1f-5c39bda1db42\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgskk" Feb 16 23:17:05 crc kubenswrapper[4865]: I0216 23:17:05.111933 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kgskk\" (UID: \"9bd41f0a-9736-4ede-8d1f-5c39bda1db42\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgskk" Feb 16 23:17:05 crc kubenswrapper[4865]: I0216 23:17:05.111962 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kgskk\" (UID: \"9bd41f0a-9736-4ede-8d1f-5c39bda1db42\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgskk" Feb 16 23:17:05 crc kubenswrapper[4865]: I0216 23:17:05.111977 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kgskk\" (UID: \"9bd41f0a-9736-4ede-8d1f-5c39bda1db42\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgskk" Feb 16 23:17:05 crc kubenswrapper[4865]: I0216 23:17:05.111996 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kgskk\" (UID: \"9bd41f0a-9736-4ede-8d1f-5c39bda1db42\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgskk" Feb 16 23:17:05 crc kubenswrapper[4865]: I0216 23:17:05.112018 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kgskk\" (UID: \"9bd41f0a-9736-4ede-8d1f-5c39bda1db42\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgskk" Feb 16 23:17:05 crc kubenswrapper[4865]: I0216 23:17:05.213130 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kgskk\" (UID: \"9bd41f0a-9736-4ede-8d1f-5c39bda1db42\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgskk" Feb 16 23:17:05 crc kubenswrapper[4865]: I0216 23:17:05.213214 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kgskk\" (UID: \"9bd41f0a-9736-4ede-8d1f-5c39bda1db42\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgskk" Feb 16 23:17:05 crc kubenswrapper[4865]: I0216 23:17:05.213249 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-nova-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-kgskk\" (UID: \"9bd41f0a-9736-4ede-8d1f-5c39bda1db42\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgskk" Feb 16 23:17:05 crc kubenswrapper[4865]: I0216 23:17:05.213327 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kgskk\" (UID: \"9bd41f0a-9736-4ede-8d1f-5c39bda1db42\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgskk" Feb 16 23:17:05 crc kubenswrapper[4865]: I0216 23:17:05.213438 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkqgd\" (UniqueName: \"kubernetes.io/projected/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-kube-api-access-jkqgd\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kgskk\" (UID: \"9bd41f0a-9736-4ede-8d1f-5c39bda1db42\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgskk" Feb 16 23:17:05 crc kubenswrapper[4865]: I0216 23:17:05.213486 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kgskk\" (UID: \"9bd41f0a-9736-4ede-8d1f-5c39bda1db42\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgskk" Feb 16 23:17:05 crc kubenswrapper[4865]: I0216 23:17:05.213522 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kgskk\" (UID: 
\"9bd41f0a-9736-4ede-8d1f-5c39bda1db42\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgskk" Feb 16 23:17:05 crc kubenswrapper[4865]: I0216 23:17:05.213595 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kgskk\" (UID: \"9bd41f0a-9736-4ede-8d1f-5c39bda1db42\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgskk" Feb 16 23:17:05 crc kubenswrapper[4865]: I0216 23:17:05.213642 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kgskk\" (UID: \"9bd41f0a-9736-4ede-8d1f-5c39bda1db42\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgskk" Feb 16 23:17:05 crc kubenswrapper[4865]: I0216 23:17:05.213729 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kgskk\" (UID: \"9bd41f0a-9736-4ede-8d1f-5c39bda1db42\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgskk" Feb 16 23:17:05 crc kubenswrapper[4865]: I0216 23:17:05.213777 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kgskk\" (UID: \"9bd41f0a-9736-4ede-8d1f-5c39bda1db42\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgskk" Feb 16 23:17:05 crc kubenswrapper[4865]: I0216 23:17:05.213835 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kgskk\" (UID: \"9bd41f0a-9736-4ede-8d1f-5c39bda1db42\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgskk" Feb 16 23:17:05 crc kubenswrapper[4865]: I0216 23:17:05.213869 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kgskk\" (UID: \"9bd41f0a-9736-4ede-8d1f-5c39bda1db42\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgskk" Feb 16 23:17:05 crc kubenswrapper[4865]: I0216 23:17:05.213916 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kgskk\" (UID: \"9bd41f0a-9736-4ede-8d1f-5c39bda1db42\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgskk" Feb 16 23:17:05 crc kubenswrapper[4865]: I0216 23:17:05.217961 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kgskk\" (UID: \"9bd41f0a-9736-4ede-8d1f-5c39bda1db42\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgskk" Feb 16 23:17:05 crc kubenswrapper[4865]: 
I0216 23:17:05.218806 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kgskk\" (UID: \"9bd41f0a-9736-4ede-8d1f-5c39bda1db42\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgskk" Feb 16 23:17:05 crc kubenswrapper[4865]: I0216 23:17:05.219109 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kgskk\" (UID: \"9bd41f0a-9736-4ede-8d1f-5c39bda1db42\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgskk" Feb 16 23:17:05 crc kubenswrapper[4865]: I0216 23:17:05.219259 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kgskk\" (UID: \"9bd41f0a-9736-4ede-8d1f-5c39bda1db42\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgskk" Feb 16 23:17:05 crc kubenswrapper[4865]: I0216 23:17:05.219520 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kgskk\" (UID: \"9bd41f0a-9736-4ede-8d1f-5c39bda1db42\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgskk" Feb 16 23:17:05 crc kubenswrapper[4865]: I0216 23:17:05.221732 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kgskk\" (UID: \"9bd41f0a-9736-4ede-8d1f-5c39bda1db42\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgskk" Feb 16 23:17:05 crc kubenswrapper[4865]: I0216 23:17:05.222819 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kgskk\" (UID: \"9bd41f0a-9736-4ede-8d1f-5c39bda1db42\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgskk" Feb 16 23:17:05 crc kubenswrapper[4865]: I0216 23:17:05.223092 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kgskk\" (UID: \"9bd41f0a-9736-4ede-8d1f-5c39bda1db42\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgskk" Feb 16 23:17:05 crc kubenswrapper[4865]: I0216 23:17:05.223128 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kgskk\" (UID: \"9bd41f0a-9736-4ede-8d1f-5c39bda1db42\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgskk" Feb 16 23:17:05 crc kubenswrapper[4865]: I0216 23:17:05.224048 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-ovn-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-kgskk\" (UID: \"9bd41f0a-9736-4ede-8d1f-5c39bda1db42\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgskk" Feb 16 23:17:05 crc kubenswrapper[4865]: I0216 23:17:05.225542 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kgskk\" (UID: \"9bd41f0a-9736-4ede-8d1f-5c39bda1db42\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgskk" Feb 16 23:17:05 crc kubenswrapper[4865]: I0216 23:17:05.231932 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kgskk\" (UID: \"9bd41f0a-9736-4ede-8d1f-5c39bda1db42\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgskk" Feb 16 23:17:05 crc kubenswrapper[4865]: I0216 23:17:05.238006 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kgskk\" (UID: \"9bd41f0a-9736-4ede-8d1f-5c39bda1db42\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgskk" Feb 16 23:17:05 crc kubenswrapper[4865]: I0216 23:17:05.239714 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkqgd\" (UniqueName: \"kubernetes.io/projected/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-kube-api-access-jkqgd\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kgskk\" (UID: \"9bd41f0a-9736-4ede-8d1f-5c39bda1db42\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgskk" Feb 16 23:17:05 crc kubenswrapper[4865]: I0216 23:17:05.262247 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgskk" Feb 16 23:17:05 crc kubenswrapper[4865]: I0216 23:17:05.284981 4865 scope.go:117] "RemoveContainer" containerID="dc5bb7e8a6208a9d466a5c01af7b26986329345e244aaac034641eef183e7a69" Feb 16 23:17:05 crc kubenswrapper[4865]: I0216 23:17:05.433179 4865 scope.go:117] "RemoveContainer" containerID="a95aea6349f0ab21688e30bd9d52c833a8199feaad2386a6f4b0f78c830877a6" Feb 16 23:17:05 crc kubenswrapper[4865]: I0216 23:17:05.500542 4865 scope.go:117] "RemoveContainer" containerID="cc8be8ea2f4e8243c48e16f613b1c44ac6bb1df9db61237c737caeabf79e098c" Feb 16 23:17:05 crc kubenswrapper[4865]: I0216 23:17:05.906864 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgskk"] Feb 16 23:17:05 crc kubenswrapper[4865]: W0216 23:17:05.911303 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bd41f0a_9736_4ede_8d1f_5c39bda1db42.slice/crio-04a2dda7c6ba6e1812b5504fd9e928ca950982f610b0c8aac547fba611218dee WatchSource:0}: Error finding container 04a2dda7c6ba6e1812b5504fd9e928ca950982f610b0c8aac547fba611218dee: Status 404 returned error can't find the container with id 04a2dda7c6ba6e1812b5504fd9e928ca950982f610b0c8aac547fba611218dee Feb 16 23:17:06 crc kubenswrapper[4865]: I0216 23:17:06.812417 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgskk" event={"ID":"9bd41f0a-9736-4ede-8d1f-5c39bda1db42","Type":"ContainerStarted","Data":"278559daf17dbc7bcc5ec8079ff8a4daaeb07a18d534df602feba62ea5ff8d17"} Feb 16 23:17:06 crc kubenswrapper[4865]: I0216 23:17:06.813545 4865 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgskk" event={"ID":"9bd41f0a-9736-4ede-8d1f-5c39bda1db42","Type":"ContainerStarted","Data":"04a2dda7c6ba6e1812b5504fd9e928ca950982f610b0c8aac547fba611218dee"} Feb 16 23:17:06 crc kubenswrapper[4865]: I0216 23:17:06.840116 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgskk" podStartSLOduration=2.239662615 podStartE2EDuration="2.840097492s" podCreationTimestamp="2026-02-16 23:17:04 +0000 UTC" firstStartedPulling="2026-02-16 23:17:05.915645053 +0000 UTC m=+1866.239352034" lastFinishedPulling="2026-02-16 23:17:06.51607992 +0000 UTC m=+1866.839786911" observedRunningTime="2026-02-16 23:17:06.831997212 +0000 UTC m=+1867.155704183" watchObservedRunningTime="2026-02-16 23:17:06.840097492 +0000 UTC m=+1867.163804453" Feb 16 23:17:13 crc kubenswrapper[4865]: I0216 23:17:13.061508 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-779v4"] Feb 16 23:17:13 crc kubenswrapper[4865]: I0216 23:17:13.074569 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-779v4"] Feb 16 23:17:14 crc kubenswrapper[4865]: I0216 23:17:14.429206 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a147d5a-f8ee-4f4b-aebd-14f86e3547d0" path="/var/lib/kubelet/pods/6a147d5a-f8ee-4f4b-aebd-14f86e3547d0/volumes" Feb 16 23:17:29 crc kubenswrapper[4865]: I0216 23:17:29.750769 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5kfm5"] Feb 16 23:17:29 crc kubenswrapper[4865]: I0216 23:17:29.757742 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5kfm5" Feb 16 23:17:29 crc kubenswrapper[4865]: I0216 23:17:29.773543 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5kfm5"] Feb 16 23:17:29 crc kubenswrapper[4865]: I0216 23:17:29.797001 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sl67z\" (UniqueName: \"kubernetes.io/projected/210b0508-783e-47e0-a374-7b665cf0dadb-kube-api-access-sl67z\") pod \"redhat-operators-5kfm5\" (UID: \"210b0508-783e-47e0-a374-7b665cf0dadb\") " pod="openshift-marketplace/redhat-operators-5kfm5" Feb 16 23:17:29 crc kubenswrapper[4865]: I0216 23:17:29.797097 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/210b0508-783e-47e0-a374-7b665cf0dadb-utilities\") pod \"redhat-operators-5kfm5\" (UID: \"210b0508-783e-47e0-a374-7b665cf0dadb\") " pod="openshift-marketplace/redhat-operators-5kfm5" Feb 16 23:17:29 crc kubenswrapper[4865]: I0216 23:17:29.797136 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/210b0508-783e-47e0-a374-7b665cf0dadb-catalog-content\") pod \"redhat-operators-5kfm5\" (UID: \"210b0508-783e-47e0-a374-7b665cf0dadb\") " pod="openshift-marketplace/redhat-operators-5kfm5" Feb 16 23:17:29 crc kubenswrapper[4865]: I0216 23:17:29.899821 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/210b0508-783e-47e0-a374-7b665cf0dadb-catalog-content\") pod \"redhat-operators-5kfm5\" (UID: \"210b0508-783e-47e0-a374-7b665cf0dadb\") " pod="openshift-marketplace/redhat-operators-5kfm5" Feb 16 23:17:29 crc kubenswrapper[4865]: I0216 23:17:29.900189 4865 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-sl67z\" (UniqueName: \"kubernetes.io/projected/210b0508-783e-47e0-a374-7b665cf0dadb-kube-api-access-sl67z\") pod \"redhat-operators-5kfm5\" (UID: \"210b0508-783e-47e0-a374-7b665cf0dadb\") " pod="openshift-marketplace/redhat-operators-5kfm5" Feb 16 23:17:29 crc kubenswrapper[4865]: I0216 23:17:29.900254 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/210b0508-783e-47e0-a374-7b665cf0dadb-utilities\") pod \"redhat-operators-5kfm5\" (UID: \"210b0508-783e-47e0-a374-7b665cf0dadb\") " pod="openshift-marketplace/redhat-operators-5kfm5" Feb 16 23:17:29 crc kubenswrapper[4865]: I0216 23:17:29.900549 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/210b0508-783e-47e0-a374-7b665cf0dadb-catalog-content\") pod \"redhat-operators-5kfm5\" (UID: \"210b0508-783e-47e0-a374-7b665cf0dadb\") " pod="openshift-marketplace/redhat-operators-5kfm5" Feb 16 23:17:29 crc kubenswrapper[4865]: I0216 23:17:29.900924 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/210b0508-783e-47e0-a374-7b665cf0dadb-utilities\") pod \"redhat-operators-5kfm5\" (UID: \"210b0508-783e-47e0-a374-7b665cf0dadb\") " pod="openshift-marketplace/redhat-operators-5kfm5" Feb 16 23:17:29 crc kubenswrapper[4865]: I0216 23:17:29.924437 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sl67z\" (UniqueName: \"kubernetes.io/projected/210b0508-783e-47e0-a374-7b665cf0dadb-kube-api-access-sl67z\") pod \"redhat-operators-5kfm5\" (UID: \"210b0508-783e-47e0-a374-7b665cf0dadb\") " pod="openshift-marketplace/redhat-operators-5kfm5" Feb 16 23:17:30 crc kubenswrapper[4865]: I0216 23:17:30.078110 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5kfm5" Feb 16 23:17:30 crc kubenswrapper[4865]: I0216 23:17:30.585185 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5kfm5"] Feb 16 23:17:31 crc kubenswrapper[4865]: I0216 23:17:31.060749 4865 generic.go:334] "Generic (PLEG): container finished" podID="210b0508-783e-47e0-a374-7b665cf0dadb" containerID="3d8b5b09a7c4dc57808e79d414f2b4562cca72a42a96c9497164f8b6eb24db1a" exitCode=0 Feb 16 23:17:31 crc kubenswrapper[4865]: I0216 23:17:31.061345 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5kfm5" event={"ID":"210b0508-783e-47e0-a374-7b665cf0dadb","Type":"ContainerDied","Data":"3d8b5b09a7c4dc57808e79d414f2b4562cca72a42a96c9497164f8b6eb24db1a"} Feb 16 23:17:31 crc kubenswrapper[4865]: I0216 23:17:31.061377 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5kfm5" event={"ID":"210b0508-783e-47e0-a374-7b665cf0dadb","Type":"ContainerStarted","Data":"b103c8c2f7348556ac12173b7e18927223c198c6f06e990dfe04cfc1fa0b6157"} Feb 16 23:17:32 crc kubenswrapper[4865]: I0216 23:17:32.072024 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5kfm5" event={"ID":"210b0508-783e-47e0-a374-7b665cf0dadb","Type":"ContainerStarted","Data":"27817340e253210b99541ef161bb1e069d6775f21dfb4e3bf76e42458f509a65"} Feb 16 23:17:34 crc kubenswrapper[4865]: I0216 23:17:34.106424 4865 generic.go:334] "Generic (PLEG): container finished" podID="210b0508-783e-47e0-a374-7b665cf0dadb" containerID="27817340e253210b99541ef161bb1e069d6775f21dfb4e3bf76e42458f509a65" exitCode=0 Feb 16 23:17:34 crc kubenswrapper[4865]: I0216 23:17:34.106537 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5kfm5" 
event={"ID":"210b0508-783e-47e0-a374-7b665cf0dadb","Type":"ContainerDied","Data":"27817340e253210b99541ef161bb1e069d6775f21dfb4e3bf76e42458f509a65"} Feb 16 23:17:36 crc kubenswrapper[4865]: I0216 23:17:36.136823 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5kfm5" event={"ID":"210b0508-783e-47e0-a374-7b665cf0dadb","Type":"ContainerStarted","Data":"39a3e738a5b6767906d201a64dff9cca4587cc5535735a612698a14707544aae"} Feb 16 23:17:36 crc kubenswrapper[4865]: I0216 23:17:36.158525 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5kfm5" podStartSLOduration=2.626718687 podStartE2EDuration="7.158501787s" podCreationTimestamp="2026-02-16 23:17:29 +0000 UTC" firstStartedPulling="2026-02-16 23:17:31.063067931 +0000 UTC m=+1891.386774892" lastFinishedPulling="2026-02-16 23:17:35.594851021 +0000 UTC m=+1895.918557992" observedRunningTime="2026-02-16 23:17:36.156703836 +0000 UTC m=+1896.480410847" watchObservedRunningTime="2026-02-16 23:17:36.158501787 +0000 UTC m=+1896.482208768" Feb 16 23:17:40 crc kubenswrapper[4865]: I0216 23:17:40.078954 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5kfm5" Feb 16 23:17:40 crc kubenswrapper[4865]: I0216 23:17:40.079356 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5kfm5" Feb 16 23:17:41 crc kubenswrapper[4865]: I0216 23:17:41.133831 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5kfm5" podUID="210b0508-783e-47e0-a374-7b665cf0dadb" containerName="registry-server" probeResult="failure" output=< Feb 16 23:17:41 crc kubenswrapper[4865]: timeout: failed to connect service ":50051" within 1s Feb 16 23:17:41 crc kubenswrapper[4865]: > Feb 16 23:17:44 crc kubenswrapper[4865]: I0216 23:17:44.213131 4865 generic.go:334] "Generic (PLEG): 
container finished" podID="9bd41f0a-9736-4ede-8d1f-5c39bda1db42" containerID="278559daf17dbc7bcc5ec8079ff8a4daaeb07a18d534df602feba62ea5ff8d17" exitCode=0 Feb 16 23:17:44 crc kubenswrapper[4865]: I0216 23:17:44.213334 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgskk" event={"ID":"9bd41f0a-9736-4ede-8d1f-5c39bda1db42","Type":"ContainerDied","Data":"278559daf17dbc7bcc5ec8079ff8a4daaeb07a18d534df602feba62ea5ff8d17"} Feb 16 23:17:45 crc kubenswrapper[4865]: I0216 23:17:45.673627 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgskk" Feb 16 23:17:45 crc kubenswrapper[4865]: I0216 23:17:45.765800 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-nova-combined-ca-bundle\") pod \"9bd41f0a-9736-4ede-8d1f-5c39bda1db42\" (UID: \"9bd41f0a-9736-4ede-8d1f-5c39bda1db42\") " Feb 16 23:17:45 crc kubenswrapper[4865]: I0216 23:17:45.765895 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-telemetry-combined-ca-bundle\") pod \"9bd41f0a-9736-4ede-8d1f-5c39bda1db42\" (UID: \"9bd41f0a-9736-4ede-8d1f-5c39bda1db42\") " Feb 16 23:17:45 crc kubenswrapper[4865]: I0216 23:17:45.765931 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkqgd\" (UniqueName: \"kubernetes.io/projected/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-kube-api-access-jkqgd\") pod \"9bd41f0a-9736-4ede-8d1f-5c39bda1db42\" (UID: \"9bd41f0a-9736-4ede-8d1f-5c39bda1db42\") " Feb 16 23:17:45 crc kubenswrapper[4865]: I0216 23:17:45.765976 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-neutron-metadata-combined-ca-bundle\") pod \"9bd41f0a-9736-4ede-8d1f-5c39bda1db42\" (UID: \"9bd41f0a-9736-4ede-8d1f-5c39bda1db42\") " Feb 16 23:17:45 crc kubenswrapper[4865]: I0216 23:17:45.766060 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-bootstrap-combined-ca-bundle\") pod \"9bd41f0a-9736-4ede-8d1f-5c39bda1db42\" (UID: \"9bd41f0a-9736-4ede-8d1f-5c39bda1db42\") " Feb 16 23:17:45 crc kubenswrapper[4865]: I0216 23:17:45.766131 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"9bd41f0a-9736-4ede-8d1f-5c39bda1db42\" (UID: \"9bd41f0a-9736-4ede-8d1f-5c39bda1db42\") " Feb 16 23:17:45 crc kubenswrapper[4865]: I0216 23:17:45.766191 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"9bd41f0a-9736-4ede-8d1f-5c39bda1db42\" (UID: \"9bd41f0a-9736-4ede-8d1f-5c39bda1db42\") " Feb 16 23:17:45 crc kubenswrapper[4865]: I0216 23:17:45.766255 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-inventory\") pod \"9bd41f0a-9736-4ede-8d1f-5c39bda1db42\" (UID: \"9bd41f0a-9736-4ede-8d1f-5c39bda1db42\") " Feb 16 23:17:45 crc kubenswrapper[4865]: I0216 23:17:45.766321 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"9bd41f0a-9736-4ede-8d1f-5c39bda1db42\" (UID: \"9bd41f0a-9736-4ede-8d1f-5c39bda1db42\") " Feb 16 23:17:45 crc kubenswrapper[4865]: I0216 23:17:45.766380 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-ovn-combined-ca-bundle\") pod \"9bd41f0a-9736-4ede-8d1f-5c39bda1db42\" (UID: \"9bd41f0a-9736-4ede-8d1f-5c39bda1db42\") " Feb 16 23:17:45 crc kubenswrapper[4865]: I0216 23:17:45.766412 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-ssh-key-openstack-edpm-ipam\") pod \"9bd41f0a-9736-4ede-8d1f-5c39bda1db42\" (UID: \"9bd41f0a-9736-4ede-8d1f-5c39bda1db42\") " Feb 16 23:17:45 crc kubenswrapper[4865]: I0216 23:17:45.766467 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-openstack-edpm-ipam-ovn-default-certs-0\") pod \"9bd41f0a-9736-4ede-8d1f-5c39bda1db42\" (UID: \"9bd41f0a-9736-4ede-8d1f-5c39bda1db42\") " Feb 16 23:17:45 crc kubenswrapper[4865]: I0216 23:17:45.766549 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-libvirt-combined-ca-bundle\") pod \"9bd41f0a-9736-4ede-8d1f-5c39bda1db42\" (UID: \"9bd41f0a-9736-4ede-8d1f-5c39bda1db42\") " Feb 16 23:17:45 crc kubenswrapper[4865]: I0216 23:17:45.766604 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-repo-setup-combined-ca-bundle\") pod \"9bd41f0a-9736-4ede-8d1f-5c39bda1db42\" (UID: \"9bd41f0a-9736-4ede-8d1f-5c39bda1db42\") " Feb 16 23:17:45 crc kubenswrapper[4865]: I0216 23:17:45.774745 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "9bd41f0a-9736-4ede-8d1f-5c39bda1db42" (UID: "9bd41f0a-9736-4ede-8d1f-5c39bda1db42"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:17:45 crc kubenswrapper[4865]: I0216 23:17:45.777143 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "9bd41f0a-9736-4ede-8d1f-5c39bda1db42" (UID: "9bd41f0a-9736-4ede-8d1f-5c39bda1db42"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:17:45 crc kubenswrapper[4865]: I0216 23:17:45.777448 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "9bd41f0a-9736-4ede-8d1f-5c39bda1db42" (UID: "9bd41f0a-9736-4ede-8d1f-5c39bda1db42"). InnerVolumeSpecName "nova-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:17:45 crc kubenswrapper[4865]: I0216 23:17:45.780947 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "9bd41f0a-9736-4ede-8d1f-5c39bda1db42" (UID: "9bd41f0a-9736-4ede-8d1f-5c39bda1db42"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:17:45 crc kubenswrapper[4865]: I0216 23:17:45.794624 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-kube-api-access-jkqgd" (OuterVolumeSpecName: "kube-api-access-jkqgd") pod "9bd41f0a-9736-4ede-8d1f-5c39bda1db42" (UID: "9bd41f0a-9736-4ede-8d1f-5c39bda1db42"). InnerVolumeSpecName "kube-api-access-jkqgd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:17:45 crc kubenswrapper[4865]: I0216 23:17:45.797004 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "9bd41f0a-9736-4ede-8d1f-5c39bda1db42" (UID: "9bd41f0a-9736-4ede-8d1f-5c39bda1db42"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:17:45 crc kubenswrapper[4865]: I0216 23:17:45.797029 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "9bd41f0a-9736-4ede-8d1f-5c39bda1db42" (UID: "9bd41f0a-9736-4ede-8d1f-5c39bda1db42"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:17:45 crc kubenswrapper[4865]: I0216 23:17:45.797342 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "9bd41f0a-9736-4ede-8d1f-5c39bda1db42" (UID: "9bd41f0a-9736-4ede-8d1f-5c39bda1db42"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:17:45 crc kubenswrapper[4865]: I0216 23:17:45.799537 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "9bd41f0a-9736-4ede-8d1f-5c39bda1db42" (UID: "9bd41f0a-9736-4ede-8d1f-5c39bda1db42"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:17:45 crc kubenswrapper[4865]: I0216 23:17:45.799895 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "9bd41f0a-9736-4ede-8d1f-5c39bda1db42" (UID: "9bd41f0a-9736-4ede-8d1f-5c39bda1db42"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:17:45 crc kubenswrapper[4865]: I0216 23:17:45.800178 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "9bd41f0a-9736-4ede-8d1f-5c39bda1db42" (UID: "9bd41f0a-9736-4ede-8d1f-5c39bda1db42"). 
InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:17:45 crc kubenswrapper[4865]: I0216 23:17:45.803532 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "9bd41f0a-9736-4ede-8d1f-5c39bda1db42" (UID: "9bd41f0a-9736-4ede-8d1f-5c39bda1db42"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:17:45 crc kubenswrapper[4865]: I0216 23:17:45.819634 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9bd41f0a-9736-4ede-8d1f-5c39bda1db42" (UID: "9bd41f0a-9736-4ede-8d1f-5c39bda1db42"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:17:45 crc kubenswrapper[4865]: I0216 23:17:45.839062 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-inventory" (OuterVolumeSpecName: "inventory") pod "9bd41f0a-9736-4ede-8d1f-5c39bda1db42" (UID: "9bd41f0a-9736-4ede-8d1f-5c39bda1db42"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:17:45 crc kubenswrapper[4865]: I0216 23:17:45.868921 4865 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 23:17:45 crc kubenswrapper[4865]: I0216 23:17:45.868977 4865 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 23:17:45 crc kubenswrapper[4865]: I0216 23:17:45.868990 4865 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 23:17:45 crc kubenswrapper[4865]: I0216 23:17:45.869003 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkqgd\" (UniqueName: \"kubernetes.io/projected/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-kube-api-access-jkqgd\") on node \"crc\" DevicePath \"\"" Feb 16 23:17:45 crc kubenswrapper[4865]: I0216 23:17:45.869014 4865 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 23:17:45 crc kubenswrapper[4865]: I0216 23:17:45.869028 4865 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 23:17:45 crc kubenswrapper[4865]: I0216 23:17:45.869040 4865 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 16 23:17:45 crc kubenswrapper[4865]: I0216 23:17:45.869058 4865 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 16 23:17:45 crc kubenswrapper[4865]: I0216 23:17:45.869073 4865 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 23:17:45 crc kubenswrapper[4865]: I0216 23:17:45.869087 4865 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 16 23:17:45 crc kubenswrapper[4865]: I0216 23:17:45.869099 4865 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 23:17:45 crc kubenswrapper[4865]: I0216 23:17:45.869110 4865 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 23:17:45 crc kubenswrapper[4865]: I0216 23:17:45.869121 4865 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 16 23:17:45 crc 
kubenswrapper[4865]: I0216 23:17:45.869134 4865 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bd41f0a-9736-4ede-8d1f-5c39bda1db42-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 23:17:46 crc kubenswrapper[4865]: I0216 23:17:46.233230 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgskk" event={"ID":"9bd41f0a-9736-4ede-8d1f-5c39bda1db42","Type":"ContainerDied","Data":"04a2dda7c6ba6e1812b5504fd9e928ca950982f610b0c8aac547fba611218dee"} Feb 16 23:17:46 crc kubenswrapper[4865]: I0216 23:17:46.233339 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04a2dda7c6ba6e1812b5504fd9e928ca950982f610b0c8aac547fba611218dee" Feb 16 23:17:46 crc kubenswrapper[4865]: I0216 23:17:46.233373 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kgskk" Feb 16 23:17:46 crc kubenswrapper[4865]: I0216 23:17:46.368463 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-cbbp2"] Feb 16 23:17:46 crc kubenswrapper[4865]: E0216 23:17:46.368954 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bd41f0a-9736-4ede-8d1f-5c39bda1db42" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 16 23:17:46 crc kubenswrapper[4865]: I0216 23:17:46.368976 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bd41f0a-9736-4ede-8d1f-5c39bda1db42" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 16 23:17:46 crc kubenswrapper[4865]: I0216 23:17:46.369202 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bd41f0a-9736-4ede-8d1f-5c39bda1db42" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 16 23:17:46 crc kubenswrapper[4865]: I0216 23:17:46.369955 4865 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cbbp2"
Feb 16 23:17:46 crc kubenswrapper[4865]: I0216 23:17:46.372150 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 16 23:17:46 crc kubenswrapper[4865]: I0216 23:17:46.373360 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 16 23:17:46 crc kubenswrapper[4865]: I0216 23:17:46.374832 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config"
Feb 16 23:17:46 crc kubenswrapper[4865]: I0216 23:17:46.380632 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rpk98"
Feb 16 23:17:46 crc kubenswrapper[4865]: I0216 23:17:46.385053 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 16 23:17:46 crc kubenswrapper[4865]: I0216 23:17:46.390037 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-cbbp2"]
Feb 16 23:17:46 crc kubenswrapper[4865]: I0216 23:17:46.481502 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1c70b630-7dee-4749-9903-9d0f2e3b9196-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cbbp2\" (UID: \"1c70b630-7dee-4749-9903-9d0f2e3b9196\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cbbp2"
Feb 16 23:17:46 crc kubenswrapper[4865]: I0216 23:17:46.481558 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkj65\" (UniqueName: \"kubernetes.io/projected/1c70b630-7dee-4749-9903-9d0f2e3b9196-kube-api-access-gkj65\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cbbp2\" (UID: \"1c70b630-7dee-4749-9903-9d0f2e3b9196\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cbbp2"
Feb 16 23:17:46 crc kubenswrapper[4865]: I0216 23:17:46.481615 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/1c70b630-7dee-4749-9903-9d0f2e3b9196-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cbbp2\" (UID: \"1c70b630-7dee-4749-9903-9d0f2e3b9196\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cbbp2"
Feb 16 23:17:46 crc kubenswrapper[4865]: I0216 23:17:46.481644 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c70b630-7dee-4749-9903-9d0f2e3b9196-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cbbp2\" (UID: \"1c70b630-7dee-4749-9903-9d0f2e3b9196\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cbbp2"
Feb 16 23:17:46 crc kubenswrapper[4865]: I0216 23:17:46.481717 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1c70b630-7dee-4749-9903-9d0f2e3b9196-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cbbp2\" (UID: \"1c70b630-7dee-4749-9903-9d0f2e3b9196\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cbbp2"
Feb 16 23:17:46 crc kubenswrapper[4865]: I0216 23:17:46.584600 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1c70b630-7dee-4749-9903-9d0f2e3b9196-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cbbp2\" (UID: \"1c70b630-7dee-4749-9903-9d0f2e3b9196\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cbbp2"
Feb 16 23:17:46 crc kubenswrapper[4865]: I0216 23:17:46.584650 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkj65\" (UniqueName: \"kubernetes.io/projected/1c70b630-7dee-4749-9903-9d0f2e3b9196-kube-api-access-gkj65\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cbbp2\" (UID: \"1c70b630-7dee-4749-9903-9d0f2e3b9196\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cbbp2"
Feb 16 23:17:46 crc kubenswrapper[4865]: I0216 23:17:46.584712 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/1c70b630-7dee-4749-9903-9d0f2e3b9196-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cbbp2\" (UID: \"1c70b630-7dee-4749-9903-9d0f2e3b9196\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cbbp2"
Feb 16 23:17:46 crc kubenswrapper[4865]: I0216 23:17:46.584747 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c70b630-7dee-4749-9903-9d0f2e3b9196-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cbbp2\" (UID: \"1c70b630-7dee-4749-9903-9d0f2e3b9196\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cbbp2"
Feb 16 23:17:46 crc kubenswrapper[4865]: I0216 23:17:46.584881 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1c70b630-7dee-4749-9903-9d0f2e3b9196-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cbbp2\" (UID: \"1c70b630-7dee-4749-9903-9d0f2e3b9196\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cbbp2"
Feb 16 23:17:46 crc kubenswrapper[4865]: I0216 23:17:46.586029 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/1c70b630-7dee-4749-9903-9d0f2e3b9196-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cbbp2\" (UID: \"1c70b630-7dee-4749-9903-9d0f2e3b9196\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cbbp2"
Feb 16 23:17:46 crc kubenswrapper[4865]: I0216 23:17:46.590161 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1c70b630-7dee-4749-9903-9d0f2e3b9196-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cbbp2\" (UID: \"1c70b630-7dee-4749-9903-9d0f2e3b9196\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cbbp2"
Feb 16 23:17:46 crc kubenswrapper[4865]: I0216 23:17:46.590473 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1c70b630-7dee-4749-9903-9d0f2e3b9196-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cbbp2\" (UID: \"1c70b630-7dee-4749-9903-9d0f2e3b9196\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cbbp2"
Feb 16 23:17:46 crc kubenswrapper[4865]: I0216 23:17:46.594173 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c70b630-7dee-4749-9903-9d0f2e3b9196-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cbbp2\" (UID: \"1c70b630-7dee-4749-9903-9d0f2e3b9196\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cbbp2"
Feb 16 23:17:46 crc kubenswrapper[4865]: I0216 23:17:46.614824 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkj65\" (UniqueName: \"kubernetes.io/projected/1c70b630-7dee-4749-9903-9d0f2e3b9196-kube-api-access-gkj65\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cbbp2\" (UID: \"1c70b630-7dee-4749-9903-9d0f2e3b9196\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cbbp2"
Feb 16 23:17:46 crc kubenswrapper[4865]: I0216 23:17:46.697615 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cbbp2"
Feb 16 23:17:47 crc kubenswrapper[4865]: I0216 23:17:47.304042 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-cbbp2"]
Feb 16 23:17:48 crc kubenswrapper[4865]: I0216 23:17:48.255259 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cbbp2" event={"ID":"1c70b630-7dee-4749-9903-9d0f2e3b9196","Type":"ContainerStarted","Data":"881c00d623e74cf159d46dcde5085cd87310663fecb145a38c99ceeee59d2901"}
Feb 16 23:17:48 crc kubenswrapper[4865]: I0216 23:17:48.255560 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cbbp2" event={"ID":"1c70b630-7dee-4749-9903-9d0f2e3b9196","Type":"ContainerStarted","Data":"8e18256acac6291e47ce61f9166a28845d5d97ad33dd283d76965df919dd6465"}
Feb 16 23:17:48 crc kubenswrapper[4865]: I0216 23:17:48.275264 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cbbp2" podStartSLOduration=1.797915014 podStartE2EDuration="2.275239325s" podCreationTimestamp="2026-02-16 23:17:46 +0000 UTC" firstStartedPulling="2026-02-16 23:17:47.313261725 +0000 UTC m=+1907.636968726" lastFinishedPulling="2026-02-16 23:17:47.790586076 +0000 UTC m=+1908.114293037" observedRunningTime="2026-02-16 23:17:48.270041358 +0000 UTC m=+1908.593748319" watchObservedRunningTime="2026-02-16 23:17:48.275239325 +0000 UTC m=+1908.598946286"
Feb 16 23:17:50 crc kubenswrapper[4865]: I0216 23:17:50.131556 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5kfm5"
Feb 16 23:17:50 crc kubenswrapper[4865]: I0216 23:17:50.188735 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5kfm5"
Feb 16 23:17:50 crc kubenswrapper[4865]: I0216 23:17:50.373919 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5kfm5"]
Feb 16 23:17:51 crc kubenswrapper[4865]: I0216 23:17:51.283230 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5kfm5" podUID="210b0508-783e-47e0-a374-7b665cf0dadb" containerName="registry-server" containerID="cri-o://39a3e738a5b6767906d201a64dff9cca4587cc5535735a612698a14707544aae" gracePeriod=2
Feb 16 23:17:51 crc kubenswrapper[4865]: I0216 23:17:51.770199 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5kfm5"
Feb 16 23:17:51 crc kubenswrapper[4865]: I0216 23:17:51.902481 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/210b0508-783e-47e0-a374-7b665cf0dadb-utilities\") pod \"210b0508-783e-47e0-a374-7b665cf0dadb\" (UID: \"210b0508-783e-47e0-a374-7b665cf0dadb\") "
Feb 16 23:17:51 crc kubenswrapper[4865]: I0216 23:17:51.902654 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/210b0508-783e-47e0-a374-7b665cf0dadb-catalog-content\") pod \"210b0508-783e-47e0-a374-7b665cf0dadb\" (UID: \"210b0508-783e-47e0-a374-7b665cf0dadb\") "
Feb 16 23:17:51 crc kubenswrapper[4865]: I0216 23:17:51.902712 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sl67z\" (UniqueName: \"kubernetes.io/projected/210b0508-783e-47e0-a374-7b665cf0dadb-kube-api-access-sl67z\") pod \"210b0508-783e-47e0-a374-7b665cf0dadb\" (UID: \"210b0508-783e-47e0-a374-7b665cf0dadb\") "
Feb 16 23:17:51 crc kubenswrapper[4865]: I0216 23:17:51.903524 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/210b0508-783e-47e0-a374-7b665cf0dadb-utilities" (OuterVolumeSpecName: "utilities") pod "210b0508-783e-47e0-a374-7b665cf0dadb" (UID: "210b0508-783e-47e0-a374-7b665cf0dadb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 23:17:51 crc kubenswrapper[4865]: I0216 23:17:51.911860 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210b0508-783e-47e0-a374-7b665cf0dadb-kube-api-access-sl67z" (OuterVolumeSpecName: "kube-api-access-sl67z") pod "210b0508-783e-47e0-a374-7b665cf0dadb" (UID: "210b0508-783e-47e0-a374-7b665cf0dadb"). InnerVolumeSpecName "kube-api-access-sl67z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 23:17:52 crc kubenswrapper[4865]: I0216 23:17:52.007082 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/210b0508-783e-47e0-a374-7b665cf0dadb-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 23:17:52 crc kubenswrapper[4865]: I0216 23:17:52.007222 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sl67z\" (UniqueName: \"kubernetes.io/projected/210b0508-783e-47e0-a374-7b665cf0dadb-kube-api-access-sl67z\") on node \"crc\" DevicePath \"\""
Feb 16 23:17:52 crc kubenswrapper[4865]: I0216 23:17:52.038643 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/210b0508-783e-47e0-a374-7b665cf0dadb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "210b0508-783e-47e0-a374-7b665cf0dadb" (UID: "210b0508-783e-47e0-a374-7b665cf0dadb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 23:17:52 crc kubenswrapper[4865]: I0216 23:17:52.109077 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/210b0508-783e-47e0-a374-7b665cf0dadb-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 23:17:52 crc kubenswrapper[4865]: I0216 23:17:52.294347 4865 generic.go:334] "Generic (PLEG): container finished" podID="210b0508-783e-47e0-a374-7b665cf0dadb" containerID="39a3e738a5b6767906d201a64dff9cca4587cc5535735a612698a14707544aae" exitCode=0
Feb 16 23:17:52 crc kubenswrapper[4865]: I0216 23:17:52.294411 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5kfm5"
Feb 16 23:17:52 crc kubenswrapper[4865]: I0216 23:17:52.294410 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5kfm5" event={"ID":"210b0508-783e-47e0-a374-7b665cf0dadb","Type":"ContainerDied","Data":"39a3e738a5b6767906d201a64dff9cca4587cc5535735a612698a14707544aae"}
Feb 16 23:17:52 crc kubenswrapper[4865]: I0216 23:17:52.294481 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5kfm5" event={"ID":"210b0508-783e-47e0-a374-7b665cf0dadb","Type":"ContainerDied","Data":"b103c8c2f7348556ac12173b7e18927223c198c6f06e990dfe04cfc1fa0b6157"}
Feb 16 23:17:52 crc kubenswrapper[4865]: I0216 23:17:52.294535 4865 scope.go:117] "RemoveContainer" containerID="39a3e738a5b6767906d201a64dff9cca4587cc5535735a612698a14707544aae"
Feb 16 23:17:52 crc kubenswrapper[4865]: I0216 23:17:52.336978 4865 scope.go:117] "RemoveContainer" containerID="27817340e253210b99541ef161bb1e069d6775f21dfb4e3bf76e42458f509a65"
Feb 16 23:17:52 crc kubenswrapper[4865]: I0216 23:17:52.348341 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5kfm5"]
Feb 16 23:17:52 crc kubenswrapper[4865]: I0216 23:17:52.358782 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5kfm5"]
Feb 16 23:17:52 crc kubenswrapper[4865]: I0216 23:17:52.368018 4865 scope.go:117] "RemoveContainer" containerID="3d8b5b09a7c4dc57808e79d414f2b4562cca72a42a96c9497164f8b6eb24db1a"
Feb 16 23:17:52 crc kubenswrapper[4865]: I0216 23:17:52.427890 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210b0508-783e-47e0-a374-7b665cf0dadb" path="/var/lib/kubelet/pods/210b0508-783e-47e0-a374-7b665cf0dadb/volumes"
Feb 16 23:17:52 crc kubenswrapper[4865]: I0216 23:17:52.438528 4865 scope.go:117] "RemoveContainer" containerID="39a3e738a5b6767906d201a64dff9cca4587cc5535735a612698a14707544aae"
Feb 16 23:17:52 crc kubenswrapper[4865]: E0216 23:17:52.439194 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39a3e738a5b6767906d201a64dff9cca4587cc5535735a612698a14707544aae\": container with ID starting with 39a3e738a5b6767906d201a64dff9cca4587cc5535735a612698a14707544aae not found: ID does not exist" containerID="39a3e738a5b6767906d201a64dff9cca4587cc5535735a612698a14707544aae"
Feb 16 23:17:52 crc kubenswrapper[4865]: I0216 23:17:52.439246 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39a3e738a5b6767906d201a64dff9cca4587cc5535735a612698a14707544aae"} err="failed to get container status \"39a3e738a5b6767906d201a64dff9cca4587cc5535735a612698a14707544aae\": rpc error: code = NotFound desc = could not find container \"39a3e738a5b6767906d201a64dff9cca4587cc5535735a612698a14707544aae\": container with ID starting with 39a3e738a5b6767906d201a64dff9cca4587cc5535735a612698a14707544aae not found: ID does not exist"
Feb 16 23:17:52 crc kubenswrapper[4865]: I0216 23:17:52.439304 4865 scope.go:117] "RemoveContainer" containerID="27817340e253210b99541ef161bb1e069d6775f21dfb4e3bf76e42458f509a65"
Feb 16 23:17:52 crc kubenswrapper[4865]: E0216 23:17:52.439756 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27817340e253210b99541ef161bb1e069d6775f21dfb4e3bf76e42458f509a65\": container with ID starting with 27817340e253210b99541ef161bb1e069d6775f21dfb4e3bf76e42458f509a65 not found: ID does not exist" containerID="27817340e253210b99541ef161bb1e069d6775f21dfb4e3bf76e42458f509a65"
Feb 16 23:17:52 crc kubenswrapper[4865]: I0216 23:17:52.439804 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27817340e253210b99541ef161bb1e069d6775f21dfb4e3bf76e42458f509a65"} err="failed to get container status \"27817340e253210b99541ef161bb1e069d6775f21dfb4e3bf76e42458f509a65\": rpc error: code = NotFound desc = could not find container \"27817340e253210b99541ef161bb1e069d6775f21dfb4e3bf76e42458f509a65\": container with ID starting with 27817340e253210b99541ef161bb1e069d6775f21dfb4e3bf76e42458f509a65 not found: ID does not exist"
Feb 16 23:17:52 crc kubenswrapper[4865]: I0216 23:17:52.439839 4865 scope.go:117] "RemoveContainer" containerID="3d8b5b09a7c4dc57808e79d414f2b4562cca72a42a96c9497164f8b6eb24db1a"
Feb 16 23:17:52 crc kubenswrapper[4865]: E0216 23:17:52.440582 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d8b5b09a7c4dc57808e79d414f2b4562cca72a42a96c9497164f8b6eb24db1a\": container with ID starting with 3d8b5b09a7c4dc57808e79d414f2b4562cca72a42a96c9497164f8b6eb24db1a not found: ID does not exist" containerID="3d8b5b09a7c4dc57808e79d414f2b4562cca72a42a96c9497164f8b6eb24db1a"
Feb 16 23:17:52 crc kubenswrapper[4865]: I0216 23:17:52.440622 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d8b5b09a7c4dc57808e79d414f2b4562cca72a42a96c9497164f8b6eb24db1a"} err="failed to get container status \"3d8b5b09a7c4dc57808e79d414f2b4562cca72a42a96c9497164f8b6eb24db1a\": rpc error: code = NotFound desc = could not find container \"3d8b5b09a7c4dc57808e79d414f2b4562cca72a42a96c9497164f8b6eb24db1a\": container with ID starting with 3d8b5b09a7c4dc57808e79d414f2b4562cca72a42a96c9497164f8b6eb24db1a not found: ID does not exist"
Feb 16 23:18:05 crc kubenswrapper[4865]: I0216 23:18:05.629593 4865 scope.go:117] "RemoveContainer" containerID="689c26d0fa6d575a8f46a41f235dc5173c43a77f88315618d872a3de9fdf7b36"
Feb 16 23:18:50 crc kubenswrapper[4865]: I0216 23:18:50.926481 4865 generic.go:334] "Generic (PLEG): container finished" podID="1c70b630-7dee-4749-9903-9d0f2e3b9196" containerID="881c00d623e74cf159d46dcde5085cd87310663fecb145a38c99ceeee59d2901" exitCode=0
Feb 16 23:18:50 crc kubenswrapper[4865]: I0216 23:18:50.926596 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cbbp2" event={"ID":"1c70b630-7dee-4749-9903-9d0f2e3b9196","Type":"ContainerDied","Data":"881c00d623e74cf159d46dcde5085cd87310663fecb145a38c99ceeee59d2901"}
Feb 16 23:18:52 crc kubenswrapper[4865]: I0216 23:18:52.452556 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cbbp2"
Feb 16 23:18:52 crc kubenswrapper[4865]: I0216 23:18:52.466605 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gkj65\" (UniqueName: \"kubernetes.io/projected/1c70b630-7dee-4749-9903-9d0f2e3b9196-kube-api-access-gkj65\") pod \"1c70b630-7dee-4749-9903-9d0f2e3b9196\" (UID: \"1c70b630-7dee-4749-9903-9d0f2e3b9196\") "
Feb 16 23:18:52 crc kubenswrapper[4865]: I0216 23:18:52.466832 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1c70b630-7dee-4749-9903-9d0f2e3b9196-ssh-key-openstack-edpm-ipam\") pod \"1c70b630-7dee-4749-9903-9d0f2e3b9196\" (UID: \"1c70b630-7dee-4749-9903-9d0f2e3b9196\") "
Feb 16 23:18:52 crc kubenswrapper[4865]: I0216 23:18:52.466896 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/1c70b630-7dee-4749-9903-9d0f2e3b9196-ovncontroller-config-0\") pod \"1c70b630-7dee-4749-9903-9d0f2e3b9196\" (UID: \"1c70b630-7dee-4749-9903-9d0f2e3b9196\") "
Feb 16 23:18:52 crc kubenswrapper[4865]: I0216 23:18:52.467029 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c70b630-7dee-4749-9903-9d0f2e3b9196-ovn-combined-ca-bundle\") pod \"1c70b630-7dee-4749-9903-9d0f2e3b9196\" (UID: \"1c70b630-7dee-4749-9903-9d0f2e3b9196\") "
Feb 16 23:18:52 crc kubenswrapper[4865]: I0216 23:18:52.467156 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1c70b630-7dee-4749-9903-9d0f2e3b9196-inventory\") pod \"1c70b630-7dee-4749-9903-9d0f2e3b9196\" (UID: \"1c70b630-7dee-4749-9903-9d0f2e3b9196\") "
Feb 16 23:18:52 crc kubenswrapper[4865]: I0216 23:18:52.474091 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c70b630-7dee-4749-9903-9d0f2e3b9196-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "1c70b630-7dee-4749-9903-9d0f2e3b9196" (UID: "1c70b630-7dee-4749-9903-9d0f2e3b9196"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 23:18:52 crc kubenswrapper[4865]: I0216 23:18:52.476473 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c70b630-7dee-4749-9903-9d0f2e3b9196-kube-api-access-gkj65" (OuterVolumeSpecName: "kube-api-access-gkj65") pod "1c70b630-7dee-4749-9903-9d0f2e3b9196" (UID: "1c70b630-7dee-4749-9903-9d0f2e3b9196"). InnerVolumeSpecName "kube-api-access-gkj65". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 23:18:52 crc kubenswrapper[4865]: I0216 23:18:52.500540 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c70b630-7dee-4749-9903-9d0f2e3b9196-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "1c70b630-7dee-4749-9903-9d0f2e3b9196" (UID: "1c70b630-7dee-4749-9903-9d0f2e3b9196"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 23:18:52 crc kubenswrapper[4865]: I0216 23:18:52.503130 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c70b630-7dee-4749-9903-9d0f2e3b9196-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1c70b630-7dee-4749-9903-9d0f2e3b9196" (UID: "1c70b630-7dee-4749-9903-9d0f2e3b9196"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 23:18:52 crc kubenswrapper[4865]: I0216 23:18:52.506022 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c70b630-7dee-4749-9903-9d0f2e3b9196-inventory" (OuterVolumeSpecName: "inventory") pod "1c70b630-7dee-4749-9903-9d0f2e3b9196" (UID: "1c70b630-7dee-4749-9903-9d0f2e3b9196"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 23:18:52 crc kubenswrapper[4865]: I0216 23:18:52.571426 4865 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1c70b630-7dee-4749-9903-9d0f2e3b9196-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 16 23:18:52 crc kubenswrapper[4865]: I0216 23:18:52.571475 4865 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/1c70b630-7dee-4749-9903-9d0f2e3b9196-ovncontroller-config-0\") on node \"crc\" DevicePath \"\""
Feb 16 23:18:52 crc kubenswrapper[4865]: I0216 23:18:52.571493 4865 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c70b630-7dee-4749-9903-9d0f2e3b9196-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 23:18:52 crc kubenswrapper[4865]: I0216 23:18:52.571508 4865 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1c70b630-7dee-4749-9903-9d0f2e3b9196-inventory\") on node \"crc\" DevicePath \"\""
Feb 16 23:18:52 crc kubenswrapper[4865]: I0216 23:18:52.571521 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gkj65\" (UniqueName: \"kubernetes.io/projected/1c70b630-7dee-4749-9903-9d0f2e3b9196-kube-api-access-gkj65\") on node \"crc\" DevicePath \"\""
Feb 16 23:18:52 crc kubenswrapper[4865]: I0216 23:18:52.947499 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cbbp2" event={"ID":"1c70b630-7dee-4749-9903-9d0f2e3b9196","Type":"ContainerDied","Data":"8e18256acac6291e47ce61f9166a28845d5d97ad33dd283d76965df919dd6465"}
Feb 16 23:18:52 crc kubenswrapper[4865]: I0216 23:18:52.947555 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e18256acac6291e47ce61f9166a28845d5d97ad33dd283d76965df919dd6465"
Feb 16 23:18:52 crc kubenswrapper[4865]: I0216 23:18:52.947611 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cbbp2"
Feb 16 23:18:53 crc kubenswrapper[4865]: I0216 23:18:53.125675 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-252l2"]
Feb 16 23:18:53 crc kubenswrapper[4865]: E0216 23:18:53.126139 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="210b0508-783e-47e0-a374-7b665cf0dadb" containerName="extract-content"
Feb 16 23:18:53 crc kubenswrapper[4865]: I0216 23:18:53.126163 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="210b0508-783e-47e0-a374-7b665cf0dadb" containerName="extract-content"
Feb 16 23:18:53 crc kubenswrapper[4865]: E0216 23:18:53.126190 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="210b0508-783e-47e0-a374-7b665cf0dadb" containerName="registry-server"
Feb 16 23:18:53 crc kubenswrapper[4865]: I0216 23:18:53.126199 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="210b0508-783e-47e0-a374-7b665cf0dadb" containerName="registry-server"
Feb 16 23:18:53 crc kubenswrapper[4865]: E0216 23:18:53.126233 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="210b0508-783e-47e0-a374-7b665cf0dadb" containerName="extract-utilities"
Feb 16 23:18:53 crc kubenswrapper[4865]: I0216 23:18:53.126241 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="210b0508-783e-47e0-a374-7b665cf0dadb" containerName="extract-utilities"
Feb 16 23:18:53 crc kubenswrapper[4865]: E0216 23:18:53.126248 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c70b630-7dee-4749-9903-9d0f2e3b9196" containerName="ovn-edpm-deployment-openstack-edpm-ipam"
Feb 16 23:18:53 crc kubenswrapper[4865]: I0216 23:18:53.126255 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c70b630-7dee-4749-9903-9d0f2e3b9196" containerName="ovn-edpm-deployment-openstack-edpm-ipam"
Feb 16 23:18:53 crc kubenswrapper[4865]: I0216 23:18:53.126572 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c70b630-7dee-4749-9903-9d0f2e3b9196" containerName="ovn-edpm-deployment-openstack-edpm-ipam"
Feb 16 23:18:53 crc kubenswrapper[4865]: I0216 23:18:53.126589 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="210b0508-783e-47e0-a374-7b665cf0dadb" containerName="registry-server"
Feb 16 23:18:53 crc kubenswrapper[4865]: I0216 23:18:53.127385 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-252l2"
Feb 16 23:18:53 crc kubenswrapper[4865]: I0216 23:18:53.130114 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 16 23:18:53 crc kubenswrapper[4865]: I0216 23:18:53.130970 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config"
Feb 16 23:18:53 crc kubenswrapper[4865]: I0216 23:18:53.131894 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 16 23:18:53 crc kubenswrapper[4865]: I0216 23:18:53.131931 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rpk98"
Feb 16 23:18:53 crc kubenswrapper[4865]: I0216 23:18:53.132202 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 16 23:18:53 crc kubenswrapper[4865]: I0216 23:18:53.132904 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config"
Feb 16 23:18:53 crc kubenswrapper[4865]: I0216 23:18:53.133548 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-252l2"]
Feb 16 23:18:53 crc kubenswrapper[4865]: I0216 23:18:53.182020 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/ea192f95-6e32-46e1-ac67-715417874376-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-252l2\" (UID: \"ea192f95-6e32-46e1-ac67-715417874376\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-252l2"
Feb 16 23:18:53 crc kubenswrapper[4865]: I0216 23:18:53.182125 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea192f95-6e32-46e1-ac67-715417874376-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-252l2\" (UID: \"ea192f95-6e32-46e1-ac67-715417874376\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-252l2"
Feb 16 23:18:53 crc kubenswrapper[4865]: I0216 23:18:53.182355 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/ea192f95-6e32-46e1-ac67-715417874376-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-252l2\" (UID: \"ea192f95-6e32-46e1-ac67-715417874376\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-252l2"
Feb 16 23:18:53 crc kubenswrapper[4865]: I0216 23:18:53.182477 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ea192f95-6e32-46e1-ac67-715417874376-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-252l2\" (UID: \"ea192f95-6e32-46e1-ac67-715417874376\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-252l2"
Feb 16 23:18:53 crc kubenswrapper[4865]: I0216 23:18:53.182546 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbmjs\" (UniqueName: \"kubernetes.io/projected/ea192f95-6e32-46e1-ac67-715417874376-kube-api-access-dbmjs\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-252l2\" (UID: \"ea192f95-6e32-46e1-ac67-715417874376\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-252l2"
Feb 16 23:18:53 crc kubenswrapper[4865]: I0216 23:18:53.182635 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ea192f95-6e32-46e1-ac67-715417874376-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-252l2\" (UID: \"ea192f95-6e32-46e1-ac67-715417874376\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-252l2"
Feb 16 23:18:53 crc kubenswrapper[4865]: I0216 23:18:53.284432 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ea192f95-6e32-46e1-ac67-715417874376-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-252l2\" (UID: \"ea192f95-6e32-46e1-ac67-715417874376\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-252l2"
Feb 16 23:18:53 crc kubenswrapper[4865]: I0216 23:18:53.284521 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/ea192f95-6e32-46e1-ac67-715417874376-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-252l2\" (UID: \"ea192f95-6e32-46e1-ac67-715417874376\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-252l2"
Feb 16 23:18:53 crc kubenswrapper[4865]: I0216 23:18:53.284568 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea192f95-6e32-46e1-ac67-715417874376-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-252l2\" (UID: \"ea192f95-6e32-46e1-ac67-715417874376\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-252l2"
Feb 16 23:18:53 crc kubenswrapper[4865]: I0216 23:18:53.284629 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/ea192f95-6e32-46e1-ac67-715417874376-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-252l2\" (UID: \"ea192f95-6e32-46e1-ac67-715417874376\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-252l2"
Feb 16 23:18:53 crc kubenswrapper[4865]: I0216 23:18:53.284653 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ea192f95-6e32-46e1-ac67-715417874376-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-252l2\" (UID: \"ea192f95-6e32-46e1-ac67-715417874376\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-252l2"
Feb 16 23:18:53 crc kubenswrapper[4865]: I0216 23:18:53.284679 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbmjs\" (UniqueName: \"kubernetes.io/projected/ea192f95-6e32-46e1-ac67-715417874376-kube-api-access-dbmjs\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-252l2\" (UID: \"ea192f95-6e32-46e1-ac67-715417874376\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-252l2"
Feb 16 23:18:53 crc kubenswrapper[4865]: I0216 23:18:53.288975 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ea192f95-6e32-46e1-ac67-715417874376-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-252l2\" (UID: \"ea192f95-6e32-46e1-ac67-715417874376\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-252l2"
Feb 16 23:18:53 crc kubenswrapper[4865]: I0216 23:18:53.291205 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea192f95-6e32-46e1-ac67-715417874376-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-252l2\" (UID: \"ea192f95-6e32-46e1-ac67-715417874376\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-252l2"
Feb 16 23:18:53 crc kubenswrapper[4865]: I0216 23:18:53.292248 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/ea192f95-6e32-46e1-ac67-715417874376-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-252l2\" (UID: \"ea192f95-6e32-46e1-ac67-715417874376\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-252l2"
Feb 16 23:18:53 crc kubenswrapper[4865]: I0216 23:18:53.292303 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/ea192f95-6e32-46e1-ac67-715417874376-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-252l2\" (UID: \"ea192f95-6e32-46e1-ac67-715417874376\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-252l2"
Feb 16 23:18:53 crc kubenswrapper[4865]: I0216 23:18:53.298135 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ea192f95-6e32-46e1-ac67-715417874376-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-252l2\" (UID: \"ea192f95-6e32-46e1-ac67-715417874376\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-252l2"
Feb 16 23:18:53 crc kubenswrapper[4865]: I0216 23:18:53.301813 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbmjs\" (UniqueName: \"kubernetes.io/projected/ea192f95-6e32-46e1-ac67-715417874376-kube-api-access-dbmjs\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-252l2\" (UID: \"ea192f95-6e32-46e1-ac67-715417874376\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-252l2"
Feb 16 23:18:53 crc kubenswrapper[4865]: I0216 23:18:53.445413 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-252l2"
Feb 16 23:18:54 crc kubenswrapper[4865]: I0216 23:18:54.010210 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-252l2"]
Feb 16 23:18:54 crc kubenswrapper[4865]: I0216 23:18:54.014402 4865 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 16 23:18:54 crc kubenswrapper[4865]: I0216 23:18:54.966514 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-252l2" event={"ID":"ea192f95-6e32-46e1-ac67-715417874376","Type":"ContainerStarted","Data":"d25c9a9195678afca37414af57be51c5c63e501c289188e89127c6ce800a76c8"}
Feb 16 23:18:54 crc kubenswrapper[4865]: I0216 23:18:54.967055 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-252l2" event={"ID":"ea192f95-6e32-46e1-ac67-715417874376","Type":"ContainerStarted","Data":"4cafae77cdc78c5c6c2a49ecd6557497b5e0daa8463379b34d915762bacf56f5"}
Feb 16 23:18:54 crc kubenswrapper[4865]: I0216 23:18:54.999244 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-252l2" podStartSLOduration=1.512964088 podStartE2EDuration="1.999212371s" podCreationTimestamp="2026-02-16 23:18:53 +0000 UTC" firstStartedPulling="2026-02-16 23:18:54.014127617 +0000 UTC m=+1974.337834578" lastFinishedPulling="2026-02-16 23:18:54.50037589 +0000 UTC m=+1974.824082861" observedRunningTime="2026-02-16 23:18:54.985691358 +0000 UTC m=+1975.309398339" watchObservedRunningTime="2026-02-16 23:18:54.999212371 +0000 UTC m=+1975.322919362"
Feb 16
23:19:15 crc kubenswrapper[4865]: I0216 23:19:15.664740 4865 patch_prober.go:28] interesting pod/machine-config-daemon-7sl6f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 23:19:15 crc kubenswrapper[4865]: I0216 23:19:15.666004 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 23:19:42 crc kubenswrapper[4865]: I0216 23:19:42.529588 4865 generic.go:334] "Generic (PLEG): container finished" podID="ea192f95-6e32-46e1-ac67-715417874376" containerID="d25c9a9195678afca37414af57be51c5c63e501c289188e89127c6ce800a76c8" exitCode=0 Feb 16 23:19:42 crc kubenswrapper[4865]: I0216 23:19:42.529768 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-252l2" event={"ID":"ea192f95-6e32-46e1-ac67-715417874376","Type":"ContainerDied","Data":"d25c9a9195678afca37414af57be51c5c63e501c289188e89127c6ce800a76c8"} Feb 16 23:19:44 crc kubenswrapper[4865]: I0216 23:19:44.021820 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-252l2" Feb 16 23:19:44 crc kubenswrapper[4865]: I0216 23:19:44.124233 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbmjs\" (UniqueName: \"kubernetes.io/projected/ea192f95-6e32-46e1-ac67-715417874376-kube-api-access-dbmjs\") pod \"ea192f95-6e32-46e1-ac67-715417874376\" (UID: \"ea192f95-6e32-46e1-ac67-715417874376\") " Feb 16 23:19:44 crc kubenswrapper[4865]: I0216 23:19:44.124329 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ea192f95-6e32-46e1-ac67-715417874376-ssh-key-openstack-edpm-ipam\") pod \"ea192f95-6e32-46e1-ac67-715417874376\" (UID: \"ea192f95-6e32-46e1-ac67-715417874376\") " Feb 16 23:19:44 crc kubenswrapper[4865]: I0216 23:19:44.124391 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/ea192f95-6e32-46e1-ac67-715417874376-nova-metadata-neutron-config-0\") pod \"ea192f95-6e32-46e1-ac67-715417874376\" (UID: \"ea192f95-6e32-46e1-ac67-715417874376\") " Feb 16 23:19:44 crc kubenswrapper[4865]: I0216 23:19:44.124422 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/ea192f95-6e32-46e1-ac67-715417874376-neutron-ovn-metadata-agent-neutron-config-0\") pod \"ea192f95-6e32-46e1-ac67-715417874376\" (UID: \"ea192f95-6e32-46e1-ac67-715417874376\") " Feb 16 23:19:44 crc kubenswrapper[4865]: I0216 23:19:44.124532 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ea192f95-6e32-46e1-ac67-715417874376-inventory\") pod \"ea192f95-6e32-46e1-ac67-715417874376\" (UID: \"ea192f95-6e32-46e1-ac67-715417874376\") " Feb 16 23:19:44 crc 
kubenswrapper[4865]: I0216 23:19:44.124586 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea192f95-6e32-46e1-ac67-715417874376-neutron-metadata-combined-ca-bundle\") pod \"ea192f95-6e32-46e1-ac67-715417874376\" (UID: \"ea192f95-6e32-46e1-ac67-715417874376\") " Feb 16 23:19:44 crc kubenswrapper[4865]: I0216 23:19:44.130862 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea192f95-6e32-46e1-ac67-715417874376-kube-api-access-dbmjs" (OuterVolumeSpecName: "kube-api-access-dbmjs") pod "ea192f95-6e32-46e1-ac67-715417874376" (UID: "ea192f95-6e32-46e1-ac67-715417874376"). InnerVolumeSpecName "kube-api-access-dbmjs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:19:44 crc kubenswrapper[4865]: I0216 23:19:44.135464 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea192f95-6e32-46e1-ac67-715417874376-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "ea192f95-6e32-46e1-ac67-715417874376" (UID: "ea192f95-6e32-46e1-ac67-715417874376"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:19:44 crc kubenswrapper[4865]: I0216 23:19:44.156711 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea192f95-6e32-46e1-ac67-715417874376-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "ea192f95-6e32-46e1-ac67-715417874376" (UID: "ea192f95-6e32-46e1-ac67-715417874376"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:19:44 crc kubenswrapper[4865]: I0216 23:19:44.157627 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea192f95-6e32-46e1-ac67-715417874376-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ea192f95-6e32-46e1-ac67-715417874376" (UID: "ea192f95-6e32-46e1-ac67-715417874376"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:19:44 crc kubenswrapper[4865]: I0216 23:19:44.160484 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea192f95-6e32-46e1-ac67-715417874376-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "ea192f95-6e32-46e1-ac67-715417874376" (UID: "ea192f95-6e32-46e1-ac67-715417874376"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:19:44 crc kubenswrapper[4865]: I0216 23:19:44.161172 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea192f95-6e32-46e1-ac67-715417874376-inventory" (OuterVolumeSpecName: "inventory") pod "ea192f95-6e32-46e1-ac67-715417874376" (UID: "ea192f95-6e32-46e1-ac67-715417874376"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:19:44 crc kubenswrapper[4865]: I0216 23:19:44.229066 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbmjs\" (UniqueName: \"kubernetes.io/projected/ea192f95-6e32-46e1-ac67-715417874376-kube-api-access-dbmjs\") on node \"crc\" DevicePath \"\"" Feb 16 23:19:44 crc kubenswrapper[4865]: I0216 23:19:44.229098 4865 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ea192f95-6e32-46e1-ac67-715417874376-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 23:19:44 crc kubenswrapper[4865]: I0216 23:19:44.229108 4865 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/ea192f95-6e32-46e1-ac67-715417874376-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 16 23:19:44 crc kubenswrapper[4865]: I0216 23:19:44.229120 4865 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/ea192f95-6e32-46e1-ac67-715417874376-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 16 23:19:44 crc kubenswrapper[4865]: I0216 23:19:44.229134 4865 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ea192f95-6e32-46e1-ac67-715417874376-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 23:19:44 crc kubenswrapper[4865]: I0216 23:19:44.229146 4865 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea192f95-6e32-46e1-ac67-715417874376-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 23:19:44 crc kubenswrapper[4865]: I0216 23:19:44.553689 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-252l2" event={"ID":"ea192f95-6e32-46e1-ac67-715417874376","Type":"ContainerDied","Data":"4cafae77cdc78c5c6c2a49ecd6557497b5e0daa8463379b34d915762bacf56f5"} Feb 16 23:19:44 crc kubenswrapper[4865]: I0216 23:19:44.554035 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4cafae77cdc78c5c6c2a49ecd6557497b5e0daa8463379b34d915762bacf56f5" Feb 16 23:19:44 crc kubenswrapper[4865]: I0216 23:19:44.553800 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-252l2" Feb 16 23:19:44 crc kubenswrapper[4865]: I0216 23:19:44.673679 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vwzrl"] Feb 16 23:19:44 crc kubenswrapper[4865]: E0216 23:19:44.674266 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea192f95-6e32-46e1-ac67-715417874376" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 16 23:19:44 crc kubenswrapper[4865]: I0216 23:19:44.674318 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea192f95-6e32-46e1-ac67-715417874376" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 16 23:19:44 crc kubenswrapper[4865]: I0216 23:19:44.674684 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea192f95-6e32-46e1-ac67-715417874376" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 16 23:19:44 crc kubenswrapper[4865]: I0216 23:19:44.675781 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vwzrl" Feb 16 23:19:44 crc kubenswrapper[4865]: I0216 23:19:44.678000 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rpk98" Feb 16 23:19:44 crc kubenswrapper[4865]: I0216 23:19:44.678069 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 23:19:44 crc kubenswrapper[4865]: I0216 23:19:44.678607 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Feb 16 23:19:44 crc kubenswrapper[4865]: I0216 23:19:44.679553 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 23:19:44 crc kubenswrapper[4865]: I0216 23:19:44.681425 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 23:19:44 crc kubenswrapper[4865]: I0216 23:19:44.686250 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vwzrl"] Feb 16 23:19:44 crc kubenswrapper[4865]: I0216 23:19:44.737844 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/de77d0a7-2fdd-48d9-a2ba-827deafc0437-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vwzrl\" (UID: \"de77d0a7-2fdd-48d9-a2ba-827deafc0437\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vwzrl" Feb 16 23:19:44 crc kubenswrapper[4865]: I0216 23:19:44.738018 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de77d0a7-2fdd-48d9-a2ba-827deafc0437-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vwzrl\" (UID: \"de77d0a7-2fdd-48d9-a2ba-827deafc0437\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vwzrl" Feb 16 23:19:44 crc kubenswrapper[4865]: I0216 23:19:44.738097 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/de77d0a7-2fdd-48d9-a2ba-827deafc0437-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vwzrl\" (UID: \"de77d0a7-2fdd-48d9-a2ba-827deafc0437\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vwzrl" Feb 16 23:19:44 crc kubenswrapper[4865]: I0216 23:19:44.738363 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de77d0a7-2fdd-48d9-a2ba-827deafc0437-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vwzrl\" (UID: \"de77d0a7-2fdd-48d9-a2ba-827deafc0437\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vwzrl" Feb 16 23:19:44 crc kubenswrapper[4865]: I0216 23:19:44.738497 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wl2qb\" (UniqueName: \"kubernetes.io/projected/de77d0a7-2fdd-48d9-a2ba-827deafc0437-kube-api-access-wl2qb\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vwzrl\" (UID: \"de77d0a7-2fdd-48d9-a2ba-827deafc0437\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vwzrl" Feb 16 23:19:44 crc kubenswrapper[4865]: I0216 23:19:44.840254 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/de77d0a7-2fdd-48d9-a2ba-827deafc0437-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vwzrl\" (UID: \"de77d0a7-2fdd-48d9-a2ba-827deafc0437\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vwzrl" Feb 16 23:19:44 crc kubenswrapper[4865]: I0216 23:19:44.840354 4865 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de77d0a7-2fdd-48d9-a2ba-827deafc0437-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vwzrl\" (UID: \"de77d0a7-2fdd-48d9-a2ba-827deafc0437\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vwzrl" Feb 16 23:19:44 crc kubenswrapper[4865]: I0216 23:19:44.840483 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/de77d0a7-2fdd-48d9-a2ba-827deafc0437-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vwzrl\" (UID: \"de77d0a7-2fdd-48d9-a2ba-827deafc0437\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vwzrl" Feb 16 23:19:44 crc kubenswrapper[4865]: I0216 23:19:44.840576 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de77d0a7-2fdd-48d9-a2ba-827deafc0437-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vwzrl\" (UID: \"de77d0a7-2fdd-48d9-a2ba-827deafc0437\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vwzrl" Feb 16 23:19:44 crc kubenswrapper[4865]: I0216 23:19:44.840631 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wl2qb\" (UniqueName: \"kubernetes.io/projected/de77d0a7-2fdd-48d9-a2ba-827deafc0437-kube-api-access-wl2qb\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vwzrl\" (UID: \"de77d0a7-2fdd-48d9-a2ba-827deafc0437\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vwzrl" Feb 16 23:19:44 crc kubenswrapper[4865]: I0216 23:19:44.845125 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de77d0a7-2fdd-48d9-a2ba-827deafc0437-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vwzrl\" (UID: 
\"de77d0a7-2fdd-48d9-a2ba-827deafc0437\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vwzrl" Feb 16 23:19:44 crc kubenswrapper[4865]: I0216 23:19:44.845374 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de77d0a7-2fdd-48d9-a2ba-827deafc0437-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vwzrl\" (UID: \"de77d0a7-2fdd-48d9-a2ba-827deafc0437\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vwzrl" Feb 16 23:19:44 crc kubenswrapper[4865]: I0216 23:19:44.845595 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/de77d0a7-2fdd-48d9-a2ba-827deafc0437-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vwzrl\" (UID: \"de77d0a7-2fdd-48d9-a2ba-827deafc0437\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vwzrl" Feb 16 23:19:44 crc kubenswrapper[4865]: I0216 23:19:44.847545 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/de77d0a7-2fdd-48d9-a2ba-827deafc0437-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vwzrl\" (UID: \"de77d0a7-2fdd-48d9-a2ba-827deafc0437\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vwzrl" Feb 16 23:19:44 crc kubenswrapper[4865]: I0216 23:19:44.864124 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wl2qb\" (UniqueName: \"kubernetes.io/projected/de77d0a7-2fdd-48d9-a2ba-827deafc0437-kube-api-access-wl2qb\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vwzrl\" (UID: \"de77d0a7-2fdd-48d9-a2ba-827deafc0437\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vwzrl" Feb 16 23:19:44 crc kubenswrapper[4865]: I0216 23:19:44.995799 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vwzrl" Feb 16 23:19:45 crc kubenswrapper[4865]: I0216 23:19:45.591974 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vwzrl"] Feb 16 23:19:45 crc kubenswrapper[4865]: I0216 23:19:45.664779 4865 patch_prober.go:28] interesting pod/machine-config-daemon-7sl6f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 23:19:45 crc kubenswrapper[4865]: I0216 23:19:45.664876 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 23:19:46 crc kubenswrapper[4865]: I0216 23:19:46.576660 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vwzrl" event={"ID":"de77d0a7-2fdd-48d9-a2ba-827deafc0437","Type":"ContainerStarted","Data":"86760c4fa9f3efe7952848d96b7b3d2648582a500a58f3b807b73f6150f203e3"} Feb 16 23:19:46 crc kubenswrapper[4865]: I0216 23:19:46.577043 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vwzrl" event={"ID":"de77d0a7-2fdd-48d9-a2ba-827deafc0437","Type":"ContainerStarted","Data":"32d6feca2976148b76fe683be836404ab497933e4a588bd79827f6cfeb133422"} Feb 16 23:19:46 crc kubenswrapper[4865]: I0216 23:19:46.605428 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vwzrl" podStartSLOduration=2.206342702 podStartE2EDuration="2.605401618s" podCreationTimestamp="2026-02-16 
23:19:44 +0000 UTC" firstStartedPulling="2026-02-16 23:19:45.587591627 +0000 UTC m=+2025.911298608" lastFinishedPulling="2026-02-16 23:19:45.986650533 +0000 UTC m=+2026.310357524" observedRunningTime="2026-02-16 23:19:46.595139438 +0000 UTC m=+2026.918846409" watchObservedRunningTime="2026-02-16 23:19:46.605401618 +0000 UTC m=+2026.929108589" Feb 16 23:20:15 crc kubenswrapper[4865]: I0216 23:20:15.663867 4865 patch_prober.go:28] interesting pod/machine-config-daemon-7sl6f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 23:20:15 crc kubenswrapper[4865]: I0216 23:20:15.664485 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 23:20:15 crc kubenswrapper[4865]: I0216 23:20:15.664537 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" Feb 16 23:20:15 crc kubenswrapper[4865]: I0216 23:20:15.665616 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cfc7e0a224027a3ada4639b28cde285b197c36a977cd8811f5bce491cbea6a59"} pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 23:20:15 crc kubenswrapper[4865]: I0216 23:20:15.665690 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" 
containerName="machine-config-daemon" containerID="cri-o://cfc7e0a224027a3ada4639b28cde285b197c36a977cd8811f5bce491cbea6a59" gracePeriod=600 Feb 16 23:20:15 crc kubenswrapper[4865]: I0216 23:20:15.899885 4865 generic.go:334] "Generic (PLEG): container finished" podID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" containerID="cfc7e0a224027a3ada4639b28cde285b197c36a977cd8811f5bce491cbea6a59" exitCode=0 Feb 16 23:20:15 crc kubenswrapper[4865]: I0216 23:20:15.899933 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" event={"ID":"af5ee041-5763-4a28-9d12-7ba21bbb9dbc","Type":"ContainerDied","Data":"cfc7e0a224027a3ada4639b28cde285b197c36a977cd8811f5bce491cbea6a59"} Feb 16 23:20:15 crc kubenswrapper[4865]: I0216 23:20:15.899968 4865 scope.go:117] "RemoveContainer" containerID="5cd852104406ee38cbc84897bbc06d76f939393ce558cd88a802e395e3a4e39b" Feb 16 23:20:16 crc kubenswrapper[4865]: I0216 23:20:16.910128 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" event={"ID":"af5ee041-5763-4a28-9d12-7ba21bbb9dbc","Type":"ContainerStarted","Data":"8b34037901d5dc289c2068f9c264e98450cf58a634d07a2fbca10b53e42e31bd"} Feb 16 23:20:53 crc kubenswrapper[4865]: I0216 23:20:53.337803 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lcrvb"] Feb 16 23:20:53 crc kubenswrapper[4865]: I0216 23:20:53.341597 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lcrvb" Feb 16 23:20:53 crc kubenswrapper[4865]: I0216 23:20:53.355044 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lcrvb"] Feb 16 23:20:53 crc kubenswrapper[4865]: I0216 23:20:53.492824 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a844c1c-176e-4ff6-b709-cf314d405a8e-catalog-content\") pod \"certified-operators-lcrvb\" (UID: \"9a844c1c-176e-4ff6-b709-cf314d405a8e\") " pod="openshift-marketplace/certified-operators-lcrvb" Feb 16 23:20:53 crc kubenswrapper[4865]: I0216 23:20:53.492935 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hzt2\" (UniqueName: \"kubernetes.io/projected/9a844c1c-176e-4ff6-b709-cf314d405a8e-kube-api-access-9hzt2\") pod \"certified-operators-lcrvb\" (UID: \"9a844c1c-176e-4ff6-b709-cf314d405a8e\") " pod="openshift-marketplace/certified-operators-lcrvb" Feb 16 23:20:53 crc kubenswrapper[4865]: I0216 23:20:53.493013 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a844c1c-176e-4ff6-b709-cf314d405a8e-utilities\") pod \"certified-operators-lcrvb\" (UID: \"9a844c1c-176e-4ff6-b709-cf314d405a8e\") " pod="openshift-marketplace/certified-operators-lcrvb" Feb 16 23:20:53 crc kubenswrapper[4865]: I0216 23:20:53.594269 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a844c1c-176e-4ff6-b709-cf314d405a8e-catalog-content\") pod \"certified-operators-lcrvb\" (UID: \"9a844c1c-176e-4ff6-b709-cf314d405a8e\") " pod="openshift-marketplace/certified-operators-lcrvb" Feb 16 23:20:53 crc kubenswrapper[4865]: I0216 23:20:53.594401 4865 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-9hzt2\" (UniqueName: \"kubernetes.io/projected/9a844c1c-176e-4ff6-b709-cf314d405a8e-kube-api-access-9hzt2\") pod \"certified-operators-lcrvb\" (UID: \"9a844c1c-176e-4ff6-b709-cf314d405a8e\") " pod="openshift-marketplace/certified-operators-lcrvb" Feb 16 23:20:53 crc kubenswrapper[4865]: I0216 23:20:53.594481 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a844c1c-176e-4ff6-b709-cf314d405a8e-utilities\") pod \"certified-operators-lcrvb\" (UID: \"9a844c1c-176e-4ff6-b709-cf314d405a8e\") " pod="openshift-marketplace/certified-operators-lcrvb" Feb 16 23:20:53 crc kubenswrapper[4865]: I0216 23:20:53.594863 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a844c1c-176e-4ff6-b709-cf314d405a8e-catalog-content\") pod \"certified-operators-lcrvb\" (UID: \"9a844c1c-176e-4ff6-b709-cf314d405a8e\") " pod="openshift-marketplace/certified-operators-lcrvb" Feb 16 23:20:53 crc kubenswrapper[4865]: I0216 23:20:53.595354 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a844c1c-176e-4ff6-b709-cf314d405a8e-utilities\") pod \"certified-operators-lcrvb\" (UID: \"9a844c1c-176e-4ff6-b709-cf314d405a8e\") " pod="openshift-marketplace/certified-operators-lcrvb" Feb 16 23:20:53 crc kubenswrapper[4865]: I0216 23:20:53.627892 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hzt2\" (UniqueName: \"kubernetes.io/projected/9a844c1c-176e-4ff6-b709-cf314d405a8e-kube-api-access-9hzt2\") pod \"certified-operators-lcrvb\" (UID: \"9a844c1c-176e-4ff6-b709-cf314d405a8e\") " pod="openshift-marketplace/certified-operators-lcrvb" Feb 16 23:20:53 crc kubenswrapper[4865]: I0216 23:20:53.722272 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lcrvb" Feb 16 23:20:54 crc kubenswrapper[4865]: W0216 23:20:54.204373 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a844c1c_176e_4ff6_b709_cf314d405a8e.slice/crio-9a09077395dd296da52ec491f52276b8ce688a4b249415b5bf2df8cce057a207 WatchSource:0}: Error finding container 9a09077395dd296da52ec491f52276b8ce688a4b249415b5bf2df8cce057a207: Status 404 returned error can't find the container with id 9a09077395dd296da52ec491f52276b8ce688a4b249415b5bf2df8cce057a207 Feb 16 23:20:54 crc kubenswrapper[4865]: I0216 23:20:54.207174 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lcrvb"] Feb 16 23:20:54 crc kubenswrapper[4865]: I0216 23:20:54.436051 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lcrvb" event={"ID":"9a844c1c-176e-4ff6-b709-cf314d405a8e","Type":"ContainerStarted","Data":"6e81cff20635168c399fccf668b55b4f7cc85ebd33cb53b590b236c7390acba5"} Feb 16 23:20:54 crc kubenswrapper[4865]: I0216 23:20:54.436546 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lcrvb" event={"ID":"9a844c1c-176e-4ff6-b709-cf314d405a8e","Type":"ContainerStarted","Data":"9a09077395dd296da52ec491f52276b8ce688a4b249415b5bf2df8cce057a207"} Feb 16 23:20:55 crc kubenswrapper[4865]: I0216 23:20:55.440198 4865 generic.go:334] "Generic (PLEG): container finished" podID="9a844c1c-176e-4ff6-b709-cf314d405a8e" containerID="6e81cff20635168c399fccf668b55b4f7cc85ebd33cb53b590b236c7390acba5" exitCode=0 Feb 16 23:20:55 crc kubenswrapper[4865]: I0216 23:20:55.440293 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lcrvb" 
event={"ID":"9a844c1c-176e-4ff6-b709-cf314d405a8e","Type":"ContainerDied","Data":"6e81cff20635168c399fccf668b55b4f7cc85ebd33cb53b590b236c7390acba5"} Feb 16 23:20:55 crc kubenswrapper[4865]: I0216 23:20:55.440607 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lcrvb" event={"ID":"9a844c1c-176e-4ff6-b709-cf314d405a8e","Type":"ContainerStarted","Data":"123f91031333a3a41951b65ae1bd41e15d3596345c1fae67f32100445e5723a6"} Feb 16 23:20:56 crc kubenswrapper[4865]: I0216 23:20:56.453371 4865 generic.go:334] "Generic (PLEG): container finished" podID="9a844c1c-176e-4ff6-b709-cf314d405a8e" containerID="123f91031333a3a41951b65ae1bd41e15d3596345c1fae67f32100445e5723a6" exitCode=0 Feb 16 23:20:56 crc kubenswrapper[4865]: I0216 23:20:56.453423 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lcrvb" event={"ID":"9a844c1c-176e-4ff6-b709-cf314d405a8e","Type":"ContainerDied","Data":"123f91031333a3a41951b65ae1bd41e15d3596345c1fae67f32100445e5723a6"} Feb 16 23:20:57 crc kubenswrapper[4865]: I0216 23:20:57.466671 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lcrvb" event={"ID":"9a844c1c-176e-4ff6-b709-cf314d405a8e","Type":"ContainerStarted","Data":"5d34c07bd98fe5492f8da899a4af10b2392c219f68889cbf51ccd88fe8129f0a"} Feb 16 23:20:57 crc kubenswrapper[4865]: I0216 23:20:57.488437 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lcrvb" podStartSLOduration=2.043601138 podStartE2EDuration="4.488414163s" podCreationTimestamp="2026-02-16 23:20:53 +0000 UTC" firstStartedPulling="2026-02-16 23:20:54.425419309 +0000 UTC m=+2094.749126270" lastFinishedPulling="2026-02-16 23:20:56.870232304 +0000 UTC m=+2097.193939295" observedRunningTime="2026-02-16 23:20:57.483086632 +0000 UTC m=+2097.806793613" watchObservedRunningTime="2026-02-16 23:20:57.488414163 +0000 UTC 
m=+2097.812121124" Feb 16 23:21:01 crc kubenswrapper[4865]: I0216 23:21:01.150157 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cdxsd"] Feb 16 23:21:01 crc kubenswrapper[4865]: I0216 23:21:01.153499 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cdxsd" Feb 16 23:21:01 crc kubenswrapper[4865]: I0216 23:21:01.162776 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cdxsd"] Feb 16 23:21:01 crc kubenswrapper[4865]: I0216 23:21:01.259659 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a4f50ea-682c-4dd7-89d3-fa029a47c7f7-catalog-content\") pod \"community-operators-cdxsd\" (UID: \"7a4f50ea-682c-4dd7-89d3-fa029a47c7f7\") " pod="openshift-marketplace/community-operators-cdxsd" Feb 16 23:21:01 crc kubenswrapper[4865]: I0216 23:21:01.259737 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jh5hl\" (UniqueName: \"kubernetes.io/projected/7a4f50ea-682c-4dd7-89d3-fa029a47c7f7-kube-api-access-jh5hl\") pod \"community-operators-cdxsd\" (UID: \"7a4f50ea-682c-4dd7-89d3-fa029a47c7f7\") " pod="openshift-marketplace/community-operators-cdxsd" Feb 16 23:21:01 crc kubenswrapper[4865]: I0216 23:21:01.260128 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a4f50ea-682c-4dd7-89d3-fa029a47c7f7-utilities\") pod \"community-operators-cdxsd\" (UID: \"7a4f50ea-682c-4dd7-89d3-fa029a47c7f7\") " pod="openshift-marketplace/community-operators-cdxsd" Feb 16 23:21:01 crc kubenswrapper[4865]: I0216 23:21:01.362549 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/7a4f50ea-682c-4dd7-89d3-fa029a47c7f7-utilities\") pod \"community-operators-cdxsd\" (UID: \"7a4f50ea-682c-4dd7-89d3-fa029a47c7f7\") " pod="openshift-marketplace/community-operators-cdxsd" Feb 16 23:21:01 crc kubenswrapper[4865]: I0216 23:21:01.362688 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a4f50ea-682c-4dd7-89d3-fa029a47c7f7-catalog-content\") pod \"community-operators-cdxsd\" (UID: \"7a4f50ea-682c-4dd7-89d3-fa029a47c7f7\") " pod="openshift-marketplace/community-operators-cdxsd" Feb 16 23:21:01 crc kubenswrapper[4865]: I0216 23:21:01.362746 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jh5hl\" (UniqueName: \"kubernetes.io/projected/7a4f50ea-682c-4dd7-89d3-fa029a47c7f7-kube-api-access-jh5hl\") pod \"community-operators-cdxsd\" (UID: \"7a4f50ea-682c-4dd7-89d3-fa029a47c7f7\") " pod="openshift-marketplace/community-operators-cdxsd" Feb 16 23:21:01 crc kubenswrapper[4865]: I0216 23:21:01.363338 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a4f50ea-682c-4dd7-89d3-fa029a47c7f7-utilities\") pod \"community-operators-cdxsd\" (UID: \"7a4f50ea-682c-4dd7-89d3-fa029a47c7f7\") " pod="openshift-marketplace/community-operators-cdxsd" Feb 16 23:21:01 crc kubenswrapper[4865]: I0216 23:21:01.363400 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a4f50ea-682c-4dd7-89d3-fa029a47c7f7-catalog-content\") pod \"community-operators-cdxsd\" (UID: \"7a4f50ea-682c-4dd7-89d3-fa029a47c7f7\") " pod="openshift-marketplace/community-operators-cdxsd" Feb 16 23:21:01 crc kubenswrapper[4865]: I0216 23:21:01.391562 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jh5hl\" (UniqueName: 
\"kubernetes.io/projected/7a4f50ea-682c-4dd7-89d3-fa029a47c7f7-kube-api-access-jh5hl\") pod \"community-operators-cdxsd\" (UID: \"7a4f50ea-682c-4dd7-89d3-fa029a47c7f7\") " pod="openshift-marketplace/community-operators-cdxsd" Feb 16 23:21:01 crc kubenswrapper[4865]: I0216 23:21:01.491581 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cdxsd" Feb 16 23:21:02 crc kubenswrapper[4865]: I0216 23:21:02.015473 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cdxsd"] Feb 16 23:21:02 crc kubenswrapper[4865]: W0216 23:21:02.019417 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a4f50ea_682c_4dd7_89d3_fa029a47c7f7.slice/crio-945ca3abd5d727c31e687997106640bbba043e0e53d13f8e63a5ed9db78485c9 WatchSource:0}: Error finding container 945ca3abd5d727c31e687997106640bbba043e0e53d13f8e63a5ed9db78485c9: Status 404 returned error can't find the container with id 945ca3abd5d727c31e687997106640bbba043e0e53d13f8e63a5ed9db78485c9 Feb 16 23:21:02 crc kubenswrapper[4865]: I0216 23:21:02.519967 4865 generic.go:334] "Generic (PLEG): container finished" podID="7a4f50ea-682c-4dd7-89d3-fa029a47c7f7" containerID="2b1fc998ac946567e40c37710a49e653b40db5c4b68417bea0119a29fca9e7db" exitCode=0 Feb 16 23:21:02 crc kubenswrapper[4865]: I0216 23:21:02.520129 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cdxsd" event={"ID":"7a4f50ea-682c-4dd7-89d3-fa029a47c7f7","Type":"ContainerDied","Data":"2b1fc998ac946567e40c37710a49e653b40db5c4b68417bea0119a29fca9e7db"} Feb 16 23:21:02 crc kubenswrapper[4865]: I0216 23:21:02.520317 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cdxsd" 
event={"ID":"7a4f50ea-682c-4dd7-89d3-fa029a47c7f7","Type":"ContainerStarted","Data":"945ca3abd5d727c31e687997106640bbba043e0e53d13f8e63a5ed9db78485c9"} Feb 16 23:21:03 crc kubenswrapper[4865]: I0216 23:21:03.530369 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cdxsd" event={"ID":"7a4f50ea-682c-4dd7-89d3-fa029a47c7f7","Type":"ContainerStarted","Data":"798475993ae3d67d95e97d5f780ec9a2e4c771a8a1b649703b40ffe821b4a71d"} Feb 16 23:21:03 crc kubenswrapper[4865]: I0216 23:21:03.723459 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lcrvb" Feb 16 23:21:03 crc kubenswrapper[4865]: I0216 23:21:03.723531 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lcrvb" Feb 16 23:21:03 crc kubenswrapper[4865]: I0216 23:21:03.792140 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lcrvb" Feb 16 23:21:04 crc kubenswrapper[4865]: I0216 23:21:04.544069 4865 generic.go:334] "Generic (PLEG): container finished" podID="7a4f50ea-682c-4dd7-89d3-fa029a47c7f7" containerID="798475993ae3d67d95e97d5f780ec9a2e4c771a8a1b649703b40ffe821b4a71d" exitCode=0 Feb 16 23:21:04 crc kubenswrapper[4865]: I0216 23:21:04.544172 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cdxsd" event={"ID":"7a4f50ea-682c-4dd7-89d3-fa029a47c7f7","Type":"ContainerDied","Data":"798475993ae3d67d95e97d5f780ec9a2e4c771a8a1b649703b40ffe821b4a71d"} Feb 16 23:21:04 crc kubenswrapper[4865]: I0216 23:21:04.600107 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lcrvb" Feb 16 23:21:05 crc kubenswrapper[4865]: I0216 23:21:05.557518 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cdxsd" 
event={"ID":"7a4f50ea-682c-4dd7-89d3-fa029a47c7f7","Type":"ContainerStarted","Data":"14d88c936d8c7e13a7cc90a584265e0d457b646d913dd80bb0fa134fc963661d"} Feb 16 23:21:05 crc kubenswrapper[4865]: I0216 23:21:05.585666 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cdxsd" podStartSLOduration=2.054503722 podStartE2EDuration="4.585637451s" podCreationTimestamp="2026-02-16 23:21:01 +0000 UTC" firstStartedPulling="2026-02-16 23:21:02.521716921 +0000 UTC m=+2102.845423912" lastFinishedPulling="2026-02-16 23:21:05.05285069 +0000 UTC m=+2105.376557641" observedRunningTime="2026-02-16 23:21:05.574876047 +0000 UTC m=+2105.898583008" watchObservedRunningTime="2026-02-16 23:21:05.585637451 +0000 UTC m=+2105.909344452" Feb 16 23:21:06 crc kubenswrapper[4865]: I0216 23:21:06.085532 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lcrvb"] Feb 16 23:21:06 crc kubenswrapper[4865]: I0216 23:21:06.563958 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lcrvb" podUID="9a844c1c-176e-4ff6-b709-cf314d405a8e" containerName="registry-server" containerID="cri-o://5d34c07bd98fe5492f8da899a4af10b2392c219f68889cbf51ccd88fe8129f0a" gracePeriod=2 Feb 16 23:21:07 crc kubenswrapper[4865]: I0216 23:21:07.168234 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lcrvb" Feb 16 23:21:07 crc kubenswrapper[4865]: I0216 23:21:07.309414 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hzt2\" (UniqueName: \"kubernetes.io/projected/9a844c1c-176e-4ff6-b709-cf314d405a8e-kube-api-access-9hzt2\") pod \"9a844c1c-176e-4ff6-b709-cf314d405a8e\" (UID: \"9a844c1c-176e-4ff6-b709-cf314d405a8e\") " Feb 16 23:21:07 crc kubenswrapper[4865]: I0216 23:21:07.309530 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a844c1c-176e-4ff6-b709-cf314d405a8e-utilities\") pod \"9a844c1c-176e-4ff6-b709-cf314d405a8e\" (UID: \"9a844c1c-176e-4ff6-b709-cf314d405a8e\") " Feb 16 23:21:07 crc kubenswrapper[4865]: I0216 23:21:07.309607 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a844c1c-176e-4ff6-b709-cf314d405a8e-catalog-content\") pod \"9a844c1c-176e-4ff6-b709-cf314d405a8e\" (UID: \"9a844c1c-176e-4ff6-b709-cf314d405a8e\") " Feb 16 23:21:07 crc kubenswrapper[4865]: I0216 23:21:07.310427 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a844c1c-176e-4ff6-b709-cf314d405a8e-utilities" (OuterVolumeSpecName: "utilities") pod "9a844c1c-176e-4ff6-b709-cf314d405a8e" (UID: "9a844c1c-176e-4ff6-b709-cf314d405a8e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:21:07 crc kubenswrapper[4865]: I0216 23:21:07.314831 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a844c1c-176e-4ff6-b709-cf314d405a8e-kube-api-access-9hzt2" (OuterVolumeSpecName: "kube-api-access-9hzt2") pod "9a844c1c-176e-4ff6-b709-cf314d405a8e" (UID: "9a844c1c-176e-4ff6-b709-cf314d405a8e"). InnerVolumeSpecName "kube-api-access-9hzt2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:21:07 crc kubenswrapper[4865]: I0216 23:21:07.373252 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a844c1c-176e-4ff6-b709-cf314d405a8e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9a844c1c-176e-4ff6-b709-cf314d405a8e" (UID: "9a844c1c-176e-4ff6-b709-cf314d405a8e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:21:07 crc kubenswrapper[4865]: I0216 23:21:07.412022 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hzt2\" (UniqueName: \"kubernetes.io/projected/9a844c1c-176e-4ff6-b709-cf314d405a8e-kube-api-access-9hzt2\") on node \"crc\" DevicePath \"\"" Feb 16 23:21:07 crc kubenswrapper[4865]: I0216 23:21:07.412057 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a844c1c-176e-4ff6-b709-cf314d405a8e-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 23:21:07 crc kubenswrapper[4865]: I0216 23:21:07.412072 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a844c1c-176e-4ff6-b709-cf314d405a8e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 23:21:07 crc kubenswrapper[4865]: I0216 23:21:07.575669 4865 generic.go:334] "Generic (PLEG): container finished" podID="9a844c1c-176e-4ff6-b709-cf314d405a8e" containerID="5d34c07bd98fe5492f8da899a4af10b2392c219f68889cbf51ccd88fe8129f0a" exitCode=0 Feb 16 23:21:07 crc kubenswrapper[4865]: I0216 23:21:07.575887 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lcrvb" event={"ID":"9a844c1c-176e-4ff6-b709-cf314d405a8e","Type":"ContainerDied","Data":"5d34c07bd98fe5492f8da899a4af10b2392c219f68889cbf51ccd88fe8129f0a"} Feb 16 23:21:07 crc kubenswrapper[4865]: I0216 23:21:07.576034 4865 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-lcrvb" event={"ID":"9a844c1c-176e-4ff6-b709-cf314d405a8e","Type":"ContainerDied","Data":"9a09077395dd296da52ec491f52276b8ce688a4b249415b5bf2df8cce057a207"} Feb 16 23:21:07 crc kubenswrapper[4865]: I0216 23:21:07.576068 4865 scope.go:117] "RemoveContainer" containerID="5d34c07bd98fe5492f8da899a4af10b2392c219f68889cbf51ccd88fe8129f0a" Feb 16 23:21:07 crc kubenswrapper[4865]: I0216 23:21:07.577752 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lcrvb" Feb 16 23:21:07 crc kubenswrapper[4865]: I0216 23:21:07.611207 4865 scope.go:117] "RemoveContainer" containerID="123f91031333a3a41951b65ae1bd41e15d3596345c1fae67f32100445e5723a6" Feb 16 23:21:07 crc kubenswrapper[4865]: I0216 23:21:07.638201 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lcrvb"] Feb 16 23:21:07 crc kubenswrapper[4865]: I0216 23:21:07.648796 4865 scope.go:117] "RemoveContainer" containerID="6e81cff20635168c399fccf668b55b4f7cc85ebd33cb53b590b236c7390acba5" Feb 16 23:21:07 crc kubenswrapper[4865]: I0216 23:21:07.654785 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lcrvb"] Feb 16 23:21:07 crc kubenswrapper[4865]: I0216 23:21:07.679347 4865 scope.go:117] "RemoveContainer" containerID="5d34c07bd98fe5492f8da899a4af10b2392c219f68889cbf51ccd88fe8129f0a" Feb 16 23:21:07 crc kubenswrapper[4865]: E0216 23:21:07.679918 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d34c07bd98fe5492f8da899a4af10b2392c219f68889cbf51ccd88fe8129f0a\": container with ID starting with 5d34c07bd98fe5492f8da899a4af10b2392c219f68889cbf51ccd88fe8129f0a not found: ID does not exist" containerID="5d34c07bd98fe5492f8da899a4af10b2392c219f68889cbf51ccd88fe8129f0a" Feb 16 23:21:07 crc kubenswrapper[4865]: I0216 
23:21:07.679972 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d34c07bd98fe5492f8da899a4af10b2392c219f68889cbf51ccd88fe8129f0a"} err="failed to get container status \"5d34c07bd98fe5492f8da899a4af10b2392c219f68889cbf51ccd88fe8129f0a\": rpc error: code = NotFound desc = could not find container \"5d34c07bd98fe5492f8da899a4af10b2392c219f68889cbf51ccd88fe8129f0a\": container with ID starting with 5d34c07bd98fe5492f8da899a4af10b2392c219f68889cbf51ccd88fe8129f0a not found: ID does not exist" Feb 16 23:21:07 crc kubenswrapper[4865]: I0216 23:21:07.680003 4865 scope.go:117] "RemoveContainer" containerID="123f91031333a3a41951b65ae1bd41e15d3596345c1fae67f32100445e5723a6" Feb 16 23:21:07 crc kubenswrapper[4865]: E0216 23:21:07.680619 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"123f91031333a3a41951b65ae1bd41e15d3596345c1fae67f32100445e5723a6\": container with ID starting with 123f91031333a3a41951b65ae1bd41e15d3596345c1fae67f32100445e5723a6 not found: ID does not exist" containerID="123f91031333a3a41951b65ae1bd41e15d3596345c1fae67f32100445e5723a6" Feb 16 23:21:07 crc kubenswrapper[4865]: I0216 23:21:07.680641 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"123f91031333a3a41951b65ae1bd41e15d3596345c1fae67f32100445e5723a6"} err="failed to get container status \"123f91031333a3a41951b65ae1bd41e15d3596345c1fae67f32100445e5723a6\": rpc error: code = NotFound desc = could not find container \"123f91031333a3a41951b65ae1bd41e15d3596345c1fae67f32100445e5723a6\": container with ID starting with 123f91031333a3a41951b65ae1bd41e15d3596345c1fae67f32100445e5723a6 not found: ID does not exist" Feb 16 23:21:07 crc kubenswrapper[4865]: I0216 23:21:07.680659 4865 scope.go:117] "RemoveContainer" containerID="6e81cff20635168c399fccf668b55b4f7cc85ebd33cb53b590b236c7390acba5" Feb 16 23:21:07 crc 
kubenswrapper[4865]: E0216 23:21:07.682329 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e81cff20635168c399fccf668b55b4f7cc85ebd33cb53b590b236c7390acba5\": container with ID starting with 6e81cff20635168c399fccf668b55b4f7cc85ebd33cb53b590b236c7390acba5 not found: ID does not exist" containerID="6e81cff20635168c399fccf668b55b4f7cc85ebd33cb53b590b236c7390acba5" Feb 16 23:21:07 crc kubenswrapper[4865]: I0216 23:21:07.682513 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e81cff20635168c399fccf668b55b4f7cc85ebd33cb53b590b236c7390acba5"} err="failed to get container status \"6e81cff20635168c399fccf668b55b4f7cc85ebd33cb53b590b236c7390acba5\": rpc error: code = NotFound desc = could not find container \"6e81cff20635168c399fccf668b55b4f7cc85ebd33cb53b590b236c7390acba5\": container with ID starting with 6e81cff20635168c399fccf668b55b4f7cc85ebd33cb53b590b236c7390acba5 not found: ID does not exist" Feb 16 23:21:08 crc kubenswrapper[4865]: I0216 23:21:08.434005 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a844c1c-176e-4ff6-b709-cf314d405a8e" path="/var/lib/kubelet/pods/9a844c1c-176e-4ff6-b709-cf314d405a8e/volumes" Feb 16 23:21:11 crc kubenswrapper[4865]: I0216 23:21:11.493357 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cdxsd" Feb 16 23:21:11 crc kubenswrapper[4865]: I0216 23:21:11.493940 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cdxsd" Feb 16 23:21:11 crc kubenswrapper[4865]: I0216 23:21:11.567372 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cdxsd" Feb 16 23:21:11 crc kubenswrapper[4865]: I0216 23:21:11.684970 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/community-operators-cdxsd" Feb 16 23:21:11 crc kubenswrapper[4865]: I0216 23:21:11.819119 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cdxsd"] Feb 16 23:21:13 crc kubenswrapper[4865]: I0216 23:21:13.637799 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cdxsd" podUID="7a4f50ea-682c-4dd7-89d3-fa029a47c7f7" containerName="registry-server" containerID="cri-o://14d88c936d8c7e13a7cc90a584265e0d457b646d913dd80bb0fa134fc963661d" gracePeriod=2 Feb 16 23:21:14 crc kubenswrapper[4865]: I0216 23:21:14.138726 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cdxsd" Feb 16 23:21:14 crc kubenswrapper[4865]: I0216 23:21:14.260609 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jh5hl\" (UniqueName: \"kubernetes.io/projected/7a4f50ea-682c-4dd7-89d3-fa029a47c7f7-kube-api-access-jh5hl\") pod \"7a4f50ea-682c-4dd7-89d3-fa029a47c7f7\" (UID: \"7a4f50ea-682c-4dd7-89d3-fa029a47c7f7\") " Feb 16 23:21:14 crc kubenswrapper[4865]: I0216 23:21:14.260746 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a4f50ea-682c-4dd7-89d3-fa029a47c7f7-catalog-content\") pod \"7a4f50ea-682c-4dd7-89d3-fa029a47c7f7\" (UID: \"7a4f50ea-682c-4dd7-89d3-fa029a47c7f7\") " Feb 16 23:21:14 crc kubenswrapper[4865]: I0216 23:21:14.260797 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a4f50ea-682c-4dd7-89d3-fa029a47c7f7-utilities\") pod \"7a4f50ea-682c-4dd7-89d3-fa029a47c7f7\" (UID: \"7a4f50ea-682c-4dd7-89d3-fa029a47c7f7\") " Feb 16 23:21:14 crc kubenswrapper[4865]: I0216 23:21:14.262377 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/7a4f50ea-682c-4dd7-89d3-fa029a47c7f7-utilities" (OuterVolumeSpecName: "utilities") pod "7a4f50ea-682c-4dd7-89d3-fa029a47c7f7" (UID: "7a4f50ea-682c-4dd7-89d3-fa029a47c7f7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:21:14 crc kubenswrapper[4865]: I0216 23:21:14.270225 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a4f50ea-682c-4dd7-89d3-fa029a47c7f7-kube-api-access-jh5hl" (OuterVolumeSpecName: "kube-api-access-jh5hl") pod "7a4f50ea-682c-4dd7-89d3-fa029a47c7f7" (UID: "7a4f50ea-682c-4dd7-89d3-fa029a47c7f7"). InnerVolumeSpecName "kube-api-access-jh5hl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:21:14 crc kubenswrapper[4865]: I0216 23:21:14.327710 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a4f50ea-682c-4dd7-89d3-fa029a47c7f7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7a4f50ea-682c-4dd7-89d3-fa029a47c7f7" (UID: "7a4f50ea-682c-4dd7-89d3-fa029a47c7f7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:21:14 crc kubenswrapper[4865]: I0216 23:21:14.364486 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a4f50ea-682c-4dd7-89d3-fa029a47c7f7-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 23:21:14 crc kubenswrapper[4865]: I0216 23:21:14.364541 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jh5hl\" (UniqueName: \"kubernetes.io/projected/7a4f50ea-682c-4dd7-89d3-fa029a47c7f7-kube-api-access-jh5hl\") on node \"crc\" DevicePath \"\"" Feb 16 23:21:14 crc kubenswrapper[4865]: I0216 23:21:14.364556 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a4f50ea-682c-4dd7-89d3-fa029a47c7f7-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 23:21:14 crc kubenswrapper[4865]: I0216 23:21:14.648784 4865 generic.go:334] "Generic (PLEG): container finished" podID="7a4f50ea-682c-4dd7-89d3-fa029a47c7f7" containerID="14d88c936d8c7e13a7cc90a584265e0d457b646d913dd80bb0fa134fc963661d" exitCode=0 Feb 16 23:21:14 crc kubenswrapper[4865]: I0216 23:21:14.648857 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cdxsd" event={"ID":"7a4f50ea-682c-4dd7-89d3-fa029a47c7f7","Type":"ContainerDied","Data":"14d88c936d8c7e13a7cc90a584265e0d457b646d913dd80bb0fa134fc963661d"} Feb 16 23:21:14 crc kubenswrapper[4865]: I0216 23:21:14.648889 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cdxsd" Feb 16 23:21:14 crc kubenswrapper[4865]: I0216 23:21:14.648913 4865 scope.go:117] "RemoveContainer" containerID="14d88c936d8c7e13a7cc90a584265e0d457b646d913dd80bb0fa134fc963661d" Feb 16 23:21:14 crc kubenswrapper[4865]: I0216 23:21:14.648897 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cdxsd" event={"ID":"7a4f50ea-682c-4dd7-89d3-fa029a47c7f7","Type":"ContainerDied","Data":"945ca3abd5d727c31e687997106640bbba043e0e53d13f8e63a5ed9db78485c9"} Feb 16 23:21:14 crc kubenswrapper[4865]: I0216 23:21:14.674954 4865 scope.go:117] "RemoveContainer" containerID="798475993ae3d67d95e97d5f780ec9a2e4c771a8a1b649703b40ffe821b4a71d" Feb 16 23:21:14 crc kubenswrapper[4865]: I0216 23:21:14.681541 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cdxsd"] Feb 16 23:21:14 crc kubenswrapper[4865]: I0216 23:21:14.696471 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cdxsd"] Feb 16 23:21:14 crc kubenswrapper[4865]: I0216 23:21:14.699485 4865 scope.go:117] "RemoveContainer" containerID="2b1fc998ac946567e40c37710a49e653b40db5c4b68417bea0119a29fca9e7db" Feb 16 23:21:14 crc kubenswrapper[4865]: I0216 23:21:14.750533 4865 scope.go:117] "RemoveContainer" containerID="14d88c936d8c7e13a7cc90a584265e0d457b646d913dd80bb0fa134fc963661d" Feb 16 23:21:14 crc kubenswrapper[4865]: E0216 23:21:14.751270 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14d88c936d8c7e13a7cc90a584265e0d457b646d913dd80bb0fa134fc963661d\": container with ID starting with 14d88c936d8c7e13a7cc90a584265e0d457b646d913dd80bb0fa134fc963661d not found: ID does not exist" containerID="14d88c936d8c7e13a7cc90a584265e0d457b646d913dd80bb0fa134fc963661d" Feb 16 23:21:14 crc kubenswrapper[4865]: I0216 23:21:14.751321 4865 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14d88c936d8c7e13a7cc90a584265e0d457b646d913dd80bb0fa134fc963661d"} err="failed to get container status \"14d88c936d8c7e13a7cc90a584265e0d457b646d913dd80bb0fa134fc963661d\": rpc error: code = NotFound desc = could not find container \"14d88c936d8c7e13a7cc90a584265e0d457b646d913dd80bb0fa134fc963661d\": container with ID starting with 14d88c936d8c7e13a7cc90a584265e0d457b646d913dd80bb0fa134fc963661d not found: ID does not exist" Feb 16 23:21:14 crc kubenswrapper[4865]: I0216 23:21:14.751343 4865 scope.go:117] "RemoveContainer" containerID="798475993ae3d67d95e97d5f780ec9a2e4c771a8a1b649703b40ffe821b4a71d" Feb 16 23:21:14 crc kubenswrapper[4865]: E0216 23:21:14.751706 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"798475993ae3d67d95e97d5f780ec9a2e4c771a8a1b649703b40ffe821b4a71d\": container with ID starting with 798475993ae3d67d95e97d5f780ec9a2e4c771a8a1b649703b40ffe821b4a71d not found: ID does not exist" containerID="798475993ae3d67d95e97d5f780ec9a2e4c771a8a1b649703b40ffe821b4a71d" Feb 16 23:21:14 crc kubenswrapper[4865]: I0216 23:21:14.751729 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"798475993ae3d67d95e97d5f780ec9a2e4c771a8a1b649703b40ffe821b4a71d"} err="failed to get container status \"798475993ae3d67d95e97d5f780ec9a2e4c771a8a1b649703b40ffe821b4a71d\": rpc error: code = NotFound desc = could not find container \"798475993ae3d67d95e97d5f780ec9a2e4c771a8a1b649703b40ffe821b4a71d\": container with ID starting with 798475993ae3d67d95e97d5f780ec9a2e4c771a8a1b649703b40ffe821b4a71d not found: ID does not exist" Feb 16 23:21:14 crc kubenswrapper[4865]: I0216 23:21:14.751742 4865 scope.go:117] "RemoveContainer" containerID="2b1fc998ac946567e40c37710a49e653b40db5c4b68417bea0119a29fca9e7db" Feb 16 23:21:14 crc kubenswrapper[4865]: E0216 
23:21:14.752056 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b1fc998ac946567e40c37710a49e653b40db5c4b68417bea0119a29fca9e7db\": container with ID starting with 2b1fc998ac946567e40c37710a49e653b40db5c4b68417bea0119a29fca9e7db not found: ID does not exist" containerID="2b1fc998ac946567e40c37710a49e653b40db5c4b68417bea0119a29fca9e7db" Feb 16 23:21:14 crc kubenswrapper[4865]: I0216 23:21:14.752078 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b1fc998ac946567e40c37710a49e653b40db5c4b68417bea0119a29fca9e7db"} err="failed to get container status \"2b1fc998ac946567e40c37710a49e653b40db5c4b68417bea0119a29fca9e7db\": rpc error: code = NotFound desc = could not find container \"2b1fc998ac946567e40c37710a49e653b40db5c4b68417bea0119a29fca9e7db\": container with ID starting with 2b1fc998ac946567e40c37710a49e653b40db5c4b68417bea0119a29fca9e7db not found: ID does not exist" Feb 16 23:21:16 crc kubenswrapper[4865]: I0216 23:21:16.426772 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a4f50ea-682c-4dd7-89d3-fa029a47c7f7" path="/var/lib/kubelet/pods/7a4f50ea-682c-4dd7-89d3-fa029a47c7f7/volumes" Feb 16 23:22:45 crc kubenswrapper[4865]: I0216 23:22:45.664421 4865 patch_prober.go:28] interesting pod/machine-config-daemon-7sl6f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 23:22:45 crc kubenswrapper[4865]: I0216 23:22:45.664970 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Feb 16 23:22:52 crc kubenswrapper[4865]: I0216 23:22:52.037544 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7pssw"] Feb 16 23:22:52 crc kubenswrapper[4865]: E0216 23:22:52.038781 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a4f50ea-682c-4dd7-89d3-fa029a47c7f7" containerName="extract-utilities" Feb 16 23:22:52 crc kubenswrapper[4865]: I0216 23:22:52.038804 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a4f50ea-682c-4dd7-89d3-fa029a47c7f7" containerName="extract-utilities" Feb 16 23:22:52 crc kubenswrapper[4865]: E0216 23:22:52.038840 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a844c1c-176e-4ff6-b709-cf314d405a8e" containerName="registry-server" Feb 16 23:22:52 crc kubenswrapper[4865]: I0216 23:22:52.038853 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a844c1c-176e-4ff6-b709-cf314d405a8e" containerName="registry-server" Feb 16 23:22:52 crc kubenswrapper[4865]: E0216 23:22:52.038873 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a844c1c-176e-4ff6-b709-cf314d405a8e" containerName="extract-content" Feb 16 23:22:52 crc kubenswrapper[4865]: I0216 23:22:52.038888 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a844c1c-176e-4ff6-b709-cf314d405a8e" containerName="extract-content" Feb 16 23:22:52 crc kubenswrapper[4865]: E0216 23:22:52.038912 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a844c1c-176e-4ff6-b709-cf314d405a8e" containerName="extract-utilities" Feb 16 23:22:52 crc kubenswrapper[4865]: I0216 23:22:52.038924 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a844c1c-176e-4ff6-b709-cf314d405a8e" containerName="extract-utilities" Feb 16 23:22:52 crc kubenswrapper[4865]: E0216 23:22:52.038952 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a4f50ea-682c-4dd7-89d3-fa029a47c7f7" containerName="extract-content" Feb 16 23:22:52 crc 
kubenswrapper[4865]: I0216 23:22:52.038965 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a4f50ea-682c-4dd7-89d3-fa029a47c7f7" containerName="extract-content" Feb 16 23:22:52 crc kubenswrapper[4865]: E0216 23:22:52.038988 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a4f50ea-682c-4dd7-89d3-fa029a47c7f7" containerName="registry-server" Feb 16 23:22:52 crc kubenswrapper[4865]: I0216 23:22:52.039001 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a4f50ea-682c-4dd7-89d3-fa029a47c7f7" containerName="registry-server" Feb 16 23:22:52 crc kubenswrapper[4865]: I0216 23:22:52.039408 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a4f50ea-682c-4dd7-89d3-fa029a47c7f7" containerName="registry-server" Feb 16 23:22:52 crc kubenswrapper[4865]: I0216 23:22:52.039433 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a844c1c-176e-4ff6-b709-cf314d405a8e" containerName="registry-server" Feb 16 23:22:52 crc kubenswrapper[4865]: I0216 23:22:52.041955 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7pssw" Feb 16 23:22:52 crc kubenswrapper[4865]: I0216 23:22:52.059253 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7pssw"] Feb 16 23:22:52 crc kubenswrapper[4865]: I0216 23:22:52.098637 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcxmd\" (UniqueName: \"kubernetes.io/projected/d056ca47-40dd-4621-9ef3-07a982441303-kube-api-access-jcxmd\") pod \"redhat-marketplace-7pssw\" (UID: \"d056ca47-40dd-4621-9ef3-07a982441303\") " pod="openshift-marketplace/redhat-marketplace-7pssw" Feb 16 23:22:52 crc kubenswrapper[4865]: I0216 23:22:52.099048 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d056ca47-40dd-4621-9ef3-07a982441303-utilities\") pod \"redhat-marketplace-7pssw\" (UID: \"d056ca47-40dd-4621-9ef3-07a982441303\") " pod="openshift-marketplace/redhat-marketplace-7pssw" Feb 16 23:22:52 crc kubenswrapper[4865]: I0216 23:22:52.099188 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d056ca47-40dd-4621-9ef3-07a982441303-catalog-content\") pod \"redhat-marketplace-7pssw\" (UID: \"d056ca47-40dd-4621-9ef3-07a982441303\") " pod="openshift-marketplace/redhat-marketplace-7pssw" Feb 16 23:22:52 crc kubenswrapper[4865]: I0216 23:22:52.201584 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d056ca47-40dd-4621-9ef3-07a982441303-utilities\") pod \"redhat-marketplace-7pssw\" (UID: \"d056ca47-40dd-4621-9ef3-07a982441303\") " pod="openshift-marketplace/redhat-marketplace-7pssw" Feb 16 23:22:52 crc kubenswrapper[4865]: I0216 23:22:52.201664 4865 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d056ca47-40dd-4621-9ef3-07a982441303-catalog-content\") pod \"redhat-marketplace-7pssw\" (UID: \"d056ca47-40dd-4621-9ef3-07a982441303\") " pod="openshift-marketplace/redhat-marketplace-7pssw" Feb 16 23:22:52 crc kubenswrapper[4865]: I0216 23:22:52.201746 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcxmd\" (UniqueName: \"kubernetes.io/projected/d056ca47-40dd-4621-9ef3-07a982441303-kube-api-access-jcxmd\") pod \"redhat-marketplace-7pssw\" (UID: \"d056ca47-40dd-4621-9ef3-07a982441303\") " pod="openshift-marketplace/redhat-marketplace-7pssw" Feb 16 23:22:52 crc kubenswrapper[4865]: I0216 23:22:52.202229 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d056ca47-40dd-4621-9ef3-07a982441303-utilities\") pod \"redhat-marketplace-7pssw\" (UID: \"d056ca47-40dd-4621-9ef3-07a982441303\") " pod="openshift-marketplace/redhat-marketplace-7pssw" Feb 16 23:22:52 crc kubenswrapper[4865]: I0216 23:22:52.202331 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d056ca47-40dd-4621-9ef3-07a982441303-catalog-content\") pod \"redhat-marketplace-7pssw\" (UID: \"d056ca47-40dd-4621-9ef3-07a982441303\") " pod="openshift-marketplace/redhat-marketplace-7pssw" Feb 16 23:22:52 crc kubenswrapper[4865]: I0216 23:22:52.228468 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcxmd\" (UniqueName: \"kubernetes.io/projected/d056ca47-40dd-4621-9ef3-07a982441303-kube-api-access-jcxmd\") pod \"redhat-marketplace-7pssw\" (UID: \"d056ca47-40dd-4621-9ef3-07a982441303\") " pod="openshift-marketplace/redhat-marketplace-7pssw" Feb 16 23:22:52 crc kubenswrapper[4865]: I0216 23:22:52.402167 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7pssw" Feb 16 23:22:52 crc kubenswrapper[4865]: I0216 23:22:52.942678 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7pssw"] Feb 16 23:22:53 crc kubenswrapper[4865]: I0216 23:22:53.686658 4865 generic.go:334] "Generic (PLEG): container finished" podID="d056ca47-40dd-4621-9ef3-07a982441303" containerID="cf8403f40c2c2b9e88f74b563a5d54578071bf1fbee8e47f139550613187a82c" exitCode=0 Feb 16 23:22:53 crc kubenswrapper[4865]: I0216 23:22:53.686708 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7pssw" event={"ID":"d056ca47-40dd-4621-9ef3-07a982441303","Type":"ContainerDied","Data":"cf8403f40c2c2b9e88f74b563a5d54578071bf1fbee8e47f139550613187a82c"} Feb 16 23:22:53 crc kubenswrapper[4865]: I0216 23:22:53.686737 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7pssw" event={"ID":"d056ca47-40dd-4621-9ef3-07a982441303","Type":"ContainerStarted","Data":"30867512fcfebf713c83d05bfb2cf9e0dbce38c1504412ed012e4a17ba023219"} Feb 16 23:22:54 crc kubenswrapper[4865]: I0216 23:22:54.697745 4865 generic.go:334] "Generic (PLEG): container finished" podID="d056ca47-40dd-4621-9ef3-07a982441303" containerID="a51de9f4795130313c238a6cfb695fe2e51711f7ca0bf2b244b53c8ee44af69e" exitCode=0 Feb 16 23:22:54 crc kubenswrapper[4865]: I0216 23:22:54.697854 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7pssw" event={"ID":"d056ca47-40dd-4621-9ef3-07a982441303","Type":"ContainerDied","Data":"a51de9f4795130313c238a6cfb695fe2e51711f7ca0bf2b244b53c8ee44af69e"} Feb 16 23:22:55 crc kubenswrapper[4865]: I0216 23:22:55.725228 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7pssw" 
event={"ID":"d056ca47-40dd-4621-9ef3-07a982441303","Type":"ContainerStarted","Data":"fd07a14090b0b48b603b8b9d1a4c94ce729ad1d1958084f92c08d63a70cc5053"} Feb 16 23:22:55 crc kubenswrapper[4865]: I0216 23:22:55.753143 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7pssw" podStartSLOduration=2.362158392 podStartE2EDuration="3.753115713s" podCreationTimestamp="2026-02-16 23:22:52 +0000 UTC" firstStartedPulling="2026-02-16 23:22:53.689966516 +0000 UTC m=+2214.013673517" lastFinishedPulling="2026-02-16 23:22:55.080923837 +0000 UTC m=+2215.404630838" observedRunningTime="2026-02-16 23:22:55.750640903 +0000 UTC m=+2216.074347864" watchObservedRunningTime="2026-02-16 23:22:55.753115713 +0000 UTC m=+2216.076822694" Feb 16 23:23:02 crc kubenswrapper[4865]: I0216 23:23:02.402959 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7pssw" Feb 16 23:23:02 crc kubenswrapper[4865]: I0216 23:23:02.403739 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7pssw" Feb 16 23:23:02 crc kubenswrapper[4865]: I0216 23:23:02.479072 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7pssw" Feb 16 23:23:02 crc kubenswrapper[4865]: I0216 23:23:02.858226 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7pssw" Feb 16 23:23:02 crc kubenswrapper[4865]: I0216 23:23:02.926737 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7pssw"] Feb 16 23:23:04 crc kubenswrapper[4865]: I0216 23:23:04.819395 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7pssw" podUID="d056ca47-40dd-4621-9ef3-07a982441303" containerName="registry-server" 
containerID="cri-o://fd07a14090b0b48b603b8b9d1a4c94ce729ad1d1958084f92c08d63a70cc5053" gracePeriod=2 Feb 16 23:23:05 crc kubenswrapper[4865]: I0216 23:23:05.316362 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7pssw" Feb 16 23:23:05 crc kubenswrapper[4865]: I0216 23:23:05.385786 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jcxmd\" (UniqueName: \"kubernetes.io/projected/d056ca47-40dd-4621-9ef3-07a982441303-kube-api-access-jcxmd\") pod \"d056ca47-40dd-4621-9ef3-07a982441303\" (UID: \"d056ca47-40dd-4621-9ef3-07a982441303\") " Feb 16 23:23:05 crc kubenswrapper[4865]: I0216 23:23:05.385867 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d056ca47-40dd-4621-9ef3-07a982441303-utilities\") pod \"d056ca47-40dd-4621-9ef3-07a982441303\" (UID: \"d056ca47-40dd-4621-9ef3-07a982441303\") " Feb 16 23:23:05 crc kubenswrapper[4865]: I0216 23:23:05.385961 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d056ca47-40dd-4621-9ef3-07a982441303-catalog-content\") pod \"d056ca47-40dd-4621-9ef3-07a982441303\" (UID: \"d056ca47-40dd-4621-9ef3-07a982441303\") " Feb 16 23:23:05 crc kubenswrapper[4865]: I0216 23:23:05.387106 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d056ca47-40dd-4621-9ef3-07a982441303-utilities" (OuterVolumeSpecName: "utilities") pod "d056ca47-40dd-4621-9ef3-07a982441303" (UID: "d056ca47-40dd-4621-9ef3-07a982441303"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:23:05 crc kubenswrapper[4865]: I0216 23:23:05.395786 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d056ca47-40dd-4621-9ef3-07a982441303-kube-api-access-jcxmd" (OuterVolumeSpecName: "kube-api-access-jcxmd") pod "d056ca47-40dd-4621-9ef3-07a982441303" (UID: "d056ca47-40dd-4621-9ef3-07a982441303"). InnerVolumeSpecName "kube-api-access-jcxmd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:23:05 crc kubenswrapper[4865]: I0216 23:23:05.430901 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d056ca47-40dd-4621-9ef3-07a982441303-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d056ca47-40dd-4621-9ef3-07a982441303" (UID: "d056ca47-40dd-4621-9ef3-07a982441303"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:23:05 crc kubenswrapper[4865]: I0216 23:23:05.487587 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jcxmd\" (UniqueName: \"kubernetes.io/projected/d056ca47-40dd-4621-9ef3-07a982441303-kube-api-access-jcxmd\") on node \"crc\" DevicePath \"\"" Feb 16 23:23:05 crc kubenswrapper[4865]: I0216 23:23:05.487624 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d056ca47-40dd-4621-9ef3-07a982441303-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 23:23:05 crc kubenswrapper[4865]: I0216 23:23:05.487742 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d056ca47-40dd-4621-9ef3-07a982441303-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 23:23:05 crc kubenswrapper[4865]: I0216 23:23:05.830617 4865 generic.go:334] "Generic (PLEG): container finished" podID="d056ca47-40dd-4621-9ef3-07a982441303" 
containerID="fd07a14090b0b48b603b8b9d1a4c94ce729ad1d1958084f92c08d63a70cc5053" exitCode=0 Feb 16 23:23:05 crc kubenswrapper[4865]: I0216 23:23:05.830681 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7pssw" event={"ID":"d056ca47-40dd-4621-9ef3-07a982441303","Type":"ContainerDied","Data":"fd07a14090b0b48b603b8b9d1a4c94ce729ad1d1958084f92c08d63a70cc5053"} Feb 16 23:23:05 crc kubenswrapper[4865]: I0216 23:23:05.830719 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7pssw" Feb 16 23:23:05 crc kubenswrapper[4865]: I0216 23:23:05.830751 4865 scope.go:117] "RemoveContainer" containerID="fd07a14090b0b48b603b8b9d1a4c94ce729ad1d1958084f92c08d63a70cc5053" Feb 16 23:23:05 crc kubenswrapper[4865]: I0216 23:23:05.830731 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7pssw" event={"ID":"d056ca47-40dd-4621-9ef3-07a982441303","Type":"ContainerDied","Data":"30867512fcfebf713c83d05bfb2cf9e0dbce38c1504412ed012e4a17ba023219"} Feb 16 23:23:05 crc kubenswrapper[4865]: I0216 23:23:05.857814 4865 scope.go:117] "RemoveContainer" containerID="a51de9f4795130313c238a6cfb695fe2e51711f7ca0bf2b244b53c8ee44af69e" Feb 16 23:23:05 crc kubenswrapper[4865]: I0216 23:23:05.879446 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7pssw"] Feb 16 23:23:05 crc kubenswrapper[4865]: I0216 23:23:05.888436 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7pssw"] Feb 16 23:23:05 crc kubenswrapper[4865]: I0216 23:23:05.932892 4865 scope.go:117] "RemoveContainer" containerID="cf8403f40c2c2b9e88f74b563a5d54578071bf1fbee8e47f139550613187a82c" Feb 16 23:23:05 crc kubenswrapper[4865]: I0216 23:23:05.962038 4865 scope.go:117] "RemoveContainer" containerID="fd07a14090b0b48b603b8b9d1a4c94ce729ad1d1958084f92c08d63a70cc5053" Feb 16 
23:23:05 crc kubenswrapper[4865]: E0216 23:23:05.962696 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd07a14090b0b48b603b8b9d1a4c94ce729ad1d1958084f92c08d63a70cc5053\": container with ID starting with fd07a14090b0b48b603b8b9d1a4c94ce729ad1d1958084f92c08d63a70cc5053 not found: ID does not exist" containerID="fd07a14090b0b48b603b8b9d1a4c94ce729ad1d1958084f92c08d63a70cc5053" Feb 16 23:23:05 crc kubenswrapper[4865]: I0216 23:23:05.962731 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd07a14090b0b48b603b8b9d1a4c94ce729ad1d1958084f92c08d63a70cc5053"} err="failed to get container status \"fd07a14090b0b48b603b8b9d1a4c94ce729ad1d1958084f92c08d63a70cc5053\": rpc error: code = NotFound desc = could not find container \"fd07a14090b0b48b603b8b9d1a4c94ce729ad1d1958084f92c08d63a70cc5053\": container with ID starting with fd07a14090b0b48b603b8b9d1a4c94ce729ad1d1958084f92c08d63a70cc5053 not found: ID does not exist" Feb 16 23:23:05 crc kubenswrapper[4865]: I0216 23:23:05.962771 4865 scope.go:117] "RemoveContainer" containerID="a51de9f4795130313c238a6cfb695fe2e51711f7ca0bf2b244b53c8ee44af69e" Feb 16 23:23:05 crc kubenswrapper[4865]: E0216 23:23:05.963236 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a51de9f4795130313c238a6cfb695fe2e51711f7ca0bf2b244b53c8ee44af69e\": container with ID starting with a51de9f4795130313c238a6cfb695fe2e51711f7ca0bf2b244b53c8ee44af69e not found: ID does not exist" containerID="a51de9f4795130313c238a6cfb695fe2e51711f7ca0bf2b244b53c8ee44af69e" Feb 16 23:23:05 crc kubenswrapper[4865]: I0216 23:23:05.963258 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a51de9f4795130313c238a6cfb695fe2e51711f7ca0bf2b244b53c8ee44af69e"} err="failed to get container status 
\"a51de9f4795130313c238a6cfb695fe2e51711f7ca0bf2b244b53c8ee44af69e\": rpc error: code = NotFound desc = could not find container \"a51de9f4795130313c238a6cfb695fe2e51711f7ca0bf2b244b53c8ee44af69e\": container with ID starting with a51de9f4795130313c238a6cfb695fe2e51711f7ca0bf2b244b53c8ee44af69e not found: ID does not exist" Feb 16 23:23:05 crc kubenswrapper[4865]: I0216 23:23:05.963297 4865 scope.go:117] "RemoveContainer" containerID="cf8403f40c2c2b9e88f74b563a5d54578071bf1fbee8e47f139550613187a82c" Feb 16 23:23:05 crc kubenswrapper[4865]: E0216 23:23:05.963748 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf8403f40c2c2b9e88f74b563a5d54578071bf1fbee8e47f139550613187a82c\": container with ID starting with cf8403f40c2c2b9e88f74b563a5d54578071bf1fbee8e47f139550613187a82c not found: ID does not exist" containerID="cf8403f40c2c2b9e88f74b563a5d54578071bf1fbee8e47f139550613187a82c" Feb 16 23:23:05 crc kubenswrapper[4865]: I0216 23:23:05.963796 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf8403f40c2c2b9e88f74b563a5d54578071bf1fbee8e47f139550613187a82c"} err="failed to get container status \"cf8403f40c2c2b9e88f74b563a5d54578071bf1fbee8e47f139550613187a82c\": rpc error: code = NotFound desc = could not find container \"cf8403f40c2c2b9e88f74b563a5d54578071bf1fbee8e47f139550613187a82c\": container with ID starting with cf8403f40c2c2b9e88f74b563a5d54578071bf1fbee8e47f139550613187a82c not found: ID does not exist" Feb 16 23:23:06 crc kubenswrapper[4865]: I0216 23:23:06.434063 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d056ca47-40dd-4621-9ef3-07a982441303" path="/var/lib/kubelet/pods/d056ca47-40dd-4621-9ef3-07a982441303/volumes" Feb 16 23:23:15 crc kubenswrapper[4865]: I0216 23:23:15.664989 4865 patch_prober.go:28] interesting pod/machine-config-daemon-7sl6f container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 23:23:15 crc kubenswrapper[4865]: I0216 23:23:15.665711 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 23:23:41 crc kubenswrapper[4865]: I0216 23:23:41.219741 4865 generic.go:334] "Generic (PLEG): container finished" podID="de77d0a7-2fdd-48d9-a2ba-827deafc0437" containerID="86760c4fa9f3efe7952848d96b7b3d2648582a500a58f3b807b73f6150f203e3" exitCode=0 Feb 16 23:23:41 crc kubenswrapper[4865]: I0216 23:23:41.219867 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vwzrl" event={"ID":"de77d0a7-2fdd-48d9-a2ba-827deafc0437","Type":"ContainerDied","Data":"86760c4fa9f3efe7952848d96b7b3d2648582a500a58f3b807b73f6150f203e3"} Feb 16 23:23:42 crc kubenswrapper[4865]: I0216 23:23:42.793396 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vwzrl" Feb 16 23:23:42 crc kubenswrapper[4865]: I0216 23:23:42.936976 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de77d0a7-2fdd-48d9-a2ba-827deafc0437-libvirt-combined-ca-bundle\") pod \"de77d0a7-2fdd-48d9-a2ba-827deafc0437\" (UID: \"de77d0a7-2fdd-48d9-a2ba-827deafc0437\") " Feb 16 23:23:42 crc kubenswrapper[4865]: I0216 23:23:42.937074 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de77d0a7-2fdd-48d9-a2ba-827deafc0437-inventory\") pod \"de77d0a7-2fdd-48d9-a2ba-827deafc0437\" (UID: \"de77d0a7-2fdd-48d9-a2ba-827deafc0437\") " Feb 16 23:23:42 crc kubenswrapper[4865]: I0216 23:23:42.937223 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/de77d0a7-2fdd-48d9-a2ba-827deafc0437-ssh-key-openstack-edpm-ipam\") pod \"de77d0a7-2fdd-48d9-a2ba-827deafc0437\" (UID: \"de77d0a7-2fdd-48d9-a2ba-827deafc0437\") " Feb 16 23:23:42 crc kubenswrapper[4865]: I0216 23:23:42.937311 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wl2qb\" (UniqueName: \"kubernetes.io/projected/de77d0a7-2fdd-48d9-a2ba-827deafc0437-kube-api-access-wl2qb\") pod \"de77d0a7-2fdd-48d9-a2ba-827deafc0437\" (UID: \"de77d0a7-2fdd-48d9-a2ba-827deafc0437\") " Feb 16 23:23:42 crc kubenswrapper[4865]: I0216 23:23:42.937350 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/de77d0a7-2fdd-48d9-a2ba-827deafc0437-libvirt-secret-0\") pod \"de77d0a7-2fdd-48d9-a2ba-827deafc0437\" (UID: \"de77d0a7-2fdd-48d9-a2ba-827deafc0437\") " Feb 16 23:23:42 crc kubenswrapper[4865]: I0216 23:23:42.943684 4865 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de77d0a7-2fdd-48d9-a2ba-827deafc0437-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "de77d0a7-2fdd-48d9-a2ba-827deafc0437" (UID: "de77d0a7-2fdd-48d9-a2ba-827deafc0437"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:23:42 crc kubenswrapper[4865]: I0216 23:23:42.945221 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de77d0a7-2fdd-48d9-a2ba-827deafc0437-kube-api-access-wl2qb" (OuterVolumeSpecName: "kube-api-access-wl2qb") pod "de77d0a7-2fdd-48d9-a2ba-827deafc0437" (UID: "de77d0a7-2fdd-48d9-a2ba-827deafc0437"). InnerVolumeSpecName "kube-api-access-wl2qb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:23:42 crc kubenswrapper[4865]: I0216 23:23:42.973099 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de77d0a7-2fdd-48d9-a2ba-827deafc0437-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "de77d0a7-2fdd-48d9-a2ba-827deafc0437" (UID: "de77d0a7-2fdd-48d9-a2ba-827deafc0437"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:23:42 crc kubenswrapper[4865]: I0216 23:23:42.975663 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de77d0a7-2fdd-48d9-a2ba-827deafc0437-inventory" (OuterVolumeSpecName: "inventory") pod "de77d0a7-2fdd-48d9-a2ba-827deafc0437" (UID: "de77d0a7-2fdd-48d9-a2ba-827deafc0437"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:23:42 crc kubenswrapper[4865]: I0216 23:23:42.979920 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de77d0a7-2fdd-48d9-a2ba-827deafc0437-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "de77d0a7-2fdd-48d9-a2ba-827deafc0437" (UID: "de77d0a7-2fdd-48d9-a2ba-827deafc0437"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:23:43 crc kubenswrapper[4865]: I0216 23:23:43.040230 4865 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/de77d0a7-2fdd-48d9-a2ba-827deafc0437-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 23:23:43 crc kubenswrapper[4865]: I0216 23:23:43.040262 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wl2qb\" (UniqueName: \"kubernetes.io/projected/de77d0a7-2fdd-48d9-a2ba-827deafc0437-kube-api-access-wl2qb\") on node \"crc\" DevicePath \"\"" Feb 16 23:23:43 crc kubenswrapper[4865]: I0216 23:23:43.040288 4865 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/de77d0a7-2fdd-48d9-a2ba-827deafc0437-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Feb 16 23:23:43 crc kubenswrapper[4865]: I0216 23:23:43.040301 4865 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de77d0a7-2fdd-48d9-a2ba-827deafc0437-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 23:23:43 crc kubenswrapper[4865]: I0216 23:23:43.040313 4865 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de77d0a7-2fdd-48d9-a2ba-827deafc0437-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 23:23:43 crc kubenswrapper[4865]: I0216 23:23:43.244102 4865 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vwzrl" event={"ID":"de77d0a7-2fdd-48d9-a2ba-827deafc0437","Type":"ContainerDied","Data":"32d6feca2976148b76fe683be836404ab497933e4a588bd79827f6cfeb133422"} Feb 16 23:23:43 crc kubenswrapper[4865]: I0216 23:23:43.244187 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vwzrl" Feb 16 23:23:43 crc kubenswrapper[4865]: I0216 23:23:43.244393 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32d6feca2976148b76fe683be836404ab497933e4a588bd79827f6cfeb133422" Feb 16 23:23:43 crc kubenswrapper[4865]: I0216 23:23:43.364181 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-wd8vd"] Feb 16 23:23:43 crc kubenswrapper[4865]: E0216 23:23:43.364640 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de77d0a7-2fdd-48d9-a2ba-827deafc0437" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 16 23:23:43 crc kubenswrapper[4865]: I0216 23:23:43.364663 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="de77d0a7-2fdd-48d9-a2ba-827deafc0437" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 16 23:23:43 crc kubenswrapper[4865]: E0216 23:23:43.364686 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d056ca47-40dd-4621-9ef3-07a982441303" containerName="extract-content" Feb 16 23:23:43 crc kubenswrapper[4865]: I0216 23:23:43.364695 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="d056ca47-40dd-4621-9ef3-07a982441303" containerName="extract-content" Feb 16 23:23:43 crc kubenswrapper[4865]: E0216 23:23:43.364712 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d056ca47-40dd-4621-9ef3-07a982441303" containerName="extract-utilities" Feb 16 23:23:43 crc kubenswrapper[4865]: I0216 23:23:43.364720 4865 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="d056ca47-40dd-4621-9ef3-07a982441303" containerName="extract-utilities" Feb 16 23:23:43 crc kubenswrapper[4865]: E0216 23:23:43.364736 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d056ca47-40dd-4621-9ef3-07a982441303" containerName="registry-server" Feb 16 23:23:43 crc kubenswrapper[4865]: I0216 23:23:43.364743 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="d056ca47-40dd-4621-9ef3-07a982441303" containerName="registry-server" Feb 16 23:23:43 crc kubenswrapper[4865]: I0216 23:23:43.364985 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="de77d0a7-2fdd-48d9-a2ba-827deafc0437" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 16 23:23:43 crc kubenswrapper[4865]: I0216 23:23:43.365017 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="d056ca47-40dd-4621-9ef3-07a982441303" containerName="registry-server" Feb 16 23:23:43 crc kubenswrapper[4865]: I0216 23:23:43.365701 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wd8vd" Feb 16 23:23:43 crc kubenswrapper[4865]: I0216 23:23:43.377550 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Feb 16 23:23:43 crc kubenswrapper[4865]: I0216 23:23:43.377941 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 23:23:43 crc kubenswrapper[4865]: I0216 23:23:43.378139 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 23:23:43 crc kubenswrapper[4865]: I0216 23:23:43.378353 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 23:23:43 crc kubenswrapper[4865]: I0216 23:23:43.378651 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rpk98" Feb 16 23:23:43 crc kubenswrapper[4865]: I0216 23:23:43.381682 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Feb 16 23:23:43 crc kubenswrapper[4865]: I0216 23:23:43.381988 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Feb 16 23:23:43 crc kubenswrapper[4865]: I0216 23:23:43.387650 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-wd8vd"] Feb 16 23:23:43 crc kubenswrapper[4865]: I0216 23:23:43.549722 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/14a00f0e-5a36-481b-a8ad-78032cfa0616-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wd8vd\" (UID: \"14a00f0e-5a36-481b-a8ad-78032cfa0616\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wd8vd" Feb 16 23:23:43 crc kubenswrapper[4865]: 
I0216 23:23:43.549869 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/14a00f0e-5a36-481b-a8ad-78032cfa0616-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wd8vd\" (UID: \"14a00f0e-5a36-481b-a8ad-78032cfa0616\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wd8vd" Feb 16 23:23:43 crc kubenswrapper[4865]: I0216 23:23:43.549918 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/14a00f0e-5a36-481b-a8ad-78032cfa0616-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wd8vd\" (UID: \"14a00f0e-5a36-481b-a8ad-78032cfa0616\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wd8vd" Feb 16 23:23:43 crc kubenswrapper[4865]: I0216 23:23:43.549956 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/14a00f0e-5a36-481b-a8ad-78032cfa0616-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wd8vd\" (UID: \"14a00f0e-5a36-481b-a8ad-78032cfa0616\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wd8vd" Feb 16 23:23:43 crc kubenswrapper[4865]: I0216 23:23:43.549991 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/14a00f0e-5a36-481b-a8ad-78032cfa0616-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wd8vd\" (UID: \"14a00f0e-5a36-481b-a8ad-78032cfa0616\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wd8vd" Feb 16 23:23:43 crc kubenswrapper[4865]: I0216 23:23:43.550118 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" 
(UniqueName: \"kubernetes.io/secret/14a00f0e-5a36-481b-a8ad-78032cfa0616-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wd8vd\" (UID: \"14a00f0e-5a36-481b-a8ad-78032cfa0616\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wd8vd" Feb 16 23:23:43 crc kubenswrapper[4865]: I0216 23:23:43.550182 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6s2d\" (UniqueName: \"kubernetes.io/projected/14a00f0e-5a36-481b-a8ad-78032cfa0616-kube-api-access-z6s2d\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wd8vd\" (UID: \"14a00f0e-5a36-481b-a8ad-78032cfa0616\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wd8vd" Feb 16 23:23:43 crc kubenswrapper[4865]: I0216 23:23:43.550314 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/14a00f0e-5a36-481b-a8ad-78032cfa0616-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wd8vd\" (UID: \"14a00f0e-5a36-481b-a8ad-78032cfa0616\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wd8vd" Feb 16 23:23:43 crc kubenswrapper[4865]: I0216 23:23:43.550459 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/14a00f0e-5a36-481b-a8ad-78032cfa0616-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wd8vd\" (UID: \"14a00f0e-5a36-481b-a8ad-78032cfa0616\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wd8vd" Feb 16 23:23:43 crc kubenswrapper[4865]: I0216 23:23:43.551347 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/14a00f0e-5a36-481b-a8ad-78032cfa0616-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wd8vd\" (UID: 
\"14a00f0e-5a36-481b-a8ad-78032cfa0616\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wd8vd" Feb 16 23:23:43 crc kubenswrapper[4865]: I0216 23:23:43.551775 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14a00f0e-5a36-481b-a8ad-78032cfa0616-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wd8vd\" (UID: \"14a00f0e-5a36-481b-a8ad-78032cfa0616\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wd8vd" Feb 16 23:23:43 crc kubenswrapper[4865]: I0216 23:23:43.660482 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6s2d\" (UniqueName: \"kubernetes.io/projected/14a00f0e-5a36-481b-a8ad-78032cfa0616-kube-api-access-z6s2d\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wd8vd\" (UID: \"14a00f0e-5a36-481b-a8ad-78032cfa0616\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wd8vd" Feb 16 23:23:43 crc kubenswrapper[4865]: I0216 23:23:43.660586 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/14a00f0e-5a36-481b-a8ad-78032cfa0616-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wd8vd\" (UID: \"14a00f0e-5a36-481b-a8ad-78032cfa0616\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wd8vd" Feb 16 23:23:43 crc kubenswrapper[4865]: I0216 23:23:43.660704 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/14a00f0e-5a36-481b-a8ad-78032cfa0616-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wd8vd\" (UID: \"14a00f0e-5a36-481b-a8ad-78032cfa0616\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wd8vd" Feb 16 23:23:43 crc kubenswrapper[4865]: I0216 23:23:43.660859 4865 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/14a00f0e-5a36-481b-a8ad-78032cfa0616-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wd8vd\" (UID: \"14a00f0e-5a36-481b-a8ad-78032cfa0616\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wd8vd" Feb 16 23:23:43 crc kubenswrapper[4865]: I0216 23:23:43.660943 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14a00f0e-5a36-481b-a8ad-78032cfa0616-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wd8vd\" (UID: \"14a00f0e-5a36-481b-a8ad-78032cfa0616\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wd8vd" Feb 16 23:23:43 crc kubenswrapper[4865]: I0216 23:23:43.660998 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/14a00f0e-5a36-481b-a8ad-78032cfa0616-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wd8vd\" (UID: \"14a00f0e-5a36-481b-a8ad-78032cfa0616\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wd8vd" Feb 16 23:23:43 crc kubenswrapper[4865]: I0216 23:23:43.661105 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/14a00f0e-5a36-481b-a8ad-78032cfa0616-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wd8vd\" (UID: \"14a00f0e-5a36-481b-a8ad-78032cfa0616\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wd8vd" Feb 16 23:23:43 crc kubenswrapper[4865]: I0216 23:23:43.661150 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/14a00f0e-5a36-481b-a8ad-78032cfa0616-nova-cell1-compute-config-3\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-wd8vd\" (UID: \"14a00f0e-5a36-481b-a8ad-78032cfa0616\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wd8vd" Feb 16 23:23:43 crc kubenswrapper[4865]: I0216 23:23:43.661199 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/14a00f0e-5a36-481b-a8ad-78032cfa0616-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wd8vd\" (UID: \"14a00f0e-5a36-481b-a8ad-78032cfa0616\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wd8vd" Feb 16 23:23:43 crc kubenswrapper[4865]: I0216 23:23:43.661240 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/14a00f0e-5a36-481b-a8ad-78032cfa0616-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wd8vd\" (UID: \"14a00f0e-5a36-481b-a8ad-78032cfa0616\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wd8vd" Feb 16 23:23:43 crc kubenswrapper[4865]: I0216 23:23:43.661400 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/14a00f0e-5a36-481b-a8ad-78032cfa0616-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wd8vd\" (UID: \"14a00f0e-5a36-481b-a8ad-78032cfa0616\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wd8vd" Feb 16 23:23:43 crc kubenswrapper[4865]: I0216 23:23:43.669400 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/14a00f0e-5a36-481b-a8ad-78032cfa0616-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wd8vd\" (UID: \"14a00f0e-5a36-481b-a8ad-78032cfa0616\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wd8vd" Feb 16 23:23:43 crc kubenswrapper[4865]: I0216 23:23:43.669932 4865 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/14a00f0e-5a36-481b-a8ad-78032cfa0616-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wd8vd\" (UID: \"14a00f0e-5a36-481b-a8ad-78032cfa0616\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wd8vd" Feb 16 23:23:43 crc kubenswrapper[4865]: I0216 23:23:43.670628 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/14a00f0e-5a36-481b-a8ad-78032cfa0616-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wd8vd\" (UID: \"14a00f0e-5a36-481b-a8ad-78032cfa0616\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wd8vd" Feb 16 23:23:43 crc kubenswrapper[4865]: I0216 23:23:43.671550 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/14a00f0e-5a36-481b-a8ad-78032cfa0616-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wd8vd\" (UID: \"14a00f0e-5a36-481b-a8ad-78032cfa0616\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wd8vd" Feb 16 23:23:43 crc kubenswrapper[4865]: I0216 23:23:43.671744 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/14a00f0e-5a36-481b-a8ad-78032cfa0616-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wd8vd\" (UID: \"14a00f0e-5a36-481b-a8ad-78032cfa0616\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wd8vd" Feb 16 23:23:43 crc kubenswrapper[4865]: I0216 23:23:43.675240 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/14a00f0e-5a36-481b-a8ad-78032cfa0616-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wd8vd\" (UID: \"14a00f0e-5a36-481b-a8ad-78032cfa0616\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wd8vd" Feb 16 23:23:43 crc kubenswrapper[4865]: I0216 23:23:43.675413 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/14a00f0e-5a36-481b-a8ad-78032cfa0616-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wd8vd\" (UID: \"14a00f0e-5a36-481b-a8ad-78032cfa0616\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wd8vd" Feb 16 23:23:43 crc kubenswrapper[4865]: I0216 23:23:43.675569 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/14a00f0e-5a36-481b-a8ad-78032cfa0616-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wd8vd\" (UID: \"14a00f0e-5a36-481b-a8ad-78032cfa0616\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wd8vd" Feb 16 23:23:43 crc kubenswrapper[4865]: I0216 23:23:43.677255 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/14a00f0e-5a36-481b-a8ad-78032cfa0616-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wd8vd\" (UID: \"14a00f0e-5a36-481b-a8ad-78032cfa0616\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wd8vd" Feb 16 23:23:43 crc kubenswrapper[4865]: I0216 23:23:43.680004 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14a00f0e-5a36-481b-a8ad-78032cfa0616-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wd8vd\" (UID: \"14a00f0e-5a36-481b-a8ad-78032cfa0616\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wd8vd" Feb 16 23:23:43 crc kubenswrapper[4865]: I0216 23:23:43.682623 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6s2d\" (UniqueName: 
\"kubernetes.io/projected/14a00f0e-5a36-481b-a8ad-78032cfa0616-kube-api-access-z6s2d\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wd8vd\" (UID: \"14a00f0e-5a36-481b-a8ad-78032cfa0616\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wd8vd" Feb 16 23:23:43 crc kubenswrapper[4865]: I0216 23:23:43.703546 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wd8vd" Feb 16 23:23:44 crc kubenswrapper[4865]: I0216 23:23:44.264981 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-wd8vd"] Feb 16 23:23:44 crc kubenswrapper[4865]: W0216 23:23:44.269964 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14a00f0e_5a36_481b_a8ad_78032cfa0616.slice/crio-7b9a048bc176dfa3cba7c632e628913ddbb789d7c0148f3d1a96c05fe156c1b3 WatchSource:0}: Error finding container 7b9a048bc176dfa3cba7c632e628913ddbb789d7c0148f3d1a96c05fe156c1b3: Status 404 returned error can't find the container with id 7b9a048bc176dfa3cba7c632e628913ddbb789d7c0148f3d1a96c05fe156c1b3 Feb 16 23:23:45 crc kubenswrapper[4865]: I0216 23:23:45.274642 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wd8vd" event={"ID":"14a00f0e-5a36-481b-a8ad-78032cfa0616","Type":"ContainerStarted","Data":"a218c8958a4a0e22d4491fdcb4d68311e1c1056601533d57dca310190ab617dd"} Feb 16 23:23:45 crc kubenswrapper[4865]: I0216 23:23:45.274962 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wd8vd" event={"ID":"14a00f0e-5a36-481b-a8ad-78032cfa0616","Type":"ContainerStarted","Data":"7b9a048bc176dfa3cba7c632e628913ddbb789d7c0148f3d1a96c05fe156c1b3"} Feb 16 23:23:45 crc kubenswrapper[4865]: I0216 23:23:45.315597 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wd8vd" podStartSLOduration=1.839992671 podStartE2EDuration="2.315572699s" podCreationTimestamp="2026-02-16 23:23:43 +0000 UTC" firstStartedPulling="2026-02-16 23:23:44.280555747 +0000 UTC m=+2264.604262738" lastFinishedPulling="2026-02-16 23:23:44.756135805 +0000 UTC m=+2265.079842766" observedRunningTime="2026-02-16 23:23:45.305713099 +0000 UTC m=+2265.629420100" watchObservedRunningTime="2026-02-16 23:23:45.315572699 +0000 UTC m=+2265.639279690" Feb 16 23:23:45 crc kubenswrapper[4865]: I0216 23:23:45.664252 4865 patch_prober.go:28] interesting pod/machine-config-daemon-7sl6f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 23:23:45 crc kubenswrapper[4865]: I0216 23:23:45.664361 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 23:23:45 crc kubenswrapper[4865]: I0216 23:23:45.664461 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" Feb 16 23:23:45 crc kubenswrapper[4865]: I0216 23:23:45.665523 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8b34037901d5dc289c2068f9c264e98450cf58a634d07a2fbca10b53e42e31bd"} pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 23:23:45 crc kubenswrapper[4865]: I0216 23:23:45.665654 4865 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" containerName="machine-config-daemon" containerID="cri-o://8b34037901d5dc289c2068f9c264e98450cf58a634d07a2fbca10b53e42e31bd" gracePeriod=600 Feb 16 23:23:45 crc kubenswrapper[4865]: E0216 23:23:45.794029 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:23:46 crc kubenswrapper[4865]: I0216 23:23:46.291481 4865 generic.go:334] "Generic (PLEG): container finished" podID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" containerID="8b34037901d5dc289c2068f9c264e98450cf58a634d07a2fbca10b53e42e31bd" exitCode=0 Feb 16 23:23:46 crc kubenswrapper[4865]: I0216 23:23:46.291547 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" event={"ID":"af5ee041-5763-4a28-9d12-7ba21bbb9dbc","Type":"ContainerDied","Data":"8b34037901d5dc289c2068f9c264e98450cf58a634d07a2fbca10b53e42e31bd"} Feb 16 23:23:46 crc kubenswrapper[4865]: I0216 23:23:46.291633 4865 scope.go:117] "RemoveContainer" containerID="cfc7e0a224027a3ada4639b28cde285b197c36a977cd8811f5bce491cbea6a59" Feb 16 23:23:46 crc kubenswrapper[4865]: I0216 23:23:46.292834 4865 scope.go:117] "RemoveContainer" containerID="8b34037901d5dc289c2068f9c264e98450cf58a634d07a2fbca10b53e42e31bd" Feb 16 23:23:46 crc kubenswrapper[4865]: E0216 23:23:46.293471 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:24:01 crc kubenswrapper[4865]: I0216 23:24:01.414614 4865 scope.go:117] "RemoveContainer" containerID="8b34037901d5dc289c2068f9c264e98450cf58a634d07a2fbca10b53e42e31bd" Feb 16 23:24:01 crc kubenswrapper[4865]: E0216 23:24:01.415643 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:24:14 crc kubenswrapper[4865]: I0216 23:24:14.414982 4865 scope.go:117] "RemoveContainer" containerID="8b34037901d5dc289c2068f9c264e98450cf58a634d07a2fbca10b53e42e31bd" Feb 16 23:24:14 crc kubenswrapper[4865]: E0216 23:24:14.415914 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:24:25 crc kubenswrapper[4865]: I0216 23:24:25.414964 4865 scope.go:117] "RemoveContainer" containerID="8b34037901d5dc289c2068f9c264e98450cf58a634d07a2fbca10b53e42e31bd" Feb 16 23:24:25 crc kubenswrapper[4865]: E0216 23:24:25.415749 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:24:40 crc kubenswrapper[4865]: I0216 23:24:40.427506 4865 scope.go:117] "RemoveContainer" containerID="8b34037901d5dc289c2068f9c264e98450cf58a634d07a2fbca10b53e42e31bd" Feb 16 23:24:40 crc kubenswrapper[4865]: E0216 23:24:40.432111 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:24:52 crc kubenswrapper[4865]: I0216 23:24:52.415962 4865 scope.go:117] "RemoveContainer" containerID="8b34037901d5dc289c2068f9c264e98450cf58a634d07a2fbca10b53e42e31bd" Feb 16 23:24:52 crc kubenswrapper[4865]: E0216 23:24:52.417216 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:25:03 crc kubenswrapper[4865]: I0216 23:25:03.414800 4865 scope.go:117] "RemoveContainer" containerID="8b34037901d5dc289c2068f9c264e98450cf58a634d07a2fbca10b53e42e31bd" Feb 16 23:25:03 crc kubenswrapper[4865]: E0216 23:25:03.415861 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:25:15 crc kubenswrapper[4865]: I0216 23:25:15.414785 4865 scope.go:117] "RemoveContainer" containerID="8b34037901d5dc289c2068f9c264e98450cf58a634d07a2fbca10b53e42e31bd" Feb 16 23:25:15 crc kubenswrapper[4865]: E0216 23:25:15.415895 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:25:26 crc kubenswrapper[4865]: I0216 23:25:26.415139 4865 scope.go:117] "RemoveContainer" containerID="8b34037901d5dc289c2068f9c264e98450cf58a634d07a2fbca10b53e42e31bd" Feb 16 23:25:26 crc kubenswrapper[4865]: E0216 23:25:26.417163 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:25:38 crc kubenswrapper[4865]: I0216 23:25:38.416214 4865 scope.go:117] "RemoveContainer" containerID="8b34037901d5dc289c2068f9c264e98450cf58a634d07a2fbca10b53e42e31bd" Feb 16 23:25:38 crc kubenswrapper[4865]: E0216 23:25:38.417193 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:25:50 crc kubenswrapper[4865]: I0216 23:25:50.435582 4865 scope.go:117] "RemoveContainer" containerID="8b34037901d5dc289c2068f9c264e98450cf58a634d07a2fbca10b53e42e31bd" Feb 16 23:25:50 crc kubenswrapper[4865]: E0216 23:25:50.438686 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:26:02 crc kubenswrapper[4865]: I0216 23:26:02.414970 4865 scope.go:117] "RemoveContainer" containerID="8b34037901d5dc289c2068f9c264e98450cf58a634d07a2fbca10b53e42e31bd" Feb 16 23:26:02 crc kubenswrapper[4865]: E0216 23:26:02.416635 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:26:08 crc kubenswrapper[4865]: I0216 23:26:08.787215 4865 generic.go:334] "Generic (PLEG): container finished" podID="14a00f0e-5a36-481b-a8ad-78032cfa0616" containerID="a218c8958a4a0e22d4491fdcb4d68311e1c1056601533d57dca310190ab617dd" exitCode=0 Feb 16 23:26:08 crc kubenswrapper[4865]: I0216 23:26:08.787346 4865 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wd8vd" event={"ID":"14a00f0e-5a36-481b-a8ad-78032cfa0616","Type":"ContainerDied","Data":"a218c8958a4a0e22d4491fdcb4d68311e1c1056601533d57dca310190ab617dd"} Feb 16 23:26:10 crc kubenswrapper[4865]: I0216 23:26:10.243908 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wd8vd" Feb 16 23:26:10 crc kubenswrapper[4865]: I0216 23:26:10.372678 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/14a00f0e-5a36-481b-a8ad-78032cfa0616-nova-cell1-compute-config-1\") pod \"14a00f0e-5a36-481b-a8ad-78032cfa0616\" (UID: \"14a00f0e-5a36-481b-a8ad-78032cfa0616\") " Feb 16 23:26:10 crc kubenswrapper[4865]: I0216 23:26:10.372777 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/14a00f0e-5a36-481b-a8ad-78032cfa0616-nova-cell1-compute-config-0\") pod \"14a00f0e-5a36-481b-a8ad-78032cfa0616\" (UID: \"14a00f0e-5a36-481b-a8ad-78032cfa0616\") " Feb 16 23:26:10 crc kubenswrapper[4865]: I0216 23:26:10.372806 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/14a00f0e-5a36-481b-a8ad-78032cfa0616-nova-cell1-compute-config-3\") pod \"14a00f0e-5a36-481b-a8ad-78032cfa0616\" (UID: \"14a00f0e-5a36-481b-a8ad-78032cfa0616\") " Feb 16 23:26:10 crc kubenswrapper[4865]: I0216 23:26:10.372857 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14a00f0e-5a36-481b-a8ad-78032cfa0616-nova-combined-ca-bundle\") pod \"14a00f0e-5a36-481b-a8ad-78032cfa0616\" (UID: \"14a00f0e-5a36-481b-a8ad-78032cfa0616\") " Feb 16 23:26:10 crc kubenswrapper[4865]: I0216 
23:26:10.372937 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6s2d\" (UniqueName: \"kubernetes.io/projected/14a00f0e-5a36-481b-a8ad-78032cfa0616-kube-api-access-z6s2d\") pod \"14a00f0e-5a36-481b-a8ad-78032cfa0616\" (UID: \"14a00f0e-5a36-481b-a8ad-78032cfa0616\") " Feb 16 23:26:10 crc kubenswrapper[4865]: I0216 23:26:10.373044 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/14a00f0e-5a36-481b-a8ad-78032cfa0616-inventory\") pod \"14a00f0e-5a36-481b-a8ad-78032cfa0616\" (UID: \"14a00f0e-5a36-481b-a8ad-78032cfa0616\") " Feb 16 23:26:10 crc kubenswrapper[4865]: I0216 23:26:10.373078 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/14a00f0e-5a36-481b-a8ad-78032cfa0616-nova-migration-ssh-key-1\") pod \"14a00f0e-5a36-481b-a8ad-78032cfa0616\" (UID: \"14a00f0e-5a36-481b-a8ad-78032cfa0616\") " Feb 16 23:26:10 crc kubenswrapper[4865]: I0216 23:26:10.373105 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/14a00f0e-5a36-481b-a8ad-78032cfa0616-nova-cell1-compute-config-2\") pod \"14a00f0e-5a36-481b-a8ad-78032cfa0616\" (UID: \"14a00f0e-5a36-481b-a8ad-78032cfa0616\") " Feb 16 23:26:10 crc kubenswrapper[4865]: I0216 23:26:10.373151 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/14a00f0e-5a36-481b-a8ad-78032cfa0616-nova-migration-ssh-key-0\") pod \"14a00f0e-5a36-481b-a8ad-78032cfa0616\" (UID: \"14a00f0e-5a36-481b-a8ad-78032cfa0616\") " Feb 16 23:26:10 crc kubenswrapper[4865]: I0216 23:26:10.373179 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: 
\"kubernetes.io/configmap/14a00f0e-5a36-481b-a8ad-78032cfa0616-nova-extra-config-0\") pod \"14a00f0e-5a36-481b-a8ad-78032cfa0616\" (UID: \"14a00f0e-5a36-481b-a8ad-78032cfa0616\") " Feb 16 23:26:10 crc kubenswrapper[4865]: I0216 23:26:10.373228 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/14a00f0e-5a36-481b-a8ad-78032cfa0616-ssh-key-openstack-edpm-ipam\") pod \"14a00f0e-5a36-481b-a8ad-78032cfa0616\" (UID: \"14a00f0e-5a36-481b-a8ad-78032cfa0616\") " Feb 16 23:26:10 crc kubenswrapper[4865]: I0216 23:26:10.399208 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14a00f0e-5a36-481b-a8ad-78032cfa0616-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "14a00f0e-5a36-481b-a8ad-78032cfa0616" (UID: "14a00f0e-5a36-481b-a8ad-78032cfa0616"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:26:10 crc kubenswrapper[4865]: I0216 23:26:10.401530 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14a00f0e-5a36-481b-a8ad-78032cfa0616-kube-api-access-z6s2d" (OuterVolumeSpecName: "kube-api-access-z6s2d") pod "14a00f0e-5a36-481b-a8ad-78032cfa0616" (UID: "14a00f0e-5a36-481b-a8ad-78032cfa0616"). InnerVolumeSpecName "kube-api-access-z6s2d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:26:10 crc kubenswrapper[4865]: I0216 23:26:10.453512 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14a00f0e-5a36-481b-a8ad-78032cfa0616-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "14a00f0e-5a36-481b-a8ad-78032cfa0616" (UID: "14a00f0e-5a36-481b-a8ad-78032cfa0616"). InnerVolumeSpecName "nova-cell1-compute-config-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:26:10 crc kubenswrapper[4865]: I0216 23:26:10.475951 4865 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/14a00f0e-5a36-481b-a8ad-78032cfa0616-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Feb 16 23:26:10 crc kubenswrapper[4865]: I0216 23:26:10.475987 4865 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14a00f0e-5a36-481b-a8ad-78032cfa0616-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 23:26:10 crc kubenswrapper[4865]: I0216 23:26:10.476000 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6s2d\" (UniqueName: \"kubernetes.io/projected/14a00f0e-5a36-481b-a8ad-78032cfa0616-kube-api-access-z6s2d\") on node \"crc\" DevicePath \"\"" Feb 16 23:26:10 crc kubenswrapper[4865]: I0216 23:26:10.476712 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14a00f0e-5a36-481b-a8ad-78032cfa0616-inventory" (OuterVolumeSpecName: "inventory") pod "14a00f0e-5a36-481b-a8ad-78032cfa0616" (UID: "14a00f0e-5a36-481b-a8ad-78032cfa0616"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:26:10 crc kubenswrapper[4865]: I0216 23:26:10.484139 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14a00f0e-5a36-481b-a8ad-78032cfa0616-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "14a00f0e-5a36-481b-a8ad-78032cfa0616" (UID: "14a00f0e-5a36-481b-a8ad-78032cfa0616"). InnerVolumeSpecName "nova-extra-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:26:10 crc kubenswrapper[4865]: I0216 23:26:10.486981 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14a00f0e-5a36-481b-a8ad-78032cfa0616-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "14a00f0e-5a36-481b-a8ad-78032cfa0616" (UID: "14a00f0e-5a36-481b-a8ad-78032cfa0616"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:26:10 crc kubenswrapper[4865]: I0216 23:26:10.494644 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14a00f0e-5a36-481b-a8ad-78032cfa0616-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "14a00f0e-5a36-481b-a8ad-78032cfa0616" (UID: "14a00f0e-5a36-481b-a8ad-78032cfa0616"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:26:10 crc kubenswrapper[4865]: I0216 23:26:10.498292 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14a00f0e-5a36-481b-a8ad-78032cfa0616-nova-cell1-compute-config-2" (OuterVolumeSpecName: "nova-cell1-compute-config-2") pod "14a00f0e-5a36-481b-a8ad-78032cfa0616" (UID: "14a00f0e-5a36-481b-a8ad-78032cfa0616"). InnerVolumeSpecName "nova-cell1-compute-config-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:26:10 crc kubenswrapper[4865]: I0216 23:26:10.506733 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14a00f0e-5a36-481b-a8ad-78032cfa0616-nova-cell1-compute-config-3" (OuterVolumeSpecName: "nova-cell1-compute-config-3") pod "14a00f0e-5a36-481b-a8ad-78032cfa0616" (UID: "14a00f0e-5a36-481b-a8ad-78032cfa0616"). InnerVolumeSpecName "nova-cell1-compute-config-3". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:26:10 crc kubenswrapper[4865]: I0216 23:26:10.511078 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14a00f0e-5a36-481b-a8ad-78032cfa0616-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "14a00f0e-5a36-481b-a8ad-78032cfa0616" (UID: "14a00f0e-5a36-481b-a8ad-78032cfa0616"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:26:10 crc kubenswrapper[4865]: I0216 23:26:10.513236 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14a00f0e-5a36-481b-a8ad-78032cfa0616-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "14a00f0e-5a36-481b-a8ad-78032cfa0616" (UID: "14a00f0e-5a36-481b-a8ad-78032cfa0616"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:26:10 crc kubenswrapper[4865]: I0216 23:26:10.577687 4865 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/14a00f0e-5a36-481b-a8ad-78032cfa0616-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 23:26:10 crc kubenswrapper[4865]: I0216 23:26:10.577743 4865 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/14a00f0e-5a36-481b-a8ad-78032cfa0616-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Feb 16 23:26:10 crc kubenswrapper[4865]: I0216 23:26:10.577754 4865 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/14a00f0e-5a36-481b-a8ad-78032cfa0616-nova-cell1-compute-config-2\") on node \"crc\" DevicePath \"\"" Feb 16 23:26:10 crc kubenswrapper[4865]: I0216 23:26:10.577764 4865 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: 
\"kubernetes.io/secret/14a00f0e-5a36-481b-a8ad-78032cfa0616-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Feb 16 23:26:10 crc kubenswrapper[4865]: I0216 23:26:10.577792 4865 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/14a00f0e-5a36-481b-a8ad-78032cfa0616-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Feb 16 23:26:10 crc kubenswrapper[4865]: I0216 23:26:10.577801 4865 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/14a00f0e-5a36-481b-a8ad-78032cfa0616-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 23:26:10 crc kubenswrapper[4865]: I0216 23:26:10.577810 4865 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/14a00f0e-5a36-481b-a8ad-78032cfa0616-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Feb 16 23:26:10 crc kubenswrapper[4865]: I0216 23:26:10.577819 4865 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/14a00f0e-5a36-481b-a8ad-78032cfa0616-nova-cell1-compute-config-3\") on node \"crc\" DevicePath \"\"" Feb 16 23:26:10 crc kubenswrapper[4865]: I0216 23:26:10.807564 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wd8vd" event={"ID":"14a00f0e-5a36-481b-a8ad-78032cfa0616","Type":"ContainerDied","Data":"7b9a048bc176dfa3cba7c632e628913ddbb789d7c0148f3d1a96c05fe156c1b3"} Feb 16 23:26:10 crc kubenswrapper[4865]: I0216 23:26:10.807600 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b9a048bc176dfa3cba7c632e628913ddbb789d7c0148f3d1a96c05fe156c1b3" Feb 16 23:26:10 crc kubenswrapper[4865]: I0216 23:26:10.807621 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wd8vd" Feb 16 23:26:10 crc kubenswrapper[4865]: I0216 23:26:10.927264 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8dkwj"] Feb 16 23:26:10 crc kubenswrapper[4865]: E0216 23:26:10.927652 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14a00f0e-5a36-481b-a8ad-78032cfa0616" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 16 23:26:10 crc kubenswrapper[4865]: I0216 23:26:10.927669 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="14a00f0e-5a36-481b-a8ad-78032cfa0616" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 16 23:26:10 crc kubenswrapper[4865]: I0216 23:26:10.927850 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="14a00f0e-5a36-481b-a8ad-78032cfa0616" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 16 23:26:10 crc kubenswrapper[4865]: I0216 23:26:10.928489 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8dkwj" Feb 16 23:26:10 crc kubenswrapper[4865]: I0216 23:26:10.932218 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 16 23:26:10 crc kubenswrapper[4865]: I0216 23:26:10.935453 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Feb 16 23:26:10 crc kubenswrapper[4865]: I0216 23:26:10.936733 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 16 23:26:10 crc kubenswrapper[4865]: I0216 23:26:10.937721 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rpk98" Feb 16 23:26:10 crc kubenswrapper[4865]: I0216 23:26:10.937724 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 16 23:26:10 crc kubenswrapper[4865]: I0216 23:26:10.950883 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8dkwj"] Feb 16 23:26:11 crc kubenswrapper[4865]: I0216 23:26:11.088509 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/56a9e58a-8161-4d27-96d4-1459ec03b3ed-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8dkwj\" (UID: \"56a9e58a-8161-4d27-96d4-1459ec03b3ed\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8dkwj" Feb 16 23:26:11 crc kubenswrapper[4865]: I0216 23:26:11.088628 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/56a9e58a-8161-4d27-96d4-1459ec03b3ed-ceilometer-compute-config-data-2\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-8dkwj\" (UID: \"56a9e58a-8161-4d27-96d4-1459ec03b3ed\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8dkwj" Feb 16 23:26:11 crc kubenswrapper[4865]: I0216 23:26:11.088651 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56a9e58a-8161-4d27-96d4-1459ec03b3ed-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8dkwj\" (UID: \"56a9e58a-8161-4d27-96d4-1459ec03b3ed\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8dkwj" Feb 16 23:26:11 crc kubenswrapper[4865]: I0216 23:26:11.088692 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/56a9e58a-8161-4d27-96d4-1459ec03b3ed-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8dkwj\" (UID: \"56a9e58a-8161-4d27-96d4-1459ec03b3ed\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8dkwj" Feb 16 23:26:11 crc kubenswrapper[4865]: I0216 23:26:11.088716 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/56a9e58a-8161-4d27-96d4-1459ec03b3ed-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8dkwj\" (UID: \"56a9e58a-8161-4d27-96d4-1459ec03b3ed\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8dkwj" Feb 16 23:26:11 crc kubenswrapper[4865]: I0216 23:26:11.088774 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/56a9e58a-8161-4d27-96d4-1459ec03b3ed-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8dkwj\" (UID: \"56a9e58a-8161-4d27-96d4-1459ec03b3ed\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8dkwj" Feb 16 23:26:11 crc kubenswrapper[4865]: I0216 23:26:11.088833 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ks8fh\" (UniqueName: \"kubernetes.io/projected/56a9e58a-8161-4d27-96d4-1459ec03b3ed-kube-api-access-ks8fh\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8dkwj\" (UID: \"56a9e58a-8161-4d27-96d4-1459ec03b3ed\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8dkwj" Feb 16 23:26:11 crc kubenswrapper[4865]: I0216 23:26:11.190433 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/56a9e58a-8161-4d27-96d4-1459ec03b3ed-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8dkwj\" (UID: \"56a9e58a-8161-4d27-96d4-1459ec03b3ed\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8dkwj" Feb 16 23:26:11 crc kubenswrapper[4865]: I0216 23:26:11.190536 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/56a9e58a-8161-4d27-96d4-1459ec03b3ed-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8dkwj\" (UID: \"56a9e58a-8161-4d27-96d4-1459ec03b3ed\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8dkwj" Feb 16 23:26:11 crc kubenswrapper[4865]: I0216 23:26:11.190559 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56a9e58a-8161-4d27-96d4-1459ec03b3ed-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8dkwj\" (UID: \"56a9e58a-8161-4d27-96d4-1459ec03b3ed\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8dkwj" Feb 16 23:26:11 crc kubenswrapper[4865]: I0216 23:26:11.190592 4865 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/56a9e58a-8161-4d27-96d4-1459ec03b3ed-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8dkwj\" (UID: \"56a9e58a-8161-4d27-96d4-1459ec03b3ed\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8dkwj" Feb 16 23:26:11 crc kubenswrapper[4865]: I0216 23:26:11.190617 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/56a9e58a-8161-4d27-96d4-1459ec03b3ed-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8dkwj\" (UID: \"56a9e58a-8161-4d27-96d4-1459ec03b3ed\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8dkwj" Feb 16 23:26:11 crc kubenswrapper[4865]: I0216 23:26:11.190647 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/56a9e58a-8161-4d27-96d4-1459ec03b3ed-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8dkwj\" (UID: \"56a9e58a-8161-4d27-96d4-1459ec03b3ed\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8dkwj" Feb 16 23:26:11 crc kubenswrapper[4865]: I0216 23:26:11.190688 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ks8fh\" (UniqueName: \"kubernetes.io/projected/56a9e58a-8161-4d27-96d4-1459ec03b3ed-kube-api-access-ks8fh\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8dkwj\" (UID: \"56a9e58a-8161-4d27-96d4-1459ec03b3ed\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8dkwj" Feb 16 23:26:11 crc kubenswrapper[4865]: I0216 23:26:11.196005 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/56a9e58a-8161-4d27-96d4-1459ec03b3ed-inventory\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-8dkwj\" (UID: \"56a9e58a-8161-4d27-96d4-1459ec03b3ed\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8dkwj" Feb 16 23:26:11 crc kubenswrapper[4865]: I0216 23:26:11.196019 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/56a9e58a-8161-4d27-96d4-1459ec03b3ed-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8dkwj\" (UID: \"56a9e58a-8161-4d27-96d4-1459ec03b3ed\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8dkwj" Feb 16 23:26:11 crc kubenswrapper[4865]: I0216 23:26:11.196134 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/56a9e58a-8161-4d27-96d4-1459ec03b3ed-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8dkwj\" (UID: \"56a9e58a-8161-4d27-96d4-1459ec03b3ed\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8dkwj" Feb 16 23:26:11 crc kubenswrapper[4865]: I0216 23:26:11.197019 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56a9e58a-8161-4d27-96d4-1459ec03b3ed-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8dkwj\" (UID: \"56a9e58a-8161-4d27-96d4-1459ec03b3ed\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8dkwj" Feb 16 23:26:11 crc kubenswrapper[4865]: I0216 23:26:11.198088 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/56a9e58a-8161-4d27-96d4-1459ec03b3ed-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8dkwj\" (UID: \"56a9e58a-8161-4d27-96d4-1459ec03b3ed\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8dkwj" Feb 16 23:26:11 crc kubenswrapper[4865]: I0216 23:26:11.204917 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/56a9e58a-8161-4d27-96d4-1459ec03b3ed-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8dkwj\" (UID: \"56a9e58a-8161-4d27-96d4-1459ec03b3ed\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8dkwj" Feb 16 23:26:11 crc kubenswrapper[4865]: I0216 23:26:11.208045 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ks8fh\" (UniqueName: \"kubernetes.io/projected/56a9e58a-8161-4d27-96d4-1459ec03b3ed-kube-api-access-ks8fh\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8dkwj\" (UID: \"56a9e58a-8161-4d27-96d4-1459ec03b3ed\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8dkwj" Feb 16 23:26:11 crc kubenswrapper[4865]: I0216 23:26:11.246218 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8dkwj" Feb 16 23:26:11 crc kubenswrapper[4865]: I0216 23:26:11.856866 4865 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 23:26:11 crc kubenswrapper[4865]: I0216 23:26:11.870452 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8dkwj"] Feb 16 23:26:12 crc kubenswrapper[4865]: I0216 23:26:12.823683 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8dkwj" event={"ID":"56a9e58a-8161-4d27-96d4-1459ec03b3ed","Type":"ContainerStarted","Data":"f3f2fe457ce18bc9aa3a6463dbbd95ccecf375b0e822a9b1b2afbffdbadc47ef"} Feb 16 23:26:12 crc kubenswrapper[4865]: I0216 23:26:12.824011 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8dkwj" event={"ID":"56a9e58a-8161-4d27-96d4-1459ec03b3ed","Type":"ContainerStarted","Data":"2d689bd4d6df9d3c592274bee3254712ce6d650b1cc1384930222fc035088993"} Feb 16 23:26:16 crc kubenswrapper[4865]: I0216 23:26:16.415025 4865 scope.go:117] "RemoveContainer" containerID="8b34037901d5dc289c2068f9c264e98450cf58a634d07a2fbca10b53e42e31bd" Feb 16 23:26:16 crc kubenswrapper[4865]: E0216 23:26:16.415889 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:26:30 crc kubenswrapper[4865]: I0216 23:26:30.425213 4865 scope.go:117] "RemoveContainer" containerID="8b34037901d5dc289c2068f9c264e98450cf58a634d07a2fbca10b53e42e31bd" Feb 16 23:26:30 crc 
kubenswrapper[4865]: E0216 23:26:30.426324 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:26:42 crc kubenswrapper[4865]: I0216 23:26:42.414353 4865 scope.go:117] "RemoveContainer" containerID="8b34037901d5dc289c2068f9c264e98450cf58a634d07a2fbca10b53e42e31bd" Feb 16 23:26:42 crc kubenswrapper[4865]: E0216 23:26:42.415127 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:26:57 crc kubenswrapper[4865]: I0216 23:26:57.414569 4865 scope.go:117] "RemoveContainer" containerID="8b34037901d5dc289c2068f9c264e98450cf58a634d07a2fbca10b53e42e31bd" Feb 16 23:26:57 crc kubenswrapper[4865]: E0216 23:26:57.415476 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:27:10 crc kubenswrapper[4865]: I0216 23:27:10.426422 4865 scope.go:117] "RemoveContainer" containerID="8b34037901d5dc289c2068f9c264e98450cf58a634d07a2fbca10b53e42e31bd" Feb 
16 23:27:10 crc kubenswrapper[4865]: E0216 23:27:10.427682 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:27:23 crc kubenswrapper[4865]: I0216 23:27:23.415861 4865 scope.go:117] "RemoveContainer" containerID="8b34037901d5dc289c2068f9c264e98450cf58a634d07a2fbca10b53e42e31bd" Feb 16 23:27:23 crc kubenswrapper[4865]: E0216 23:27:23.417178 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:27:34 crc kubenswrapper[4865]: I0216 23:27:34.415581 4865 scope.go:117] "RemoveContainer" containerID="8b34037901d5dc289c2068f9c264e98450cf58a634d07a2fbca10b53e42e31bd" Feb 16 23:27:34 crc kubenswrapper[4865]: E0216 23:27:34.416799 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:27:48 crc kubenswrapper[4865]: I0216 23:27:48.414651 4865 scope.go:117] "RemoveContainer" 
containerID="8b34037901d5dc289c2068f9c264e98450cf58a634d07a2fbca10b53e42e31bd" Feb 16 23:27:48 crc kubenswrapper[4865]: E0216 23:27:48.415537 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:28:02 crc kubenswrapper[4865]: I0216 23:28:02.414834 4865 scope.go:117] "RemoveContainer" containerID="8b34037901d5dc289c2068f9c264e98450cf58a634d07a2fbca10b53e42e31bd" Feb 16 23:28:02 crc kubenswrapper[4865]: E0216 23:28:02.415966 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:28:17 crc kubenswrapper[4865]: I0216 23:28:17.413973 4865 scope.go:117] "RemoveContainer" containerID="8b34037901d5dc289c2068f9c264e98450cf58a634d07a2fbca10b53e42e31bd" Feb 16 23:28:17 crc kubenswrapper[4865]: E0216 23:28:17.414733 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:28:24 crc kubenswrapper[4865]: I0216 23:28:24.778648 4865 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8dkwj" podStartSLOduration=134.232227053 podStartE2EDuration="2m14.778622096s" podCreationTimestamp="2026-02-16 23:26:10 +0000 UTC" firstStartedPulling="2026-02-16 23:26:11.856546245 +0000 UTC m=+2412.180253206" lastFinishedPulling="2026-02-16 23:26:12.402941278 +0000 UTC m=+2412.726648249" observedRunningTime="2026-02-16 23:26:12.849445171 +0000 UTC m=+2413.173152132" watchObservedRunningTime="2026-02-16 23:28:24.778622096 +0000 UTC m=+2545.102329097" Feb 16 23:28:24 crc kubenswrapper[4865]: I0216 23:28:24.828781 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-b8lkg"] Feb 16 23:28:24 crc kubenswrapper[4865]: I0216 23:28:24.837818 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b8lkg" Feb 16 23:28:24 crc kubenswrapper[4865]: I0216 23:28:24.853952 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b8lkg"] Feb 16 23:28:24 crc kubenswrapper[4865]: I0216 23:28:24.958160 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8712afee-32ae-410f-9b23-3b716fb8bc2d-catalog-content\") pod \"redhat-operators-b8lkg\" (UID: \"8712afee-32ae-410f-9b23-3b716fb8bc2d\") " pod="openshift-marketplace/redhat-operators-b8lkg" Feb 16 23:28:24 crc kubenswrapper[4865]: I0216 23:28:24.958389 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cr8w\" (UniqueName: \"kubernetes.io/projected/8712afee-32ae-410f-9b23-3b716fb8bc2d-kube-api-access-4cr8w\") pod \"redhat-operators-b8lkg\" (UID: \"8712afee-32ae-410f-9b23-3b716fb8bc2d\") " pod="openshift-marketplace/redhat-operators-b8lkg" Feb 16 23:28:24 crc kubenswrapper[4865]: I0216 
23:28:24.958753 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8712afee-32ae-410f-9b23-3b716fb8bc2d-utilities\") pod \"redhat-operators-b8lkg\" (UID: \"8712afee-32ae-410f-9b23-3b716fb8bc2d\") " pod="openshift-marketplace/redhat-operators-b8lkg" Feb 16 23:28:25 crc kubenswrapper[4865]: I0216 23:28:25.060353 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8712afee-32ae-410f-9b23-3b716fb8bc2d-catalog-content\") pod \"redhat-operators-b8lkg\" (UID: \"8712afee-32ae-410f-9b23-3b716fb8bc2d\") " pod="openshift-marketplace/redhat-operators-b8lkg" Feb 16 23:28:25 crc kubenswrapper[4865]: I0216 23:28:25.060434 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cr8w\" (UniqueName: \"kubernetes.io/projected/8712afee-32ae-410f-9b23-3b716fb8bc2d-kube-api-access-4cr8w\") pod \"redhat-operators-b8lkg\" (UID: \"8712afee-32ae-410f-9b23-3b716fb8bc2d\") " pod="openshift-marketplace/redhat-operators-b8lkg" Feb 16 23:28:25 crc kubenswrapper[4865]: I0216 23:28:25.060525 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8712afee-32ae-410f-9b23-3b716fb8bc2d-utilities\") pod \"redhat-operators-b8lkg\" (UID: \"8712afee-32ae-410f-9b23-3b716fb8bc2d\") " pod="openshift-marketplace/redhat-operators-b8lkg" Feb 16 23:28:25 crc kubenswrapper[4865]: I0216 23:28:25.060870 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8712afee-32ae-410f-9b23-3b716fb8bc2d-catalog-content\") pod \"redhat-operators-b8lkg\" (UID: \"8712afee-32ae-410f-9b23-3b716fb8bc2d\") " pod="openshift-marketplace/redhat-operators-b8lkg" Feb 16 23:28:25 crc kubenswrapper[4865]: I0216 23:28:25.060915 4865 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8712afee-32ae-410f-9b23-3b716fb8bc2d-utilities\") pod \"redhat-operators-b8lkg\" (UID: \"8712afee-32ae-410f-9b23-3b716fb8bc2d\") " pod="openshift-marketplace/redhat-operators-b8lkg" Feb 16 23:28:25 crc kubenswrapper[4865]: I0216 23:28:25.085269 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cr8w\" (UniqueName: \"kubernetes.io/projected/8712afee-32ae-410f-9b23-3b716fb8bc2d-kube-api-access-4cr8w\") pod \"redhat-operators-b8lkg\" (UID: \"8712afee-32ae-410f-9b23-3b716fb8bc2d\") " pod="openshift-marketplace/redhat-operators-b8lkg" Feb 16 23:28:25 crc kubenswrapper[4865]: I0216 23:28:25.174249 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b8lkg" Feb 16 23:28:25 crc kubenswrapper[4865]: I0216 23:28:25.719393 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b8lkg"] Feb 16 23:28:26 crc kubenswrapper[4865]: I0216 23:28:26.199151 4865 generic.go:334] "Generic (PLEG): container finished" podID="8712afee-32ae-410f-9b23-3b716fb8bc2d" containerID="e5b6688d9b0b1ed693496d5a76aac4996c41043079c4b36a0c95502b0af41e56" exitCode=0 Feb 16 23:28:26 crc kubenswrapper[4865]: I0216 23:28:26.199291 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b8lkg" event={"ID":"8712afee-32ae-410f-9b23-3b716fb8bc2d","Type":"ContainerDied","Data":"e5b6688d9b0b1ed693496d5a76aac4996c41043079c4b36a0c95502b0af41e56"} Feb 16 23:28:26 crc kubenswrapper[4865]: I0216 23:28:26.199493 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b8lkg" event={"ID":"8712afee-32ae-410f-9b23-3b716fb8bc2d","Type":"ContainerStarted","Data":"ac9b2b4e4e172164fb8873e7b22f4fa4eb3a91203988f6294e8ad5dfdab92111"} Feb 16 23:28:27 crc kubenswrapper[4865]: I0216 
23:28:27.215805 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b8lkg" event={"ID":"8712afee-32ae-410f-9b23-3b716fb8bc2d","Type":"ContainerStarted","Data":"40e9550d79607be87f23f27d4b4d71d22dab84dc9e278e594e1a54ae51271461"} Feb 16 23:28:28 crc kubenswrapper[4865]: I0216 23:28:28.238544 4865 generic.go:334] "Generic (PLEG): container finished" podID="8712afee-32ae-410f-9b23-3b716fb8bc2d" containerID="40e9550d79607be87f23f27d4b4d71d22dab84dc9e278e594e1a54ae51271461" exitCode=0 Feb 16 23:28:28 crc kubenswrapper[4865]: I0216 23:28:28.239088 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b8lkg" event={"ID":"8712afee-32ae-410f-9b23-3b716fb8bc2d","Type":"ContainerDied","Data":"40e9550d79607be87f23f27d4b4d71d22dab84dc9e278e594e1a54ae51271461"} Feb 16 23:28:29 crc kubenswrapper[4865]: I0216 23:28:29.252308 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b8lkg" event={"ID":"8712afee-32ae-410f-9b23-3b716fb8bc2d","Type":"ContainerStarted","Data":"cfd01ba08acfa511c653666c5e122d86665b843fcf1170361e7e98fcf290a265"} Feb 16 23:28:29 crc kubenswrapper[4865]: I0216 23:28:29.294227 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-b8lkg" podStartSLOduration=2.848852362 podStartE2EDuration="5.294200996s" podCreationTimestamp="2026-02-16 23:28:24 +0000 UTC" firstStartedPulling="2026-02-16 23:28:26.203168374 +0000 UTC m=+2546.526875335" lastFinishedPulling="2026-02-16 23:28:28.648516998 +0000 UTC m=+2548.972223969" observedRunningTime="2026-02-16 23:28:29.272927593 +0000 UTC m=+2549.596634584" watchObservedRunningTime="2026-02-16 23:28:29.294200996 +0000 UTC m=+2549.617907967" Feb 16 23:28:32 crc kubenswrapper[4865]: I0216 23:28:32.415208 4865 scope.go:117] "RemoveContainer" containerID="8b34037901d5dc289c2068f9c264e98450cf58a634d07a2fbca10b53e42e31bd" Feb 16 23:28:32 crc 
kubenswrapper[4865]: E0216 23:28:32.415980 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:28:35 crc kubenswrapper[4865]: I0216 23:28:35.174718 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-b8lkg" Feb 16 23:28:35 crc kubenswrapper[4865]: I0216 23:28:35.175181 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-b8lkg" Feb 16 23:28:36 crc kubenswrapper[4865]: I0216 23:28:36.228661 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-b8lkg" podUID="8712afee-32ae-410f-9b23-3b716fb8bc2d" containerName="registry-server" probeResult="failure" output=< Feb 16 23:28:36 crc kubenswrapper[4865]: timeout: failed to connect service ":50051" within 1s Feb 16 23:28:36 crc kubenswrapper[4865]: > Feb 16 23:28:44 crc kubenswrapper[4865]: I0216 23:28:44.431648 4865 generic.go:334] "Generic (PLEG): container finished" podID="56a9e58a-8161-4d27-96d4-1459ec03b3ed" containerID="f3f2fe457ce18bc9aa3a6463dbbd95ccecf375b0e822a9b1b2afbffdbadc47ef" exitCode=0 Feb 16 23:28:44 crc kubenswrapper[4865]: I0216 23:28:44.438928 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8dkwj" event={"ID":"56a9e58a-8161-4d27-96d4-1459ec03b3ed","Type":"ContainerDied","Data":"f3f2fe457ce18bc9aa3a6463dbbd95ccecf375b0e822a9b1b2afbffdbadc47ef"} Feb 16 23:28:45 crc kubenswrapper[4865]: I0216 23:28:45.260307 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-b8lkg" Feb 16 23:28:45 crc kubenswrapper[4865]: I0216 23:28:45.352889 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-b8lkg" Feb 16 23:28:45 crc kubenswrapper[4865]: I0216 23:28:45.518734 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b8lkg"] Feb 16 23:28:45 crc kubenswrapper[4865]: I0216 23:28:45.960090 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8dkwj" Feb 16 23:28:46 crc kubenswrapper[4865]: I0216 23:28:46.030434 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/56a9e58a-8161-4d27-96d4-1459ec03b3ed-ceilometer-compute-config-data-0\") pod \"56a9e58a-8161-4d27-96d4-1459ec03b3ed\" (UID: \"56a9e58a-8161-4d27-96d4-1459ec03b3ed\") " Feb 16 23:28:46 crc kubenswrapper[4865]: I0216 23:28:46.031234 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks8fh\" (UniqueName: \"kubernetes.io/projected/56a9e58a-8161-4d27-96d4-1459ec03b3ed-kube-api-access-ks8fh\") pod \"56a9e58a-8161-4d27-96d4-1459ec03b3ed\" (UID: \"56a9e58a-8161-4d27-96d4-1459ec03b3ed\") " Feb 16 23:28:46 crc kubenswrapper[4865]: I0216 23:28:46.031420 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56a9e58a-8161-4d27-96d4-1459ec03b3ed-telemetry-combined-ca-bundle\") pod \"56a9e58a-8161-4d27-96d4-1459ec03b3ed\" (UID: \"56a9e58a-8161-4d27-96d4-1459ec03b3ed\") " Feb 16 23:28:46 crc kubenswrapper[4865]: I0216 23:28:46.031551 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/56a9e58a-8161-4d27-96d4-1459ec03b3ed-inventory\") pod \"56a9e58a-8161-4d27-96d4-1459ec03b3ed\" (UID: \"56a9e58a-8161-4d27-96d4-1459ec03b3ed\") " Feb 16 23:28:46 crc kubenswrapper[4865]: I0216 23:28:46.031664 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/56a9e58a-8161-4d27-96d4-1459ec03b3ed-ssh-key-openstack-edpm-ipam\") pod \"56a9e58a-8161-4d27-96d4-1459ec03b3ed\" (UID: \"56a9e58a-8161-4d27-96d4-1459ec03b3ed\") " Feb 16 23:28:46 crc kubenswrapper[4865]: I0216 23:28:46.031810 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/56a9e58a-8161-4d27-96d4-1459ec03b3ed-ceilometer-compute-config-data-1\") pod \"56a9e58a-8161-4d27-96d4-1459ec03b3ed\" (UID: \"56a9e58a-8161-4d27-96d4-1459ec03b3ed\") " Feb 16 23:28:46 crc kubenswrapper[4865]: I0216 23:28:46.031939 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/56a9e58a-8161-4d27-96d4-1459ec03b3ed-ceilometer-compute-config-data-2\") pod \"56a9e58a-8161-4d27-96d4-1459ec03b3ed\" (UID: \"56a9e58a-8161-4d27-96d4-1459ec03b3ed\") " Feb 16 23:28:46 crc kubenswrapper[4865]: I0216 23:28:46.036704 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56a9e58a-8161-4d27-96d4-1459ec03b3ed-kube-api-access-ks8fh" (OuterVolumeSpecName: "kube-api-access-ks8fh") pod "56a9e58a-8161-4d27-96d4-1459ec03b3ed" (UID: "56a9e58a-8161-4d27-96d4-1459ec03b3ed"). InnerVolumeSpecName "kube-api-access-ks8fh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:28:46 crc kubenswrapper[4865]: I0216 23:28:46.036765 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56a9e58a-8161-4d27-96d4-1459ec03b3ed-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "56a9e58a-8161-4d27-96d4-1459ec03b3ed" (UID: "56a9e58a-8161-4d27-96d4-1459ec03b3ed"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:28:46 crc kubenswrapper[4865]: I0216 23:28:46.058190 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56a9e58a-8161-4d27-96d4-1459ec03b3ed-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "56a9e58a-8161-4d27-96d4-1459ec03b3ed" (UID: "56a9e58a-8161-4d27-96d4-1459ec03b3ed"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:28:46 crc kubenswrapper[4865]: I0216 23:28:46.062169 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56a9e58a-8161-4d27-96d4-1459ec03b3ed-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "56a9e58a-8161-4d27-96d4-1459ec03b3ed" (UID: "56a9e58a-8161-4d27-96d4-1459ec03b3ed"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:28:46 crc kubenswrapper[4865]: I0216 23:28:46.079942 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56a9e58a-8161-4d27-96d4-1459ec03b3ed-inventory" (OuterVolumeSpecName: "inventory") pod "56a9e58a-8161-4d27-96d4-1459ec03b3ed" (UID: "56a9e58a-8161-4d27-96d4-1459ec03b3ed"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:28:46 crc kubenswrapper[4865]: I0216 23:28:46.082838 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56a9e58a-8161-4d27-96d4-1459ec03b3ed-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "56a9e58a-8161-4d27-96d4-1459ec03b3ed" (UID: "56a9e58a-8161-4d27-96d4-1459ec03b3ed"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:28:46 crc kubenswrapper[4865]: I0216 23:28:46.089405 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56a9e58a-8161-4d27-96d4-1459ec03b3ed-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "56a9e58a-8161-4d27-96d4-1459ec03b3ed" (UID: "56a9e58a-8161-4d27-96d4-1459ec03b3ed"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:28:46 crc kubenswrapper[4865]: I0216 23:28:46.135865 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ks8fh\" (UniqueName: \"kubernetes.io/projected/56a9e58a-8161-4d27-96d4-1459ec03b3ed-kube-api-access-ks8fh\") on node \"crc\" DevicePath \"\"" Feb 16 23:28:46 crc kubenswrapper[4865]: I0216 23:28:46.135903 4865 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56a9e58a-8161-4d27-96d4-1459ec03b3ed-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 23:28:46 crc kubenswrapper[4865]: I0216 23:28:46.135934 4865 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/56a9e58a-8161-4d27-96d4-1459ec03b3ed-inventory\") on node \"crc\" DevicePath \"\"" Feb 16 23:28:46 crc kubenswrapper[4865]: I0216 23:28:46.135943 4865 reconciler_common.go:293] "Volume detached for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/56a9e58a-8161-4d27-96d4-1459ec03b3ed-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 16 23:28:46 crc kubenswrapper[4865]: I0216 23:28:46.135953 4865 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/56a9e58a-8161-4d27-96d4-1459ec03b3ed-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 16 23:28:46 crc kubenswrapper[4865]: I0216 23:28:46.135964 4865 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/56a9e58a-8161-4d27-96d4-1459ec03b3ed-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 16 23:28:46 crc kubenswrapper[4865]: I0216 23:28:46.135973 4865 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/56a9e58a-8161-4d27-96d4-1459ec03b3ed-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 16 23:28:46 crc kubenswrapper[4865]: I0216 23:28:46.464513 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8dkwj" event={"ID":"56a9e58a-8161-4d27-96d4-1459ec03b3ed","Type":"ContainerDied","Data":"2d689bd4d6df9d3c592274bee3254712ce6d650b1cc1384930222fc035088993"} Feb 16 23:28:46 crc kubenswrapper[4865]: I0216 23:28:46.464900 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d689bd4d6df9d3c592274bee3254712ce6d650b1cc1384930222fc035088993" Feb 16 23:28:46 crc kubenswrapper[4865]: I0216 23:28:46.464686 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-b8lkg" podUID="8712afee-32ae-410f-9b23-3b716fb8bc2d" containerName="registry-server" containerID="cri-o://cfd01ba08acfa511c653666c5e122d86665b843fcf1170361e7e98fcf290a265" gracePeriod=2 Feb 16 
23:28:46 crc kubenswrapper[4865]: I0216 23:28:46.464555 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8dkwj" Feb 16 23:28:46 crc kubenswrapper[4865]: I0216 23:28:46.810402 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b8lkg" Feb 16 23:28:46 crc kubenswrapper[4865]: I0216 23:28:46.948439 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8712afee-32ae-410f-9b23-3b716fb8bc2d-catalog-content\") pod \"8712afee-32ae-410f-9b23-3b716fb8bc2d\" (UID: \"8712afee-32ae-410f-9b23-3b716fb8bc2d\") " Feb 16 23:28:46 crc kubenswrapper[4865]: I0216 23:28:46.948532 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4cr8w\" (UniqueName: \"kubernetes.io/projected/8712afee-32ae-410f-9b23-3b716fb8bc2d-kube-api-access-4cr8w\") pod \"8712afee-32ae-410f-9b23-3b716fb8bc2d\" (UID: \"8712afee-32ae-410f-9b23-3b716fb8bc2d\") " Feb 16 23:28:46 crc kubenswrapper[4865]: I0216 23:28:46.948707 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8712afee-32ae-410f-9b23-3b716fb8bc2d-utilities\") pod \"8712afee-32ae-410f-9b23-3b716fb8bc2d\" (UID: \"8712afee-32ae-410f-9b23-3b716fb8bc2d\") " Feb 16 23:28:46 crc kubenswrapper[4865]: I0216 23:28:46.950045 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8712afee-32ae-410f-9b23-3b716fb8bc2d-utilities" (OuterVolumeSpecName: "utilities") pod "8712afee-32ae-410f-9b23-3b716fb8bc2d" (UID: "8712afee-32ae-410f-9b23-3b716fb8bc2d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:28:46 crc kubenswrapper[4865]: I0216 23:28:46.954068 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8712afee-32ae-410f-9b23-3b716fb8bc2d-kube-api-access-4cr8w" (OuterVolumeSpecName: "kube-api-access-4cr8w") pod "8712afee-32ae-410f-9b23-3b716fb8bc2d" (UID: "8712afee-32ae-410f-9b23-3b716fb8bc2d"). InnerVolumeSpecName "kube-api-access-4cr8w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:28:47 crc kubenswrapper[4865]: I0216 23:28:47.050542 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8712afee-32ae-410f-9b23-3b716fb8bc2d-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 23:28:47 crc kubenswrapper[4865]: I0216 23:28:47.050581 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4cr8w\" (UniqueName: \"kubernetes.io/projected/8712afee-32ae-410f-9b23-3b716fb8bc2d-kube-api-access-4cr8w\") on node \"crc\" DevicePath \"\"" Feb 16 23:28:47 crc kubenswrapper[4865]: I0216 23:28:47.072303 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8712afee-32ae-410f-9b23-3b716fb8bc2d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8712afee-32ae-410f-9b23-3b716fb8bc2d" (UID: "8712afee-32ae-410f-9b23-3b716fb8bc2d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:28:47 crc kubenswrapper[4865]: I0216 23:28:47.151954 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8712afee-32ae-410f-9b23-3b716fb8bc2d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 23:28:47 crc kubenswrapper[4865]: I0216 23:28:47.416352 4865 scope.go:117] "RemoveContainer" containerID="8b34037901d5dc289c2068f9c264e98450cf58a634d07a2fbca10b53e42e31bd" Feb 16 23:28:47 crc kubenswrapper[4865]: I0216 23:28:47.477362 4865 generic.go:334] "Generic (PLEG): container finished" podID="8712afee-32ae-410f-9b23-3b716fb8bc2d" containerID="cfd01ba08acfa511c653666c5e122d86665b843fcf1170361e7e98fcf290a265" exitCode=0 Feb 16 23:28:47 crc kubenswrapper[4865]: I0216 23:28:47.477406 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b8lkg" event={"ID":"8712afee-32ae-410f-9b23-3b716fb8bc2d","Type":"ContainerDied","Data":"cfd01ba08acfa511c653666c5e122d86665b843fcf1170361e7e98fcf290a265"} Feb 16 23:28:47 crc kubenswrapper[4865]: I0216 23:28:47.477431 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b8lkg" event={"ID":"8712afee-32ae-410f-9b23-3b716fb8bc2d","Type":"ContainerDied","Data":"ac9b2b4e4e172164fb8873e7b22f4fa4eb3a91203988f6294e8ad5dfdab92111"} Feb 16 23:28:47 crc kubenswrapper[4865]: I0216 23:28:47.477449 4865 scope.go:117] "RemoveContainer" containerID="cfd01ba08acfa511c653666c5e122d86665b843fcf1170361e7e98fcf290a265" Feb 16 23:28:47 crc kubenswrapper[4865]: I0216 23:28:47.477596 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-b8lkg" Feb 16 23:28:47 crc kubenswrapper[4865]: I0216 23:28:47.522288 4865 scope.go:117] "RemoveContainer" containerID="40e9550d79607be87f23f27d4b4d71d22dab84dc9e278e594e1a54ae51271461" Feb 16 23:28:47 crc kubenswrapper[4865]: I0216 23:28:47.557358 4865 scope.go:117] "RemoveContainer" containerID="e5b6688d9b0b1ed693496d5a76aac4996c41043079c4b36a0c95502b0af41e56" Feb 16 23:28:47 crc kubenswrapper[4865]: I0216 23:28:47.570785 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b8lkg"] Feb 16 23:28:47 crc kubenswrapper[4865]: I0216 23:28:47.584706 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-b8lkg"] Feb 16 23:28:47 crc kubenswrapper[4865]: I0216 23:28:47.612526 4865 scope.go:117] "RemoveContainer" containerID="cfd01ba08acfa511c653666c5e122d86665b843fcf1170361e7e98fcf290a265" Feb 16 23:28:47 crc kubenswrapper[4865]: E0216 23:28:47.613062 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cfd01ba08acfa511c653666c5e122d86665b843fcf1170361e7e98fcf290a265\": container with ID starting with cfd01ba08acfa511c653666c5e122d86665b843fcf1170361e7e98fcf290a265 not found: ID does not exist" containerID="cfd01ba08acfa511c653666c5e122d86665b843fcf1170361e7e98fcf290a265" Feb 16 23:28:47 crc kubenswrapper[4865]: I0216 23:28:47.613132 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfd01ba08acfa511c653666c5e122d86665b843fcf1170361e7e98fcf290a265"} err="failed to get container status \"cfd01ba08acfa511c653666c5e122d86665b843fcf1170361e7e98fcf290a265\": rpc error: code = NotFound desc = could not find container \"cfd01ba08acfa511c653666c5e122d86665b843fcf1170361e7e98fcf290a265\": container with ID starting with cfd01ba08acfa511c653666c5e122d86665b843fcf1170361e7e98fcf290a265 not found: ID does 
not exist" Feb 16 23:28:47 crc kubenswrapper[4865]: I0216 23:28:47.613176 4865 scope.go:117] "RemoveContainer" containerID="40e9550d79607be87f23f27d4b4d71d22dab84dc9e278e594e1a54ae51271461" Feb 16 23:28:47 crc kubenswrapper[4865]: E0216 23:28:47.614837 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40e9550d79607be87f23f27d4b4d71d22dab84dc9e278e594e1a54ae51271461\": container with ID starting with 40e9550d79607be87f23f27d4b4d71d22dab84dc9e278e594e1a54ae51271461 not found: ID does not exist" containerID="40e9550d79607be87f23f27d4b4d71d22dab84dc9e278e594e1a54ae51271461" Feb 16 23:28:47 crc kubenswrapper[4865]: I0216 23:28:47.614886 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40e9550d79607be87f23f27d4b4d71d22dab84dc9e278e594e1a54ae51271461"} err="failed to get container status \"40e9550d79607be87f23f27d4b4d71d22dab84dc9e278e594e1a54ae51271461\": rpc error: code = NotFound desc = could not find container \"40e9550d79607be87f23f27d4b4d71d22dab84dc9e278e594e1a54ae51271461\": container with ID starting with 40e9550d79607be87f23f27d4b4d71d22dab84dc9e278e594e1a54ae51271461 not found: ID does not exist" Feb 16 23:28:47 crc kubenswrapper[4865]: I0216 23:28:47.614925 4865 scope.go:117] "RemoveContainer" containerID="e5b6688d9b0b1ed693496d5a76aac4996c41043079c4b36a0c95502b0af41e56" Feb 16 23:28:47 crc kubenswrapper[4865]: E0216 23:28:47.615231 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5b6688d9b0b1ed693496d5a76aac4996c41043079c4b36a0c95502b0af41e56\": container with ID starting with e5b6688d9b0b1ed693496d5a76aac4996c41043079c4b36a0c95502b0af41e56 not found: ID does not exist" containerID="e5b6688d9b0b1ed693496d5a76aac4996c41043079c4b36a0c95502b0af41e56" Feb 16 23:28:47 crc kubenswrapper[4865]: I0216 23:28:47.615263 4865 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5b6688d9b0b1ed693496d5a76aac4996c41043079c4b36a0c95502b0af41e56"} err="failed to get container status \"e5b6688d9b0b1ed693496d5a76aac4996c41043079c4b36a0c95502b0af41e56\": rpc error: code = NotFound desc = could not find container \"e5b6688d9b0b1ed693496d5a76aac4996c41043079c4b36a0c95502b0af41e56\": container with ID starting with e5b6688d9b0b1ed693496d5a76aac4996c41043079c4b36a0c95502b0af41e56 not found: ID does not exist" Feb 16 23:28:48 crc kubenswrapper[4865]: I0216 23:28:48.425113 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8712afee-32ae-410f-9b23-3b716fb8bc2d" path="/var/lib/kubelet/pods/8712afee-32ae-410f-9b23-3b716fb8bc2d/volumes" Feb 16 23:28:48 crc kubenswrapper[4865]: I0216 23:28:48.489265 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" event={"ID":"af5ee041-5763-4a28-9d12-7ba21bbb9dbc","Type":"ContainerStarted","Data":"49b3b593ee13ce5cd3fd410ea233d627ec5352f22f0ea7e277f2a08582a5ceeb"} Feb 16 23:29:17 crc kubenswrapper[4865]: I0216 23:29:17.240733 4865 patch_prober.go:28] interesting pod/route-controller-manager-5b8665584b-ll6f8 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 23:29:17 crc kubenswrapper[4865]: I0216 23:29:17.241273 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5b8665584b-ll6f8" podUID="216bed02-0b05-4cef-b94b-c53b9c78a597" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 23:29:17 
crc kubenswrapper[4865]: I0216 23:29:17.559803 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-pzb4b" podUID="bbc85b0c-aae5-4657-8c81-fed6b49e5d5d" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.73:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 23:29:17 crc kubenswrapper[4865]: I0216 23:29:17.561320 4865 patch_prober.go:28] interesting pod/route-controller-manager-5b8665584b-ll6f8 container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 23:29:17 crc kubenswrapper[4865]: I0216 23:29:17.561363 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-5b8665584b-ll6f8" podUID="216bed02-0b05-4cef-b94b-c53b9c78a597" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 23:29:40 crc kubenswrapper[4865]: I0216 23:29:40.259479 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Feb 16 23:29:40 crc kubenswrapper[4865]: E0216 23:29:40.262468 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8712afee-32ae-410f-9b23-3b716fb8bc2d" containerName="extract-content" Feb 16 23:29:40 crc kubenswrapper[4865]: I0216 23:29:40.262494 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="8712afee-32ae-410f-9b23-3b716fb8bc2d" containerName="extract-content" Feb 16 23:29:40 crc kubenswrapper[4865]: E0216 23:29:40.262551 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8712afee-32ae-410f-9b23-3b716fb8bc2d" containerName="registry-server" Feb 16 23:29:40 
crc kubenswrapper[4865]: I0216 23:29:40.262560 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="8712afee-32ae-410f-9b23-3b716fb8bc2d" containerName="registry-server" Feb 16 23:29:40 crc kubenswrapper[4865]: E0216 23:29:40.262578 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56a9e58a-8161-4d27-96d4-1459ec03b3ed" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 16 23:29:40 crc kubenswrapper[4865]: I0216 23:29:40.262587 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="56a9e58a-8161-4d27-96d4-1459ec03b3ed" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 16 23:29:40 crc kubenswrapper[4865]: E0216 23:29:40.262601 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8712afee-32ae-410f-9b23-3b716fb8bc2d" containerName="extract-utilities" Feb 16 23:29:40 crc kubenswrapper[4865]: I0216 23:29:40.262609 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="8712afee-32ae-410f-9b23-3b716fb8bc2d" containerName="extract-utilities" Feb 16 23:29:40 crc kubenswrapper[4865]: I0216 23:29:40.262866 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="56a9e58a-8161-4d27-96d4-1459ec03b3ed" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 16 23:29:40 crc kubenswrapper[4865]: I0216 23:29:40.262909 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="8712afee-32ae-410f-9b23-3b716fb8bc2d" containerName="registry-server" Feb 16 23:29:40 crc kubenswrapper[4865]: I0216 23:29:40.263727 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 16 23:29:40 crc kubenswrapper[4865]: I0216 23:29:40.267826 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 16 23:29:40 crc kubenswrapper[4865]: I0216 23:29:40.268042 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Feb 16 23:29:40 crc kubenswrapper[4865]: I0216 23:29:40.268156 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Feb 16 23:29:40 crc kubenswrapper[4865]: I0216 23:29:40.270103 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-9qvcb" Feb 16 23:29:40 crc kubenswrapper[4865]: I0216 23:29:40.296504 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 16 23:29:40 crc kubenswrapper[4865]: I0216 23:29:40.308169 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dc785498-c658-47ed-8329-0e8c81c771be-config-data\") pod \"tempest-tests-tempest\" (UID: \"dc785498-c658-47ed-8329-0e8c81c771be\") " pod="openstack/tempest-tests-tempest" Feb 16 23:29:40 crc kubenswrapper[4865]: I0216 23:29:40.308342 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/dc785498-c658-47ed-8329-0e8c81c771be-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"dc785498-c658-47ed-8329-0e8c81c771be\") " pod="openstack/tempest-tests-tempest" Feb 16 23:29:40 crc kubenswrapper[4865]: I0216 23:29:40.308429 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: 
\"kubernetes.io/secret/dc785498-c658-47ed-8329-0e8c81c771be-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"dc785498-c658-47ed-8329-0e8c81c771be\") " pod="openstack/tempest-tests-tempest" Feb 16 23:29:40 crc kubenswrapper[4865]: I0216 23:29:40.409524 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dc785498-c658-47ed-8329-0e8c81c771be-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"dc785498-c658-47ed-8329-0e8c81c771be\") " pod="openstack/tempest-tests-tempest" Feb 16 23:29:40 crc kubenswrapper[4865]: I0216 23:29:40.409588 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/dc785498-c658-47ed-8329-0e8c81c771be-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"dc785498-c658-47ed-8329-0e8c81c771be\") " pod="openstack/tempest-tests-tempest" Feb 16 23:29:40 crc kubenswrapper[4865]: I0216 23:29:40.409631 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/dc785498-c658-47ed-8329-0e8c81c771be-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"dc785498-c658-47ed-8329-0e8c81c771be\") " pod="openstack/tempest-tests-tempest" Feb 16 23:29:40 crc kubenswrapper[4865]: I0216 23:29:40.409691 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/dc785498-c658-47ed-8329-0e8c81c771be-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"dc785498-c658-47ed-8329-0e8c81c771be\") " pod="openstack/tempest-tests-tempest" Feb 16 23:29:40 crc kubenswrapper[4865]: I0216 23:29:40.409714 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: 
\"kubernetes.io/secret/dc785498-c658-47ed-8329-0e8c81c771be-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"dc785498-c658-47ed-8329-0e8c81c771be\") " pod="openstack/tempest-tests-tempest" Feb 16 23:29:40 crc kubenswrapper[4865]: I0216 23:29:40.409741 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"dc785498-c658-47ed-8329-0e8c81c771be\") " pod="openstack/tempest-tests-tempest" Feb 16 23:29:40 crc kubenswrapper[4865]: I0216 23:29:40.409770 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lhmf\" (UniqueName: \"kubernetes.io/projected/dc785498-c658-47ed-8329-0e8c81c771be-kube-api-access-4lhmf\") pod \"tempest-tests-tempest\" (UID: \"dc785498-c658-47ed-8329-0e8c81c771be\") " pod="openstack/tempest-tests-tempest" Feb 16 23:29:40 crc kubenswrapper[4865]: I0216 23:29:40.409820 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dc785498-c658-47ed-8329-0e8c81c771be-config-data\") pod \"tempest-tests-tempest\" (UID: \"dc785498-c658-47ed-8329-0e8c81c771be\") " pod="openstack/tempest-tests-tempest" Feb 16 23:29:40 crc kubenswrapper[4865]: I0216 23:29:40.409865 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/dc785498-c658-47ed-8329-0e8c81c771be-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"dc785498-c658-47ed-8329-0e8c81c771be\") " pod="openstack/tempest-tests-tempest" Feb 16 23:29:40 crc kubenswrapper[4865]: I0216 23:29:40.411096 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/dc785498-c658-47ed-8329-0e8c81c771be-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"dc785498-c658-47ed-8329-0e8c81c771be\") " pod="openstack/tempest-tests-tempest" Feb 16 23:29:40 crc kubenswrapper[4865]: I0216 23:29:40.411914 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dc785498-c658-47ed-8329-0e8c81c771be-config-data\") pod \"tempest-tests-tempest\" (UID: \"dc785498-c658-47ed-8329-0e8c81c771be\") " pod="openstack/tempest-tests-tempest" Feb 16 23:29:40 crc kubenswrapper[4865]: I0216 23:29:40.421077 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/dc785498-c658-47ed-8329-0e8c81c771be-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"dc785498-c658-47ed-8329-0e8c81c771be\") " pod="openstack/tempest-tests-tempest" Feb 16 23:29:40 crc kubenswrapper[4865]: I0216 23:29:40.514609 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lhmf\" (UniqueName: \"kubernetes.io/projected/dc785498-c658-47ed-8329-0e8c81c771be-kube-api-access-4lhmf\") pod \"tempest-tests-tempest\" (UID: \"dc785498-c658-47ed-8329-0e8c81c771be\") " pod="openstack/tempest-tests-tempest" Feb 16 23:29:40 crc kubenswrapper[4865]: I0216 23:29:40.515573 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/dc785498-c658-47ed-8329-0e8c81c771be-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"dc785498-c658-47ed-8329-0e8c81c771be\") " pod="openstack/tempest-tests-tempest" Feb 16 23:29:40 crc kubenswrapper[4865]: I0216 23:29:40.515628 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dc785498-c658-47ed-8329-0e8c81c771be-ssh-key\") pod 
\"tempest-tests-tempest\" (UID: \"dc785498-c658-47ed-8329-0e8c81c771be\") " pod="openstack/tempest-tests-tempest" Feb 16 23:29:40 crc kubenswrapper[4865]: I0216 23:29:40.515687 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/dc785498-c658-47ed-8329-0e8c81c771be-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"dc785498-c658-47ed-8329-0e8c81c771be\") " pod="openstack/tempest-tests-tempest" Feb 16 23:29:40 crc kubenswrapper[4865]: I0216 23:29:40.515833 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/dc785498-c658-47ed-8329-0e8c81c771be-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"dc785498-c658-47ed-8329-0e8c81c771be\") " pod="openstack/tempest-tests-tempest" Feb 16 23:29:40 crc kubenswrapper[4865]: I0216 23:29:40.515892 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"dc785498-c658-47ed-8329-0e8c81c771be\") " pod="openstack/tempest-tests-tempest" Feb 16 23:29:40 crc kubenswrapper[4865]: I0216 23:29:40.516197 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/dc785498-c658-47ed-8329-0e8c81c771be-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"dc785498-c658-47ed-8329-0e8c81c771be\") " pod="openstack/tempest-tests-tempest" Feb 16 23:29:40 crc kubenswrapper[4865]: I0216 23:29:40.516263 4865 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"dc785498-c658-47ed-8329-0e8c81c771be\") device mount path \"/mnt/openstack/pv10\"" 
pod="openstack/tempest-tests-tempest" Feb 16 23:29:40 crc kubenswrapper[4865]: I0216 23:29:40.517398 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/dc785498-c658-47ed-8329-0e8c81c771be-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"dc785498-c658-47ed-8329-0e8c81c771be\") " pod="openstack/tempest-tests-tempest" Feb 16 23:29:40 crc kubenswrapper[4865]: I0216 23:29:40.522941 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/dc785498-c658-47ed-8329-0e8c81c771be-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"dc785498-c658-47ed-8329-0e8c81c771be\") " pod="openstack/tempest-tests-tempest" Feb 16 23:29:40 crc kubenswrapper[4865]: I0216 23:29:40.527332 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dc785498-c658-47ed-8329-0e8c81c771be-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"dc785498-c658-47ed-8329-0e8c81c771be\") " pod="openstack/tempest-tests-tempest" Feb 16 23:29:40 crc kubenswrapper[4865]: I0216 23:29:40.531987 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lhmf\" (UniqueName: \"kubernetes.io/projected/dc785498-c658-47ed-8329-0e8c81c771be-kube-api-access-4lhmf\") pod \"tempest-tests-tempest\" (UID: \"dc785498-c658-47ed-8329-0e8c81c771be\") " pod="openstack/tempest-tests-tempest" Feb 16 23:29:40 crc kubenswrapper[4865]: I0216 23:29:40.554187 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"dc785498-c658-47ed-8329-0e8c81c771be\") " pod="openstack/tempest-tests-tempest" Feb 16 23:29:40 crc kubenswrapper[4865]: I0216 23:29:40.608311 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 16 23:29:41 crc kubenswrapper[4865]: I0216 23:29:41.061193 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 16 23:29:41 crc kubenswrapper[4865]: W0216 23:29:41.065593 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc785498_c658_47ed_8329_0e8c81c771be.slice/crio-5b6ed13b08f907ff8003fcb9e5d25783993c6c25a162f58109e68beb962896ee WatchSource:0}: Error finding container 5b6ed13b08f907ff8003fcb9e5d25783993c6c25a162f58109e68beb962896ee: Status 404 returned error can't find the container with id 5b6ed13b08f907ff8003fcb9e5d25783993c6c25a162f58109e68beb962896ee Feb 16 23:29:41 crc kubenswrapper[4865]: I0216 23:29:41.269175 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"dc785498-c658-47ed-8329-0e8c81c771be","Type":"ContainerStarted","Data":"5b6ed13b08f907ff8003fcb9e5d25783993c6c25a162f58109e68beb962896ee"} Feb 16 23:30:00 crc kubenswrapper[4865]: I0216 23:30:00.157325 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521410-d42nz"] Feb 16 23:30:00 crc kubenswrapper[4865]: I0216 23:30:00.158968 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521410-d42nz" Feb 16 23:30:00 crc kubenswrapper[4865]: I0216 23:30:00.161728 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 23:30:00 crc kubenswrapper[4865]: I0216 23:30:00.162840 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 23:30:00 crc kubenswrapper[4865]: I0216 23:30:00.173997 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521410-d42nz"] Feb 16 23:30:00 crc kubenswrapper[4865]: I0216 23:30:00.256339 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvfgd\" (UniqueName: \"kubernetes.io/projected/18f83a31-4fda-48b7-a260-745c406b3ec1-kube-api-access-wvfgd\") pod \"collect-profiles-29521410-d42nz\" (UID: \"18f83a31-4fda-48b7-a260-745c406b3ec1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521410-d42nz" Feb 16 23:30:00 crc kubenswrapper[4865]: I0216 23:30:00.256415 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/18f83a31-4fda-48b7-a260-745c406b3ec1-config-volume\") pod \"collect-profiles-29521410-d42nz\" (UID: \"18f83a31-4fda-48b7-a260-745c406b3ec1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521410-d42nz" Feb 16 23:30:00 crc kubenswrapper[4865]: I0216 23:30:00.256521 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/18f83a31-4fda-48b7-a260-745c406b3ec1-secret-volume\") pod \"collect-profiles-29521410-d42nz\" (UID: \"18f83a31-4fda-48b7-a260-745c406b3ec1\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29521410-d42nz" Feb 16 23:30:00 crc kubenswrapper[4865]: I0216 23:30:00.358432 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/18f83a31-4fda-48b7-a260-745c406b3ec1-secret-volume\") pod \"collect-profiles-29521410-d42nz\" (UID: \"18f83a31-4fda-48b7-a260-745c406b3ec1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521410-d42nz" Feb 16 23:30:00 crc kubenswrapper[4865]: I0216 23:30:00.358522 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvfgd\" (UniqueName: \"kubernetes.io/projected/18f83a31-4fda-48b7-a260-745c406b3ec1-kube-api-access-wvfgd\") pod \"collect-profiles-29521410-d42nz\" (UID: \"18f83a31-4fda-48b7-a260-745c406b3ec1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521410-d42nz" Feb 16 23:30:00 crc kubenswrapper[4865]: I0216 23:30:00.358561 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/18f83a31-4fda-48b7-a260-745c406b3ec1-config-volume\") pod \"collect-profiles-29521410-d42nz\" (UID: \"18f83a31-4fda-48b7-a260-745c406b3ec1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521410-d42nz" Feb 16 23:30:00 crc kubenswrapper[4865]: I0216 23:30:00.359272 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/18f83a31-4fda-48b7-a260-745c406b3ec1-config-volume\") pod \"collect-profiles-29521410-d42nz\" (UID: \"18f83a31-4fda-48b7-a260-745c406b3ec1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521410-d42nz" Feb 16 23:30:00 crc kubenswrapper[4865]: I0216 23:30:00.374402 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvfgd\" (UniqueName: 
\"kubernetes.io/projected/18f83a31-4fda-48b7-a260-745c406b3ec1-kube-api-access-wvfgd\") pod \"collect-profiles-29521410-d42nz\" (UID: \"18f83a31-4fda-48b7-a260-745c406b3ec1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521410-d42nz" Feb 16 23:30:00 crc kubenswrapper[4865]: I0216 23:30:00.378350 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/18f83a31-4fda-48b7-a260-745c406b3ec1-secret-volume\") pod \"collect-profiles-29521410-d42nz\" (UID: \"18f83a31-4fda-48b7-a260-745c406b3ec1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521410-d42nz" Feb 16 23:30:00 crc kubenswrapper[4865]: I0216 23:30:00.502570 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 23:30:00 crc kubenswrapper[4865]: I0216 23:30:00.511092 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521410-d42nz" Feb 16 23:30:10 crc kubenswrapper[4865]: E0216 23:30:10.980986 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Feb 16 23:30:10 crc kubenswrapper[4865]: E0216 23:30:10.981550 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4lhmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:n
il,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(dc785498-c658-47ed-8329-0e8c81c771be): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 23:30:10 crc kubenswrapper[4865]: E0216 23:30:10.982796 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="dc785498-c658-47ed-8329-0e8c81c771be" Feb 16 23:30:11 crc kubenswrapper[4865]: I0216 23:30:11.427176 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521410-d42nz"] Feb 16 23:30:11 crc kubenswrapper[4865]: W0216 23:30:11.440908 4865 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18f83a31_4fda_48b7_a260_745c406b3ec1.slice/crio-80c6112b12637e8ab1d42b21171da13d50e4d23ea014afb935136466b402a3d8 WatchSource:0}: Error finding container 80c6112b12637e8ab1d42b21171da13d50e4d23ea014afb935136466b402a3d8: Status 404 returned error can't find the container with id 80c6112b12637e8ab1d42b21171da13d50e4d23ea014afb935136466b402a3d8 Feb 16 23:30:11 crc kubenswrapper[4865]: I0216 23:30:11.624700 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521410-d42nz" event={"ID":"18f83a31-4fda-48b7-a260-745c406b3ec1","Type":"ContainerStarted","Data":"80c6112b12637e8ab1d42b21171da13d50e4d23ea014afb935136466b402a3d8"} Feb 16 23:30:11 crc kubenswrapper[4865]: E0216 23:30:11.626183 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="dc785498-c658-47ed-8329-0e8c81c771be" Feb 16 23:30:12 crc kubenswrapper[4865]: I0216 23:30:12.640803 4865 generic.go:334] "Generic (PLEG): container finished" podID="18f83a31-4fda-48b7-a260-745c406b3ec1" containerID="c7dd5b9de31536a5a38614bff02259f50077882a1ac2b92df82c5b9dbf1f98b8" exitCode=0 Feb 16 23:30:12 crc kubenswrapper[4865]: I0216 23:30:12.640908 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521410-d42nz" event={"ID":"18f83a31-4fda-48b7-a260-745c406b3ec1","Type":"ContainerDied","Data":"c7dd5b9de31536a5a38614bff02259f50077882a1ac2b92df82c5b9dbf1f98b8"} Feb 16 23:30:14 crc kubenswrapper[4865]: I0216 23:30:14.073934 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521410-d42nz" Feb 16 23:30:14 crc kubenswrapper[4865]: I0216 23:30:14.161360 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvfgd\" (UniqueName: \"kubernetes.io/projected/18f83a31-4fda-48b7-a260-745c406b3ec1-kube-api-access-wvfgd\") pod \"18f83a31-4fda-48b7-a260-745c406b3ec1\" (UID: \"18f83a31-4fda-48b7-a260-745c406b3ec1\") " Feb 16 23:30:14 crc kubenswrapper[4865]: I0216 23:30:14.161424 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/18f83a31-4fda-48b7-a260-745c406b3ec1-config-volume\") pod \"18f83a31-4fda-48b7-a260-745c406b3ec1\" (UID: \"18f83a31-4fda-48b7-a260-745c406b3ec1\") " Feb 16 23:30:14 crc kubenswrapper[4865]: I0216 23:30:14.161622 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/18f83a31-4fda-48b7-a260-745c406b3ec1-secret-volume\") pod \"18f83a31-4fda-48b7-a260-745c406b3ec1\" (UID: \"18f83a31-4fda-48b7-a260-745c406b3ec1\") " Feb 16 23:30:14 crc kubenswrapper[4865]: I0216 23:30:14.162131 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18f83a31-4fda-48b7-a260-745c406b3ec1-config-volume" (OuterVolumeSpecName: "config-volume") pod "18f83a31-4fda-48b7-a260-745c406b3ec1" (UID: "18f83a31-4fda-48b7-a260-745c406b3ec1"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:30:14 crc kubenswrapper[4865]: I0216 23:30:14.167509 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18f83a31-4fda-48b7-a260-745c406b3ec1-kube-api-access-wvfgd" (OuterVolumeSpecName: "kube-api-access-wvfgd") pod "18f83a31-4fda-48b7-a260-745c406b3ec1" (UID: "18f83a31-4fda-48b7-a260-745c406b3ec1"). 
InnerVolumeSpecName "kube-api-access-wvfgd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:30:14 crc kubenswrapper[4865]: I0216 23:30:14.168922 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f83a31-4fda-48b7-a260-745c406b3ec1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "18f83a31-4fda-48b7-a260-745c406b3ec1" (UID: "18f83a31-4fda-48b7-a260-745c406b3ec1"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:30:14 crc kubenswrapper[4865]: I0216 23:30:14.265208 4865 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/18f83a31-4fda-48b7-a260-745c406b3ec1-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 23:30:14 crc kubenswrapper[4865]: I0216 23:30:14.265273 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvfgd\" (UniqueName: \"kubernetes.io/projected/18f83a31-4fda-48b7-a260-745c406b3ec1-kube-api-access-wvfgd\") on node \"crc\" DevicePath \"\"" Feb 16 23:30:14 crc kubenswrapper[4865]: I0216 23:30:14.265331 4865 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/18f83a31-4fda-48b7-a260-745c406b3ec1-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 23:30:14 crc kubenswrapper[4865]: I0216 23:30:14.665535 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521410-d42nz" event={"ID":"18f83a31-4fda-48b7-a260-745c406b3ec1","Type":"ContainerDied","Data":"80c6112b12637e8ab1d42b21171da13d50e4d23ea014afb935136466b402a3d8"} Feb 16 23:30:14 crc kubenswrapper[4865]: I0216 23:30:14.665891 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80c6112b12637e8ab1d42b21171da13d50e4d23ea014afb935136466b402a3d8" Feb 16 23:30:14 crc kubenswrapper[4865]: I0216 23:30:14.665636 4865 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521410-d42nz" Feb 16 23:30:15 crc kubenswrapper[4865]: I0216 23:30:15.172673 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521365-bs68x"] Feb 16 23:30:15 crc kubenswrapper[4865]: I0216 23:30:15.180841 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521365-bs68x"] Feb 16 23:30:16 crc kubenswrapper[4865]: I0216 23:30:16.434181 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67ee4300-52d2-45bc-8420-9045db672f41" path="/var/lib/kubelet/pods/67ee4300-52d2-45bc-8420-9045db672f41/volumes" Feb 16 23:30:23 crc kubenswrapper[4865]: I0216 23:30:23.843309 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 16 23:30:26 crc kubenswrapper[4865]: I0216 23:30:26.077924 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"dc785498-c658-47ed-8329-0e8c81c771be","Type":"ContainerStarted","Data":"80149e643f12b9bc24d275350aba68b6cb9840ea31b2d1b121ab45472ec6fe0b"} Feb 16 23:30:26 crc kubenswrapper[4865]: I0216 23:30:26.119640 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=5.3480944919999995 podStartE2EDuration="48.119609749s" podCreationTimestamp="2026-02-16 23:29:38 +0000 UTC" firstStartedPulling="2026-02-16 23:29:41.068821684 +0000 UTC m=+2621.392528685" lastFinishedPulling="2026-02-16 23:30:23.840336971 +0000 UTC m=+2664.164043942" observedRunningTime="2026-02-16 23:30:26.104772469 +0000 UTC m=+2666.428479490" watchObservedRunningTime="2026-02-16 23:30:26.119609749 +0000 UTC m=+2666.443316740" Feb 16 23:31:06 crc kubenswrapper[4865]: I0216 23:31:06.058579 4865 scope.go:117] "RemoveContainer" 
containerID="2f5c2b0d5e1f838e460763ffdc452e422d33756be3b13ab53d3c90a10140441f" Feb 16 23:31:15 crc kubenswrapper[4865]: I0216 23:31:15.664313 4865 patch_prober.go:28] interesting pod/machine-config-daemon-7sl6f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 23:31:15 crc kubenswrapper[4865]: I0216 23:31:15.664902 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 23:31:45 crc kubenswrapper[4865]: I0216 23:31:45.663775 4865 patch_prober.go:28] interesting pod/machine-config-daemon-7sl6f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 23:31:45 crc kubenswrapper[4865]: I0216 23:31:45.664457 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 23:32:15 crc kubenswrapper[4865]: I0216 23:32:15.664140 4865 patch_prober.go:28] interesting pod/machine-config-daemon-7sl6f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 23:32:15 crc kubenswrapper[4865]: I0216 23:32:15.664672 4865 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 23:32:15 crc kubenswrapper[4865]: I0216 23:32:15.664719 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" Feb 16 23:32:15 crc kubenswrapper[4865]: I0216 23:32:15.665568 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"49b3b593ee13ce5cd3fd410ea233d627ec5352f22f0ea7e277f2a08582a5ceeb"} pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 23:32:15 crc kubenswrapper[4865]: I0216 23:32:15.665635 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" containerName="machine-config-daemon" containerID="cri-o://49b3b593ee13ce5cd3fd410ea233d627ec5352f22f0ea7e277f2a08582a5ceeb" gracePeriod=600 Feb 16 23:32:16 crc kubenswrapper[4865]: I0216 23:32:16.723313 4865 generic.go:334] "Generic (PLEG): container finished" podID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" containerID="49b3b593ee13ce5cd3fd410ea233d627ec5352f22f0ea7e277f2a08582a5ceeb" exitCode=0 Feb 16 23:32:16 crc kubenswrapper[4865]: I0216 23:32:16.723379 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" event={"ID":"af5ee041-5763-4a28-9d12-7ba21bbb9dbc","Type":"ContainerDied","Data":"49b3b593ee13ce5cd3fd410ea233d627ec5352f22f0ea7e277f2a08582a5ceeb"} Feb 16 23:32:16 crc kubenswrapper[4865]: I0216 
23:32:16.723671 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" event={"ID":"af5ee041-5763-4a28-9d12-7ba21bbb9dbc","Type":"ContainerStarted","Data":"9595289612fbba036a906a5efffdc15200b748234a5d76519393359b8b0d5d06"} Feb 16 23:32:16 crc kubenswrapper[4865]: I0216 23:32:16.723697 4865 scope.go:117] "RemoveContainer" containerID="8b34037901d5dc289c2068f9c264e98450cf58a634d07a2fbca10b53e42e31bd" Feb 16 23:34:45 crc kubenswrapper[4865]: I0216 23:34:45.664183 4865 patch_prober.go:28] interesting pod/machine-config-daemon-7sl6f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 23:34:45 crc kubenswrapper[4865]: I0216 23:34:45.665904 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 23:35:15 crc kubenswrapper[4865]: I0216 23:35:15.664429 4865 patch_prober.go:28] interesting pod/machine-config-daemon-7sl6f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 23:35:15 crc kubenswrapper[4865]: I0216 23:35:15.665147 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 23:35:45 crc 
kubenswrapper[4865]: I0216 23:35:45.664374 4865 patch_prober.go:28] interesting pod/machine-config-daemon-7sl6f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 23:35:45 crc kubenswrapper[4865]: I0216 23:35:45.664941 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 23:35:45 crc kubenswrapper[4865]: I0216 23:35:45.665002 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" Feb 16 23:35:45 crc kubenswrapper[4865]: I0216 23:35:45.665856 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9595289612fbba036a906a5efffdc15200b748234a5d76519393359b8b0d5d06"} pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 23:35:45 crc kubenswrapper[4865]: I0216 23:35:45.665939 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" containerName="machine-config-daemon" containerID="cri-o://9595289612fbba036a906a5efffdc15200b748234a5d76519393359b8b0d5d06" gracePeriod=600 Feb 16 23:35:45 crc kubenswrapper[4865]: E0216 23:35:45.790352 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc"
Feb 16 23:35:45 crc kubenswrapper[4865]: I0216 23:35:45.994586 4865 generic.go:334] "Generic (PLEG): container finished" podID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" containerID="9595289612fbba036a906a5efffdc15200b748234a5d76519393359b8b0d5d06" exitCode=0
Feb 16 23:35:45 crc kubenswrapper[4865]: I0216 23:35:45.994641 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" event={"ID":"af5ee041-5763-4a28-9d12-7ba21bbb9dbc","Type":"ContainerDied","Data":"9595289612fbba036a906a5efffdc15200b748234a5d76519393359b8b0d5d06"}
Feb 16 23:35:45 crc kubenswrapper[4865]: I0216 23:35:45.994688 4865 scope.go:117] "RemoveContainer" containerID="49b3b593ee13ce5cd3fd410ea233d627ec5352f22f0ea7e277f2a08582a5ceeb"
Feb 16 23:35:45 crc kubenswrapper[4865]: I0216 23:35:45.995455 4865 scope.go:117] "RemoveContainer" containerID="9595289612fbba036a906a5efffdc15200b748234a5d76519393359b8b0d5d06"
Feb 16 23:35:45 crc kubenswrapper[4865]: E0216 23:35:45.995785 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc"
Feb 16 23:36:00 crc kubenswrapper[4865]: I0216 23:36:00.417581 4865 scope.go:117] "RemoveContainer" containerID="9595289612fbba036a906a5efffdc15200b748234a5d76519393359b8b0d5d06"
Feb 16 23:36:00 crc kubenswrapper[4865]: E0216 23:36:00.418882 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc"
Feb 16 23:36:15 crc kubenswrapper[4865]: I0216 23:36:15.415503 4865 scope.go:117] "RemoveContainer" containerID="9595289612fbba036a906a5efffdc15200b748234a5d76519393359b8b0d5d06"
Feb 16 23:36:15 crc kubenswrapper[4865]: E0216 23:36:15.416179 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc"
Feb 16 23:36:26 crc kubenswrapper[4865]: I0216 23:36:26.415766 4865 scope.go:117] "RemoveContainer" containerID="9595289612fbba036a906a5efffdc15200b748234a5d76519393359b8b0d5d06"
Feb 16 23:36:26 crc kubenswrapper[4865]: E0216 23:36:26.416819 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc"
Feb 16 23:36:41 crc kubenswrapper[4865]: I0216 23:36:41.414202 4865 scope.go:117] "RemoveContainer" containerID="9595289612fbba036a906a5efffdc15200b748234a5d76519393359b8b0d5d06"
Feb 16 23:36:41 crc kubenswrapper[4865]: E0216 23:36:41.416076 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc"
Feb 16 23:36:52 crc kubenswrapper[4865]: I0216 23:36:52.415160 4865 scope.go:117] "RemoveContainer" containerID="9595289612fbba036a906a5efffdc15200b748234a5d76519393359b8b0d5d06"
Feb 16 23:36:52 crc kubenswrapper[4865]: E0216 23:36:52.416237 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc"
Feb 16 23:37:03 crc kubenswrapper[4865]: I0216 23:37:03.414661 4865 scope.go:117] "RemoveContainer" containerID="9595289612fbba036a906a5efffdc15200b748234a5d76519393359b8b0d5d06"
Feb 16 23:37:03 crc kubenswrapper[4865]: E0216 23:37:03.415521 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc"
Feb 16 23:37:18 crc kubenswrapper[4865]: I0216 23:37:18.414313 4865 scope.go:117] "RemoveContainer" containerID="9595289612fbba036a906a5efffdc15200b748234a5d76519393359b8b0d5d06"
Feb 16 23:37:18 crc kubenswrapper[4865]: E0216 23:37:18.415108 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc"
Feb 16 23:37:29 crc kubenswrapper[4865]: I0216 23:37:29.414572 4865 scope.go:117] "RemoveContainer" containerID="9595289612fbba036a906a5efffdc15200b748234a5d76519393359b8b0d5d06"
Feb 16 23:37:29 crc kubenswrapper[4865]: E0216 23:37:29.415221 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc"
Feb 16 23:37:43 crc kubenswrapper[4865]: I0216 23:37:43.414458 4865 scope.go:117] "RemoveContainer" containerID="9595289612fbba036a906a5efffdc15200b748234a5d76519393359b8b0d5d06"
Feb 16 23:37:43 crc kubenswrapper[4865]: E0216 23:37:43.415523 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc"
Feb 16 23:37:54 crc kubenswrapper[4865]: I0216 23:37:54.415440 4865 scope.go:117] "RemoveContainer" containerID="9595289612fbba036a906a5efffdc15200b748234a5d76519393359b8b0d5d06"
Feb 16 23:37:54 crc kubenswrapper[4865]: E0216 23:37:54.416533 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc"
Feb 16 23:38:06 crc kubenswrapper[4865]: I0216 23:38:06.415444 4865 scope.go:117] "RemoveContainer" containerID="9595289612fbba036a906a5efffdc15200b748234a5d76519393359b8b0d5d06"
Feb 16 23:38:06 crc kubenswrapper[4865]: E0216 23:38:06.416390 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc"
Feb 16 23:38:19 crc kubenswrapper[4865]: I0216 23:38:19.414510 4865 scope.go:117] "RemoveContainer" containerID="9595289612fbba036a906a5efffdc15200b748234a5d76519393359b8b0d5d06"
Feb 16 23:38:19 crc kubenswrapper[4865]: E0216 23:38:19.415313 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc"
Feb 16 23:38:32 crc kubenswrapper[4865]: I0216 23:38:32.415714 4865 scope.go:117] "RemoveContainer" containerID="9595289612fbba036a906a5efffdc15200b748234a5d76519393359b8b0d5d06"
Feb 16 23:38:32 crc kubenswrapper[4865]: E0216 23:38:32.416877 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc"
Feb 16 23:38:34 crc kubenswrapper[4865]: I0216 23:38:34.955245 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-n7w9k"]
Feb 16 23:38:34 crc kubenswrapper[4865]: E0216 23:38:34.956491 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18f83a31-4fda-48b7-a260-745c406b3ec1" containerName="collect-profiles"
Feb 16 23:38:34 crc kubenswrapper[4865]: I0216 23:38:34.956520 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="18f83a31-4fda-48b7-a260-745c406b3ec1" containerName="collect-profiles"
Feb 16 23:38:34 crc kubenswrapper[4865]: I0216 23:38:34.956885 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="18f83a31-4fda-48b7-a260-745c406b3ec1" containerName="collect-profiles"
Feb 16 23:38:34 crc kubenswrapper[4865]: I0216 23:38:34.958953 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n7w9k"
Feb 16 23:38:34 crc kubenswrapper[4865]: I0216 23:38:34.988561 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n7w9k"]
Feb 16 23:38:35 crc kubenswrapper[4865]: I0216 23:38:35.066514 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14681c95-5172-4788-b54a-f207c2beb1e9-utilities\") pod \"redhat-marketplace-n7w9k\" (UID: \"14681c95-5172-4788-b54a-f207c2beb1e9\") " pod="openshift-marketplace/redhat-marketplace-n7w9k"
Feb 16 23:38:35 crc kubenswrapper[4865]: I0216 23:38:35.066590 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktwqt\" (UniqueName: \"kubernetes.io/projected/14681c95-5172-4788-b54a-f207c2beb1e9-kube-api-access-ktwqt\") pod \"redhat-marketplace-n7w9k\" (UID: \"14681c95-5172-4788-b54a-f207c2beb1e9\") " pod="openshift-marketplace/redhat-marketplace-n7w9k"
Feb 16 23:38:35 crc kubenswrapper[4865]: I0216 23:38:35.067104 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14681c95-5172-4788-b54a-f207c2beb1e9-catalog-content\") pod \"redhat-marketplace-n7w9k\" (UID: \"14681c95-5172-4788-b54a-f207c2beb1e9\") " pod="openshift-marketplace/redhat-marketplace-n7w9k"
Feb 16 23:38:35 crc kubenswrapper[4865]: I0216 23:38:35.169018 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14681c95-5172-4788-b54a-f207c2beb1e9-catalog-content\") pod \"redhat-marketplace-n7w9k\" (UID: \"14681c95-5172-4788-b54a-f207c2beb1e9\") " pod="openshift-marketplace/redhat-marketplace-n7w9k"
Feb 16 23:38:35 crc kubenswrapper[4865]: I0216 23:38:35.169157 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14681c95-5172-4788-b54a-f207c2beb1e9-utilities\") pod \"redhat-marketplace-n7w9k\" (UID: \"14681c95-5172-4788-b54a-f207c2beb1e9\") " pod="openshift-marketplace/redhat-marketplace-n7w9k"
Feb 16 23:38:35 crc kubenswrapper[4865]: I0216 23:38:35.169195 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktwqt\" (UniqueName: \"kubernetes.io/projected/14681c95-5172-4788-b54a-f207c2beb1e9-kube-api-access-ktwqt\") pod \"redhat-marketplace-n7w9k\" (UID: \"14681c95-5172-4788-b54a-f207c2beb1e9\") " pod="openshift-marketplace/redhat-marketplace-n7w9k"
Feb 16 23:38:35 crc kubenswrapper[4865]: I0216 23:38:35.170337 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14681c95-5172-4788-b54a-f207c2beb1e9-utilities\") pod \"redhat-marketplace-n7w9k\" (UID: \"14681c95-5172-4788-b54a-f207c2beb1e9\") " pod="openshift-marketplace/redhat-marketplace-n7w9k"
Feb 16 23:38:35 crc kubenswrapper[4865]: I0216 23:38:35.170343 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14681c95-5172-4788-b54a-f207c2beb1e9-catalog-content\") pod \"redhat-marketplace-n7w9k\" (UID: \"14681c95-5172-4788-b54a-f207c2beb1e9\") " pod="openshift-marketplace/redhat-marketplace-n7w9k"
Feb 16 23:38:35 crc kubenswrapper[4865]: I0216 23:38:35.194912 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktwqt\" (UniqueName: \"kubernetes.io/projected/14681c95-5172-4788-b54a-f207c2beb1e9-kube-api-access-ktwqt\") pod \"redhat-marketplace-n7w9k\" (UID: \"14681c95-5172-4788-b54a-f207c2beb1e9\") " pod="openshift-marketplace/redhat-marketplace-n7w9k"
Feb 16 23:38:35 crc kubenswrapper[4865]: I0216 23:38:35.309642 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n7w9k"
Feb 16 23:38:35 crc kubenswrapper[4865]: I0216 23:38:35.575037 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2n62l"]
Feb 16 23:38:35 crc kubenswrapper[4865]: I0216 23:38:35.579327 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2n62l"
Feb 16 23:38:35 crc kubenswrapper[4865]: I0216 23:38:35.586698 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2n62l"]
Feb 16 23:38:35 crc kubenswrapper[4865]: I0216 23:38:35.680181 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/faeb82b7-bf9e-4cab-8d7f-9740b68e0826-utilities\") pod \"certified-operators-2n62l\" (UID: \"faeb82b7-bf9e-4cab-8d7f-9740b68e0826\") " pod="openshift-marketplace/certified-operators-2n62l"
Feb 16 23:38:35 crc kubenswrapper[4865]: I0216 23:38:35.680236 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72ldd\" (UniqueName: \"kubernetes.io/projected/faeb82b7-bf9e-4cab-8d7f-9740b68e0826-kube-api-access-72ldd\") pod \"certified-operators-2n62l\" (UID: \"faeb82b7-bf9e-4cab-8d7f-9740b68e0826\") " pod="openshift-marketplace/certified-operators-2n62l"
Feb 16 23:38:35 crc kubenswrapper[4865]: I0216 23:38:35.680599 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/faeb82b7-bf9e-4cab-8d7f-9740b68e0826-catalog-content\") pod \"certified-operators-2n62l\" (UID: \"faeb82b7-bf9e-4cab-8d7f-9740b68e0826\") " pod="openshift-marketplace/certified-operators-2n62l"
Feb 16 23:38:35 crc kubenswrapper[4865]: I0216 23:38:35.782232 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/faeb82b7-bf9e-4cab-8d7f-9740b68e0826-utilities\") pod \"certified-operators-2n62l\" (UID: \"faeb82b7-bf9e-4cab-8d7f-9740b68e0826\") " pod="openshift-marketplace/certified-operators-2n62l"
Feb 16 23:38:35 crc kubenswrapper[4865]: I0216 23:38:35.782306 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72ldd\" (UniqueName: \"kubernetes.io/projected/faeb82b7-bf9e-4cab-8d7f-9740b68e0826-kube-api-access-72ldd\") pod \"certified-operators-2n62l\" (UID: \"faeb82b7-bf9e-4cab-8d7f-9740b68e0826\") " pod="openshift-marketplace/certified-operators-2n62l"
Feb 16 23:38:35 crc kubenswrapper[4865]: I0216 23:38:35.782405 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/faeb82b7-bf9e-4cab-8d7f-9740b68e0826-catalog-content\") pod \"certified-operators-2n62l\" (UID: \"faeb82b7-bf9e-4cab-8d7f-9740b68e0826\") " pod="openshift-marketplace/certified-operators-2n62l"
Feb 16 23:38:35 crc kubenswrapper[4865]: I0216 23:38:35.782858 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/faeb82b7-bf9e-4cab-8d7f-9740b68e0826-catalog-content\") pod \"certified-operators-2n62l\" (UID: \"faeb82b7-bf9e-4cab-8d7f-9740b68e0826\") " pod="openshift-marketplace/certified-operators-2n62l"
Feb 16 23:38:35 crc kubenswrapper[4865]: I0216 23:38:35.783086 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/faeb82b7-bf9e-4cab-8d7f-9740b68e0826-utilities\") pod \"certified-operators-2n62l\" (UID: \"faeb82b7-bf9e-4cab-8d7f-9740b68e0826\") " pod="openshift-marketplace/certified-operators-2n62l"
Feb 16 23:38:35 crc kubenswrapper[4865]: I0216 23:38:35.811268 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72ldd\" (UniqueName: \"kubernetes.io/projected/faeb82b7-bf9e-4cab-8d7f-9740b68e0826-kube-api-access-72ldd\") pod \"certified-operators-2n62l\" (UID: \"faeb82b7-bf9e-4cab-8d7f-9740b68e0826\") " pod="openshift-marketplace/certified-operators-2n62l"
Feb 16 23:38:35 crc kubenswrapper[4865]: I0216 23:38:35.907231 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2n62l"
Feb 16 23:38:35 crc kubenswrapper[4865]: I0216 23:38:35.911761 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n7w9k"]
Feb 16 23:38:36 crc kubenswrapper[4865]: I0216 23:38:36.106861 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n7w9k" event={"ID":"14681c95-5172-4788-b54a-f207c2beb1e9","Type":"ContainerStarted","Data":"f1e56183ee74e3f2c48ec1297d6fe80fa704082791ce068b4fce523664c84394"}
Feb 16 23:38:36 crc kubenswrapper[4865]: I0216 23:38:36.470207 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2n62l"]
Feb 16 23:38:37 crc kubenswrapper[4865]: I0216 23:38:37.120488 4865 generic.go:334] "Generic (PLEG): container finished" podID="14681c95-5172-4788-b54a-f207c2beb1e9" containerID="a1c8f830fe4bb8de9372ec1e4d10434bd5bc046fc80a3939cfaef888203e527d" exitCode=0
Feb 16 23:38:37 crc kubenswrapper[4865]: I0216 23:38:37.120579 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n7w9k" event={"ID":"14681c95-5172-4788-b54a-f207c2beb1e9","Type":"ContainerDied","Data":"a1c8f830fe4bb8de9372ec1e4d10434bd5bc046fc80a3939cfaef888203e527d"}
Feb 16 23:38:37 crc kubenswrapper[4865]: I0216 23:38:37.123810 4865 generic.go:334] "Generic (PLEG): container finished" podID="faeb82b7-bf9e-4cab-8d7f-9740b68e0826" containerID="44475d967f7db3b7610941588cd12a842964f166b1403fa9f16a7d312a6493b3" exitCode=0
Feb 16 23:38:37 crc kubenswrapper[4865]: I0216 23:38:37.123884 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2n62l" event={"ID":"faeb82b7-bf9e-4cab-8d7f-9740b68e0826","Type":"ContainerDied","Data":"44475d967f7db3b7610941588cd12a842964f166b1403fa9f16a7d312a6493b3"}
Feb 16 23:38:37 crc kubenswrapper[4865]: I0216 23:38:37.123930 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2n62l" event={"ID":"faeb82b7-bf9e-4cab-8d7f-9740b68e0826","Type":"ContainerStarted","Data":"16209a8234e85f3ae00dc111355847eb2300c0eb3fe9ebe3cb368cde2ddddc34"}
Feb 16 23:38:37 crc kubenswrapper[4865]: I0216 23:38:37.124109 4865 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 16 23:38:37 crc kubenswrapper[4865]: I0216 23:38:37.355237 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nb54m"]
Feb 16 23:38:37 crc kubenswrapper[4865]: I0216 23:38:37.362911 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nb54m"
Feb 16 23:38:37 crc kubenswrapper[4865]: I0216 23:38:37.398909 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nb54m"]
Feb 16 23:38:37 crc kubenswrapper[4865]: I0216 23:38:37.419065 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74022cbf-c009-4efd-bc26-fb7b370f0dca-utilities\") pod \"community-operators-nb54m\" (UID: \"74022cbf-c009-4efd-bc26-fb7b370f0dca\") " pod="openshift-marketplace/community-operators-nb54m"
Feb 16 23:38:37 crc kubenswrapper[4865]: I0216 23:38:37.419218 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74022cbf-c009-4efd-bc26-fb7b370f0dca-catalog-content\") pod \"community-operators-nb54m\" (UID: \"74022cbf-c009-4efd-bc26-fb7b370f0dca\") " pod="openshift-marketplace/community-operators-nb54m"
Feb 16 23:38:37 crc kubenswrapper[4865]: I0216 23:38:37.419302 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjcvc\" (UniqueName: \"kubernetes.io/projected/74022cbf-c009-4efd-bc26-fb7b370f0dca-kube-api-access-kjcvc\") pod \"community-operators-nb54m\" (UID: \"74022cbf-c009-4efd-bc26-fb7b370f0dca\") " pod="openshift-marketplace/community-operators-nb54m"
Feb 16 23:38:37 crc kubenswrapper[4865]: I0216 23:38:37.521534 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74022cbf-c009-4efd-bc26-fb7b370f0dca-utilities\") pod \"community-operators-nb54m\" (UID: \"74022cbf-c009-4efd-bc26-fb7b370f0dca\") " pod="openshift-marketplace/community-operators-nb54m"
Feb 16 23:38:37 crc kubenswrapper[4865]: I0216 23:38:37.521646 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74022cbf-c009-4efd-bc26-fb7b370f0dca-catalog-content\") pod \"community-operators-nb54m\" (UID: \"74022cbf-c009-4efd-bc26-fb7b370f0dca\") " pod="openshift-marketplace/community-operators-nb54m"
Feb 16 23:38:37 crc kubenswrapper[4865]: I0216 23:38:37.521683 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjcvc\" (UniqueName: \"kubernetes.io/projected/74022cbf-c009-4efd-bc26-fb7b370f0dca-kube-api-access-kjcvc\") pod \"community-operators-nb54m\" (UID: \"74022cbf-c009-4efd-bc26-fb7b370f0dca\") " pod="openshift-marketplace/community-operators-nb54m"
Feb 16 23:38:37 crc kubenswrapper[4865]: I0216 23:38:37.522317 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74022cbf-c009-4efd-bc26-fb7b370f0dca-utilities\") pod \"community-operators-nb54m\" (UID: \"74022cbf-c009-4efd-bc26-fb7b370f0dca\") " pod="openshift-marketplace/community-operators-nb54m"
Feb 16 23:38:37 crc kubenswrapper[4865]: I0216 23:38:37.522446 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74022cbf-c009-4efd-bc26-fb7b370f0dca-catalog-content\") pod \"community-operators-nb54m\" (UID: \"74022cbf-c009-4efd-bc26-fb7b370f0dca\") " pod="openshift-marketplace/community-operators-nb54m"
Feb 16 23:38:37 crc kubenswrapper[4865]: I0216 23:38:37.549980 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjcvc\" (UniqueName: \"kubernetes.io/projected/74022cbf-c009-4efd-bc26-fb7b370f0dca-kube-api-access-kjcvc\") pod \"community-operators-nb54m\" (UID: \"74022cbf-c009-4efd-bc26-fb7b370f0dca\") " pod="openshift-marketplace/community-operators-nb54m"
Feb 16 23:38:37 crc kubenswrapper[4865]: I0216 23:38:37.686380 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nb54m"
Feb 16 23:38:37 crc kubenswrapper[4865]: I0216 23:38:37.956958 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hlxh4"]
Feb 16 23:38:37 crc kubenswrapper[4865]: I0216 23:38:37.959335 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hlxh4"
Feb 16 23:38:38 crc kubenswrapper[4865]: I0216 23:38:37.999086 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hlxh4"]
Feb 16 23:38:38 crc kubenswrapper[4865]: I0216 23:38:38.039065 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2532d65-e9f2-4448-9b52-6927d5013b85-utilities\") pod \"redhat-operators-hlxh4\" (UID: \"b2532d65-e9f2-4448-9b52-6927d5013b85\") " pod="openshift-marketplace/redhat-operators-hlxh4"
Feb 16 23:38:38 crc kubenswrapper[4865]: I0216 23:38:38.039117 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5flgm\" (UniqueName: \"kubernetes.io/projected/b2532d65-e9f2-4448-9b52-6927d5013b85-kube-api-access-5flgm\") pod \"redhat-operators-hlxh4\" (UID: \"b2532d65-e9f2-4448-9b52-6927d5013b85\") " pod="openshift-marketplace/redhat-operators-hlxh4"
Feb 16 23:38:38 crc kubenswrapper[4865]: I0216 23:38:38.039150 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2532d65-e9f2-4448-9b52-6927d5013b85-catalog-content\") pod \"redhat-operators-hlxh4\" (UID: \"b2532d65-e9f2-4448-9b52-6927d5013b85\") " pod="openshift-marketplace/redhat-operators-hlxh4"
Feb 16 23:38:38 crc kubenswrapper[4865]: I0216 23:38:38.142372 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2532d65-e9f2-4448-9b52-6927d5013b85-utilities\") pod \"redhat-operators-hlxh4\" (UID: \"b2532d65-e9f2-4448-9b52-6927d5013b85\") " pod="openshift-marketplace/redhat-operators-hlxh4"
Feb 16 23:38:38 crc kubenswrapper[4865]: I0216 23:38:38.142712 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5flgm\" (UniqueName: \"kubernetes.io/projected/b2532d65-e9f2-4448-9b52-6927d5013b85-kube-api-access-5flgm\") pod \"redhat-operators-hlxh4\" (UID: \"b2532d65-e9f2-4448-9b52-6927d5013b85\") " pod="openshift-marketplace/redhat-operators-hlxh4"
Feb 16 23:38:38 crc kubenswrapper[4865]: I0216 23:38:38.142756 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2532d65-e9f2-4448-9b52-6927d5013b85-catalog-content\") pod \"redhat-operators-hlxh4\" (UID: \"b2532d65-e9f2-4448-9b52-6927d5013b85\") " pod="openshift-marketplace/redhat-operators-hlxh4"
Feb 16 23:38:38 crc kubenswrapper[4865]: I0216 23:38:38.143539 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2532d65-e9f2-4448-9b52-6927d5013b85-catalog-content\") pod \"redhat-operators-hlxh4\" (UID: \"b2532d65-e9f2-4448-9b52-6927d5013b85\") " pod="openshift-marketplace/redhat-operators-hlxh4"
Feb 16 23:38:38 crc kubenswrapper[4865]: I0216 23:38:38.143584 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2532d65-e9f2-4448-9b52-6927d5013b85-utilities\") pod \"redhat-operators-hlxh4\" (UID: \"b2532d65-e9f2-4448-9b52-6927d5013b85\") " pod="openshift-marketplace/redhat-operators-hlxh4"
Feb 16 23:38:38 crc kubenswrapper[4865]: I0216 23:38:38.179536 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5flgm\" (UniqueName: \"kubernetes.io/projected/b2532d65-e9f2-4448-9b52-6927d5013b85-kube-api-access-5flgm\") pod \"redhat-operators-hlxh4\" (UID: \"b2532d65-e9f2-4448-9b52-6927d5013b85\") " pod="openshift-marketplace/redhat-operators-hlxh4"
Feb 16 23:38:38 crc kubenswrapper[4865]: I0216 23:38:38.254893 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nb54m"]
Feb 16 23:38:38 crc kubenswrapper[4865]: W0216 23:38:38.264520 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod74022cbf_c009_4efd_bc26_fb7b370f0dca.slice/crio-1d83ede861f895781f4251c988d07e8f3296a1c5e234a7afdefc5038d89c195a WatchSource:0}: Error finding container 1d83ede861f895781f4251c988d07e8f3296a1c5e234a7afdefc5038d89c195a: Status 404 returned error can't find the container with id 1d83ede861f895781f4251c988d07e8f3296a1c5e234a7afdefc5038d89c195a
Feb 16 23:38:38 crc kubenswrapper[4865]: I0216 23:38:38.355805 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hlxh4"
Feb 16 23:38:38 crc kubenswrapper[4865]: I0216 23:38:38.969649 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hlxh4"]
Feb 16 23:38:38 crc kubenswrapper[4865]: W0216 23:38:38.992513 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2532d65_e9f2_4448_9b52_6927d5013b85.slice/crio-3a62e64446c83236fa8d0660d5c4ec56d91846e5d9183faa709081dc02183b52 WatchSource:0}: Error finding container 3a62e64446c83236fa8d0660d5c4ec56d91846e5d9183faa709081dc02183b52: Status 404 returned error can't find the container with id 3a62e64446c83236fa8d0660d5c4ec56d91846e5d9183faa709081dc02183b52
Feb 16 23:38:39 crc kubenswrapper[4865]: I0216 23:38:39.158391 4865 generic.go:334] "Generic (PLEG): container finished" podID="14681c95-5172-4788-b54a-f207c2beb1e9" containerID="4bfaaeaed49de5c378eecbf3cdbd52baa1e7fd6625ff92f11a55e2248ca92e5b" exitCode=0
Feb 16 23:38:39 crc kubenswrapper[4865]: I0216 23:38:39.158494 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n7w9k" event={"ID":"14681c95-5172-4788-b54a-f207c2beb1e9","Type":"ContainerDied","Data":"4bfaaeaed49de5c378eecbf3cdbd52baa1e7fd6625ff92f11a55e2248ca92e5b"}
Feb 16 23:38:39 crc kubenswrapper[4865]: I0216 23:38:39.162590 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2n62l" event={"ID":"faeb82b7-bf9e-4cab-8d7f-9740b68e0826","Type":"ContainerStarted","Data":"e31147d0e8f97b64e679540d6a8d3a955fe3d0b93721906f9679513f3401a711"}
Feb 16 23:38:39 crc kubenswrapper[4865]: I0216 23:38:39.166112 4865 generic.go:334] "Generic (PLEG): container finished" podID="74022cbf-c009-4efd-bc26-fb7b370f0dca" containerID="bb66492d099b7da78fcf5ab0e9fb7206a6dea6f6699ad2c8b23493b5523923d4" exitCode=0
Feb 16 23:38:39 crc kubenswrapper[4865]: I0216 23:38:39.166182 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nb54m" event={"ID":"74022cbf-c009-4efd-bc26-fb7b370f0dca","Type":"ContainerDied","Data":"bb66492d099b7da78fcf5ab0e9fb7206a6dea6f6699ad2c8b23493b5523923d4"}
Feb 16 23:38:39 crc kubenswrapper[4865]: I0216 23:38:39.166211 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nb54m" event={"ID":"74022cbf-c009-4efd-bc26-fb7b370f0dca","Type":"ContainerStarted","Data":"1d83ede861f895781f4251c988d07e8f3296a1c5e234a7afdefc5038d89c195a"}
Feb 16 23:38:39 crc kubenswrapper[4865]: I0216 23:38:39.168571 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hlxh4" event={"ID":"b2532d65-e9f2-4448-9b52-6927d5013b85","Type":"ContainerStarted","Data":"3a62e64446c83236fa8d0660d5c4ec56d91846e5d9183faa709081dc02183b52"}
Feb 16 23:38:40 crc kubenswrapper[4865]: I0216 23:38:40.178090 4865 generic.go:334] "Generic (PLEG): container finished" podID="b2532d65-e9f2-4448-9b52-6927d5013b85" containerID="2bdf0efd659693d1270130e3ab84993d2b98dd213a898cacbac8f0692b647a22" exitCode=0
Feb 16 23:38:40 crc kubenswrapper[4865]: I0216 23:38:40.178268 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hlxh4" event={"ID":"b2532d65-e9f2-4448-9b52-6927d5013b85","Type":"ContainerDied","Data":"2bdf0efd659693d1270130e3ab84993d2b98dd213a898cacbac8f0692b647a22"}
Feb 16 23:38:40 crc kubenswrapper[4865]: I0216 23:38:40.182955 4865 generic.go:334] "Generic (PLEG): container finished" podID="faeb82b7-bf9e-4cab-8d7f-9740b68e0826" containerID="e31147d0e8f97b64e679540d6a8d3a955fe3d0b93721906f9679513f3401a711" exitCode=0
Feb 16 23:38:40 crc kubenswrapper[4865]: I0216 23:38:40.183110 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2n62l" event={"ID":"faeb82b7-bf9e-4cab-8d7f-9740b68e0826","Type":"ContainerDied","Data":"e31147d0e8f97b64e679540d6a8d3a955fe3d0b93721906f9679513f3401a711"}
Feb 16 23:38:41 crc kubenswrapper[4865]: I0216 23:38:41.200473 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hlxh4" event={"ID":"b2532d65-e9f2-4448-9b52-6927d5013b85","Type":"ContainerStarted","Data":"f0746d9115bdca61a15c7b60862f706ac85062271df2d2281fe18af5eab7ca1d"}
Feb 16 23:38:41 crc kubenswrapper[4865]: I0216 23:38:41.205075 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n7w9k" event={"ID":"14681c95-5172-4788-b54a-f207c2beb1e9","Type":"ContainerStarted","Data":"dfd802315012b9745262675fea4aee7ff33ffe771e74613b3e307bbe2e1f70ff"}
Feb 16 23:38:41 crc kubenswrapper[4865]: I0216 23:38:41.207918 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2n62l" event={"ID":"faeb82b7-bf9e-4cab-8d7f-9740b68e0826","Type":"ContainerStarted","Data":"48edda7a6fca4c274059a712cab37c9a5b61122b0f7272b5270ece557ffd3f89"}
Feb 16 23:38:41 crc kubenswrapper[4865]: I0216 23:38:41.209612 4865 generic.go:334] "Generic (PLEG): container finished" podID="74022cbf-c009-4efd-bc26-fb7b370f0dca" containerID="2a33eecdb8c9ba4539a58cf24032f17d68079317241a7894e06f47acdb61cea0" exitCode=0
Feb 16 23:38:41 crc kubenswrapper[4865]: I0216 23:38:41.209631 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nb54m" event={"ID":"74022cbf-c009-4efd-bc26-fb7b370f0dca","Type":"ContainerDied","Data":"2a33eecdb8c9ba4539a58cf24032f17d68079317241a7894e06f47acdb61cea0"}
Feb 16 23:38:41 crc kubenswrapper[4865]: I0216 23:38:41.259215 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2n62l" podStartSLOduration=2.560056707 podStartE2EDuration="6.259195098s" podCreationTimestamp="2026-02-16 23:38:35 +0000
UTC" firstStartedPulling="2026-02-16 23:38:37.126191902 +0000 UTC m=+3157.449898873" lastFinishedPulling="2026-02-16 23:38:40.825330303 +0000 UTC m=+3161.149037264" observedRunningTime="2026-02-16 23:38:41.253083266 +0000 UTC m=+3161.576790237" watchObservedRunningTime="2026-02-16 23:38:41.259195098 +0000 UTC m=+3161.582902059" Feb 16 23:38:41 crc kubenswrapper[4865]: I0216 23:38:41.283561 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-n7w9k" podStartSLOduration=4.464567602 podStartE2EDuration="7.283540213s" podCreationTimestamp="2026-02-16 23:38:34 +0000 UTC" firstStartedPulling="2026-02-16 23:38:37.123698242 +0000 UTC m=+3157.447405243" lastFinishedPulling="2026-02-16 23:38:39.942670883 +0000 UTC m=+3160.266377854" observedRunningTime="2026-02-16 23:38:41.274799157 +0000 UTC m=+3161.598506158" watchObservedRunningTime="2026-02-16 23:38:41.283540213 +0000 UTC m=+3161.607247184" Feb 16 23:38:42 crc kubenswrapper[4865]: I0216 23:38:42.231361 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nb54m" event={"ID":"74022cbf-c009-4efd-bc26-fb7b370f0dca","Type":"ContainerStarted","Data":"3abf869d334bcaea1841c0a5f529a759520797289e5449a441474e80ac9aa37a"} Feb 16 23:38:42 crc kubenswrapper[4865]: I0216 23:38:42.257246 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nb54m" podStartSLOduration=2.598825291 podStartE2EDuration="5.257228735s" podCreationTimestamp="2026-02-16 23:38:37 +0000 UTC" firstStartedPulling="2026-02-16 23:38:39.168976077 +0000 UTC m=+3159.492683039" lastFinishedPulling="2026-02-16 23:38:41.827379482 +0000 UTC m=+3162.151086483" observedRunningTime="2026-02-16 23:38:42.249299692 +0000 UTC m=+3162.573006663" watchObservedRunningTime="2026-02-16 23:38:42.257228735 +0000 UTC m=+3162.580935696" Feb 16 23:38:44 crc kubenswrapper[4865]: I0216 23:38:44.253991 4865 
generic.go:334] "Generic (PLEG): container finished" podID="b2532d65-e9f2-4448-9b52-6927d5013b85" containerID="f0746d9115bdca61a15c7b60862f706ac85062271df2d2281fe18af5eab7ca1d" exitCode=0 Feb 16 23:38:44 crc kubenswrapper[4865]: I0216 23:38:44.254095 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hlxh4" event={"ID":"b2532d65-e9f2-4448-9b52-6927d5013b85","Type":"ContainerDied","Data":"f0746d9115bdca61a15c7b60862f706ac85062271df2d2281fe18af5eab7ca1d"} Feb 16 23:38:45 crc kubenswrapper[4865]: I0216 23:38:45.263543 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hlxh4" event={"ID":"b2532d65-e9f2-4448-9b52-6927d5013b85","Type":"ContainerStarted","Data":"f98ef83ef520ca42aadba7308d00e6b70a5f3d2b0c9406517f7f1185c29f52f7"} Feb 16 23:38:45 crc kubenswrapper[4865]: I0216 23:38:45.298164 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hlxh4" podStartSLOduration=3.722073838 podStartE2EDuration="8.298143629s" podCreationTimestamp="2026-02-16 23:38:37 +0000 UTC" firstStartedPulling="2026-02-16 23:38:40.180173784 +0000 UTC m=+3160.503880745" lastFinishedPulling="2026-02-16 23:38:44.756243545 +0000 UTC m=+3165.079950536" observedRunningTime="2026-02-16 23:38:45.282104068 +0000 UTC m=+3165.605811039" watchObservedRunningTime="2026-02-16 23:38:45.298143629 +0000 UTC m=+3165.621850610" Feb 16 23:38:45 crc kubenswrapper[4865]: I0216 23:38:45.310265 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-n7w9k" Feb 16 23:38:45 crc kubenswrapper[4865]: I0216 23:38:45.311568 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-n7w9k" Feb 16 23:38:45 crc kubenswrapper[4865]: I0216 23:38:45.908169 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/certified-operators-2n62l" Feb 16 23:38:45 crc kubenswrapper[4865]: I0216 23:38:45.908237 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2n62l" Feb 16 23:38:46 crc kubenswrapper[4865]: I0216 23:38:46.362681 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-n7w9k" podUID="14681c95-5172-4788-b54a-f207c2beb1e9" containerName="registry-server" probeResult="failure" output=< Feb 16 23:38:46 crc kubenswrapper[4865]: timeout: failed to connect service ":50051" within 1s Feb 16 23:38:46 crc kubenswrapper[4865]: > Feb 16 23:38:46 crc kubenswrapper[4865]: I0216 23:38:46.414861 4865 scope.go:117] "RemoveContainer" containerID="9595289612fbba036a906a5efffdc15200b748234a5d76519393359b8b0d5d06" Feb 16 23:38:46 crc kubenswrapper[4865]: E0216 23:38:46.415104 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:38:46 crc kubenswrapper[4865]: I0216 23:38:46.960302 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-2n62l" podUID="faeb82b7-bf9e-4cab-8d7f-9740b68e0826" containerName="registry-server" probeResult="failure" output=< Feb 16 23:38:46 crc kubenswrapper[4865]: timeout: failed to connect service ":50051" within 1s Feb 16 23:38:46 crc kubenswrapper[4865]: > Feb 16 23:38:47 crc kubenswrapper[4865]: I0216 23:38:47.687034 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nb54m" Feb 16 23:38:47 crc kubenswrapper[4865]: I0216 
23:38:47.687922 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nb54m" Feb 16 23:38:48 crc kubenswrapper[4865]: I0216 23:38:48.357010 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hlxh4" Feb 16 23:38:48 crc kubenswrapper[4865]: I0216 23:38:48.357062 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hlxh4" Feb 16 23:38:48 crc kubenswrapper[4865]: I0216 23:38:48.743748 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-nb54m" podUID="74022cbf-c009-4efd-bc26-fb7b370f0dca" containerName="registry-server" probeResult="failure" output=< Feb 16 23:38:48 crc kubenswrapper[4865]: timeout: failed to connect service ":50051" within 1s Feb 16 23:38:48 crc kubenswrapper[4865]: > Feb 16 23:38:49 crc kubenswrapper[4865]: I0216 23:38:49.420967 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hlxh4" podUID="b2532d65-e9f2-4448-9b52-6927d5013b85" containerName="registry-server" probeResult="failure" output=< Feb 16 23:38:49 crc kubenswrapper[4865]: timeout: failed to connect service ":50051" within 1s Feb 16 23:38:49 crc kubenswrapper[4865]: > Feb 16 23:38:55 crc kubenswrapper[4865]: I0216 23:38:55.383654 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-n7w9k" Feb 16 23:38:55 crc kubenswrapper[4865]: I0216 23:38:55.474048 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-n7w9k" Feb 16 23:38:55 crc kubenswrapper[4865]: I0216 23:38:55.641459 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n7w9k"] Feb 16 23:38:55 crc kubenswrapper[4865]: I0216 23:38:55.969486 4865 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2n62l" Feb 16 23:38:56 crc kubenswrapper[4865]: I0216 23:38:56.051938 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2n62l" Feb 16 23:38:57 crc kubenswrapper[4865]: I0216 23:38:57.396097 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-n7w9k" podUID="14681c95-5172-4788-b54a-f207c2beb1e9" containerName="registry-server" containerID="cri-o://dfd802315012b9745262675fea4aee7ff33ffe771e74613b3e307bbe2e1f70ff" gracePeriod=2 Feb 16 23:38:57 crc kubenswrapper[4865]: I0216 23:38:57.758787 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nb54m" Feb 16 23:38:57 crc kubenswrapper[4865]: I0216 23:38:57.841037 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nb54m" Feb 16 23:38:57 crc kubenswrapper[4865]: I0216 23:38:57.911049 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n7w9k" Feb 16 23:38:57 crc kubenswrapper[4865]: I0216 23:38:57.923463 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14681c95-5172-4788-b54a-f207c2beb1e9-utilities\") pod \"14681c95-5172-4788-b54a-f207c2beb1e9\" (UID: \"14681c95-5172-4788-b54a-f207c2beb1e9\") " Feb 16 23:38:57 crc kubenswrapper[4865]: I0216 23:38:57.923577 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14681c95-5172-4788-b54a-f207c2beb1e9-catalog-content\") pod \"14681c95-5172-4788-b54a-f207c2beb1e9\" (UID: \"14681c95-5172-4788-b54a-f207c2beb1e9\") " Feb 16 23:38:57 crc kubenswrapper[4865]: I0216 23:38:57.927464 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14681c95-5172-4788-b54a-f207c2beb1e9-utilities" (OuterVolumeSpecName: "utilities") pod "14681c95-5172-4788-b54a-f207c2beb1e9" (UID: "14681c95-5172-4788-b54a-f207c2beb1e9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:38:57 crc kubenswrapper[4865]: I0216 23:38:57.946558 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14681c95-5172-4788-b54a-f207c2beb1e9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "14681c95-5172-4788-b54a-f207c2beb1e9" (UID: "14681c95-5172-4788-b54a-f207c2beb1e9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:38:58 crc kubenswrapper[4865]: I0216 23:38:58.024791 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktwqt\" (UniqueName: \"kubernetes.io/projected/14681c95-5172-4788-b54a-f207c2beb1e9-kube-api-access-ktwqt\") pod \"14681c95-5172-4788-b54a-f207c2beb1e9\" (UID: \"14681c95-5172-4788-b54a-f207c2beb1e9\") " Feb 16 23:38:58 crc kubenswrapper[4865]: I0216 23:38:58.025131 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14681c95-5172-4788-b54a-f207c2beb1e9-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 23:38:58 crc kubenswrapper[4865]: I0216 23:38:58.025146 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14681c95-5172-4788-b54a-f207c2beb1e9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 23:38:58 crc kubenswrapper[4865]: I0216 23:38:58.031574 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14681c95-5172-4788-b54a-f207c2beb1e9-kube-api-access-ktwqt" (OuterVolumeSpecName: "kube-api-access-ktwqt") pod "14681c95-5172-4788-b54a-f207c2beb1e9" (UID: "14681c95-5172-4788-b54a-f207c2beb1e9"). InnerVolumeSpecName "kube-api-access-ktwqt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:38:58 crc kubenswrapper[4865]: I0216 23:38:58.126490 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ktwqt\" (UniqueName: \"kubernetes.io/projected/14681c95-5172-4788-b54a-f207c2beb1e9-kube-api-access-ktwqt\") on node \"crc\" DevicePath \"\"" Feb 16 23:38:58 crc kubenswrapper[4865]: I0216 23:38:58.247500 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2n62l"] Feb 16 23:38:58 crc kubenswrapper[4865]: I0216 23:38:58.248170 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2n62l" podUID="faeb82b7-bf9e-4cab-8d7f-9740b68e0826" containerName="registry-server" containerID="cri-o://48edda7a6fca4c274059a712cab37c9a5b61122b0f7272b5270ece557ffd3f89" gracePeriod=2 Feb 16 23:38:58 crc kubenswrapper[4865]: I0216 23:38:58.408664 4865 generic.go:334] "Generic (PLEG): container finished" podID="faeb82b7-bf9e-4cab-8d7f-9740b68e0826" containerID="48edda7a6fca4c274059a712cab37c9a5b61122b0f7272b5270ece557ffd3f89" exitCode=0 Feb 16 23:38:58 crc kubenswrapper[4865]: I0216 23:38:58.408738 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2n62l" event={"ID":"faeb82b7-bf9e-4cab-8d7f-9740b68e0826","Type":"ContainerDied","Data":"48edda7a6fca4c274059a712cab37c9a5b61122b0f7272b5270ece557ffd3f89"} Feb 16 23:38:58 crc kubenswrapper[4865]: I0216 23:38:58.412153 4865 generic.go:334] "Generic (PLEG): container finished" podID="14681c95-5172-4788-b54a-f207c2beb1e9" containerID="dfd802315012b9745262675fea4aee7ff33ffe771e74613b3e307bbe2e1f70ff" exitCode=0 Feb 16 23:38:58 crc kubenswrapper[4865]: I0216 23:38:58.412213 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n7w9k" Feb 16 23:38:58 crc kubenswrapper[4865]: I0216 23:38:58.412256 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n7w9k" event={"ID":"14681c95-5172-4788-b54a-f207c2beb1e9","Type":"ContainerDied","Data":"dfd802315012b9745262675fea4aee7ff33ffe771e74613b3e307bbe2e1f70ff"} Feb 16 23:38:58 crc kubenswrapper[4865]: I0216 23:38:58.412333 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n7w9k" event={"ID":"14681c95-5172-4788-b54a-f207c2beb1e9","Type":"ContainerDied","Data":"f1e56183ee74e3f2c48ec1297d6fe80fa704082791ce068b4fce523664c84394"} Feb 16 23:38:58 crc kubenswrapper[4865]: I0216 23:38:58.412368 4865 scope.go:117] "RemoveContainer" containerID="dfd802315012b9745262675fea4aee7ff33ffe771e74613b3e307bbe2e1f70ff" Feb 16 23:38:58 crc kubenswrapper[4865]: I0216 23:38:58.414639 4865 scope.go:117] "RemoveContainer" containerID="9595289612fbba036a906a5efffdc15200b748234a5d76519393359b8b0d5d06" Feb 16 23:38:58 crc kubenswrapper[4865]: E0216 23:38:58.415093 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:38:58 crc kubenswrapper[4865]: I0216 23:38:58.442835 4865 scope.go:117] "RemoveContainer" containerID="4bfaaeaed49de5c378eecbf3cdbd52baa1e7fd6625ff92f11a55e2248ca92e5b" Feb 16 23:38:58 crc kubenswrapper[4865]: I0216 23:38:58.461478 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n7w9k"] Feb 16 23:38:58 crc kubenswrapper[4865]: I0216 23:38:58.470070 4865 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-n7w9k"] Feb 16 23:38:58 crc kubenswrapper[4865]: I0216 23:38:58.470177 4865 scope.go:117] "RemoveContainer" containerID="a1c8f830fe4bb8de9372ec1e4d10434bd5bc046fc80a3939cfaef888203e527d" Feb 16 23:38:58 crc kubenswrapper[4865]: I0216 23:38:58.496527 4865 scope.go:117] "RemoveContainer" containerID="dfd802315012b9745262675fea4aee7ff33ffe771e74613b3e307bbe2e1f70ff" Feb 16 23:38:58 crc kubenswrapper[4865]: E0216 23:38:58.497133 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dfd802315012b9745262675fea4aee7ff33ffe771e74613b3e307bbe2e1f70ff\": container with ID starting with dfd802315012b9745262675fea4aee7ff33ffe771e74613b3e307bbe2e1f70ff not found: ID does not exist" containerID="dfd802315012b9745262675fea4aee7ff33ffe771e74613b3e307bbe2e1f70ff" Feb 16 23:38:58 crc kubenswrapper[4865]: I0216 23:38:58.497184 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfd802315012b9745262675fea4aee7ff33ffe771e74613b3e307bbe2e1f70ff"} err="failed to get container status \"dfd802315012b9745262675fea4aee7ff33ffe771e74613b3e307bbe2e1f70ff\": rpc error: code = NotFound desc = could not find container \"dfd802315012b9745262675fea4aee7ff33ffe771e74613b3e307bbe2e1f70ff\": container with ID starting with dfd802315012b9745262675fea4aee7ff33ffe771e74613b3e307bbe2e1f70ff not found: ID does not exist" Feb 16 23:38:58 crc kubenswrapper[4865]: I0216 23:38:58.497215 4865 scope.go:117] "RemoveContainer" containerID="4bfaaeaed49de5c378eecbf3cdbd52baa1e7fd6625ff92f11a55e2248ca92e5b" Feb 16 23:38:58 crc kubenswrapper[4865]: E0216 23:38:58.497661 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bfaaeaed49de5c378eecbf3cdbd52baa1e7fd6625ff92f11a55e2248ca92e5b\": container with ID starting with 
4bfaaeaed49de5c378eecbf3cdbd52baa1e7fd6625ff92f11a55e2248ca92e5b not found: ID does not exist" containerID="4bfaaeaed49de5c378eecbf3cdbd52baa1e7fd6625ff92f11a55e2248ca92e5b" Feb 16 23:38:58 crc kubenswrapper[4865]: I0216 23:38:58.497698 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bfaaeaed49de5c378eecbf3cdbd52baa1e7fd6625ff92f11a55e2248ca92e5b"} err="failed to get container status \"4bfaaeaed49de5c378eecbf3cdbd52baa1e7fd6625ff92f11a55e2248ca92e5b\": rpc error: code = NotFound desc = could not find container \"4bfaaeaed49de5c378eecbf3cdbd52baa1e7fd6625ff92f11a55e2248ca92e5b\": container with ID starting with 4bfaaeaed49de5c378eecbf3cdbd52baa1e7fd6625ff92f11a55e2248ca92e5b not found: ID does not exist" Feb 16 23:38:58 crc kubenswrapper[4865]: I0216 23:38:58.497717 4865 scope.go:117] "RemoveContainer" containerID="a1c8f830fe4bb8de9372ec1e4d10434bd5bc046fc80a3939cfaef888203e527d" Feb 16 23:38:58 crc kubenswrapper[4865]: E0216 23:38:58.498088 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1c8f830fe4bb8de9372ec1e4d10434bd5bc046fc80a3939cfaef888203e527d\": container with ID starting with a1c8f830fe4bb8de9372ec1e4d10434bd5bc046fc80a3939cfaef888203e527d not found: ID does not exist" containerID="a1c8f830fe4bb8de9372ec1e4d10434bd5bc046fc80a3939cfaef888203e527d" Feb 16 23:38:58 crc kubenswrapper[4865]: I0216 23:38:58.498115 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1c8f830fe4bb8de9372ec1e4d10434bd5bc046fc80a3939cfaef888203e527d"} err="failed to get container status \"a1c8f830fe4bb8de9372ec1e4d10434bd5bc046fc80a3939cfaef888203e527d\": rpc error: code = NotFound desc = could not find container \"a1c8f830fe4bb8de9372ec1e4d10434bd5bc046fc80a3939cfaef888203e527d\": container with ID starting with a1c8f830fe4bb8de9372ec1e4d10434bd5bc046fc80a3939cfaef888203e527d not found: ID does not 
exist" Feb 16 23:38:58 crc kubenswrapper[4865]: I0216 23:38:58.749472 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2n62l" Feb 16 23:38:58 crc kubenswrapper[4865]: I0216 23:38:58.839698 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/faeb82b7-bf9e-4cab-8d7f-9740b68e0826-utilities\") pod \"faeb82b7-bf9e-4cab-8d7f-9740b68e0826\" (UID: \"faeb82b7-bf9e-4cab-8d7f-9740b68e0826\") " Feb 16 23:38:58 crc kubenswrapper[4865]: I0216 23:38:58.840052 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/faeb82b7-bf9e-4cab-8d7f-9740b68e0826-catalog-content\") pod \"faeb82b7-bf9e-4cab-8d7f-9740b68e0826\" (UID: \"faeb82b7-bf9e-4cab-8d7f-9740b68e0826\") " Feb 16 23:38:58 crc kubenswrapper[4865]: I0216 23:38:58.840354 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72ldd\" (UniqueName: \"kubernetes.io/projected/faeb82b7-bf9e-4cab-8d7f-9740b68e0826-kube-api-access-72ldd\") pod \"faeb82b7-bf9e-4cab-8d7f-9740b68e0826\" (UID: \"faeb82b7-bf9e-4cab-8d7f-9740b68e0826\") " Feb 16 23:38:58 crc kubenswrapper[4865]: I0216 23:38:58.840895 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/faeb82b7-bf9e-4cab-8d7f-9740b68e0826-utilities" (OuterVolumeSpecName: "utilities") pod "faeb82b7-bf9e-4cab-8d7f-9740b68e0826" (UID: "faeb82b7-bf9e-4cab-8d7f-9740b68e0826"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:38:58 crc kubenswrapper[4865]: I0216 23:38:58.845999 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/faeb82b7-bf9e-4cab-8d7f-9740b68e0826-kube-api-access-72ldd" (OuterVolumeSpecName: "kube-api-access-72ldd") pod "faeb82b7-bf9e-4cab-8d7f-9740b68e0826" (UID: "faeb82b7-bf9e-4cab-8d7f-9740b68e0826"). InnerVolumeSpecName "kube-api-access-72ldd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:38:58 crc kubenswrapper[4865]: I0216 23:38:58.894446 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/faeb82b7-bf9e-4cab-8d7f-9740b68e0826-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "faeb82b7-bf9e-4cab-8d7f-9740b68e0826" (UID: "faeb82b7-bf9e-4cab-8d7f-9740b68e0826"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:38:58 crc kubenswrapper[4865]: I0216 23:38:58.942338 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72ldd\" (UniqueName: \"kubernetes.io/projected/faeb82b7-bf9e-4cab-8d7f-9740b68e0826-kube-api-access-72ldd\") on node \"crc\" DevicePath \"\"" Feb 16 23:38:58 crc kubenswrapper[4865]: I0216 23:38:58.942373 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/faeb82b7-bf9e-4cab-8d7f-9740b68e0826-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 23:38:58 crc kubenswrapper[4865]: I0216 23:38:58.942382 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/faeb82b7-bf9e-4cab-8d7f-9740b68e0826-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 23:38:59 crc kubenswrapper[4865]: I0216 23:38:59.426469 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hlxh4" podUID="b2532d65-e9f2-4448-9b52-6927d5013b85" 
containerName="registry-server" probeResult="failure" output=< Feb 16 23:38:59 crc kubenswrapper[4865]: timeout: failed to connect service ":50051" within 1s Feb 16 23:38:59 crc kubenswrapper[4865]: > Feb 16 23:38:59 crc kubenswrapper[4865]: I0216 23:38:59.433309 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2n62l" event={"ID":"faeb82b7-bf9e-4cab-8d7f-9740b68e0826","Type":"ContainerDied","Data":"16209a8234e85f3ae00dc111355847eb2300c0eb3fe9ebe3cb368cde2ddddc34"} Feb 16 23:38:59 crc kubenswrapper[4865]: I0216 23:38:59.433503 4865 scope.go:117] "RemoveContainer" containerID="48edda7a6fca4c274059a712cab37c9a5b61122b0f7272b5270ece557ffd3f89" Feb 16 23:38:59 crc kubenswrapper[4865]: I0216 23:38:59.433603 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2n62l" Feb 16 23:38:59 crc kubenswrapper[4865]: I0216 23:38:59.456827 4865 scope.go:117] "RemoveContainer" containerID="e31147d0e8f97b64e679540d6a8d3a955fe3d0b93721906f9679513f3401a711" Feb 16 23:38:59 crc kubenswrapper[4865]: I0216 23:38:59.486605 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2n62l"] Feb 16 23:38:59 crc kubenswrapper[4865]: I0216 23:38:59.496417 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2n62l"] Feb 16 23:38:59 crc kubenswrapper[4865]: I0216 23:38:59.496544 4865 scope.go:117] "RemoveContainer" containerID="44475d967f7db3b7610941588cd12a842964f166b1403fa9f16a7d312a6493b3" Feb 16 23:39:00 crc kubenswrapper[4865]: I0216 23:39:00.050796 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nb54m"] Feb 16 23:39:00 crc kubenswrapper[4865]: I0216 23:39:00.051718 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nb54m" 
podUID="74022cbf-c009-4efd-bc26-fb7b370f0dca" containerName="registry-server" containerID="cri-o://3abf869d334bcaea1841c0a5f529a759520797289e5449a441474e80ac9aa37a" gracePeriod=2 Feb 16 23:39:00 crc kubenswrapper[4865]: I0216 23:39:00.427232 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14681c95-5172-4788-b54a-f207c2beb1e9" path="/var/lib/kubelet/pods/14681c95-5172-4788-b54a-f207c2beb1e9/volumes" Feb 16 23:39:00 crc kubenswrapper[4865]: I0216 23:39:00.428439 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="faeb82b7-bf9e-4cab-8d7f-9740b68e0826" path="/var/lib/kubelet/pods/faeb82b7-bf9e-4cab-8d7f-9740b68e0826/volumes" Feb 16 23:39:00 crc kubenswrapper[4865]: I0216 23:39:00.455404 4865 generic.go:334] "Generic (PLEG): container finished" podID="74022cbf-c009-4efd-bc26-fb7b370f0dca" containerID="3abf869d334bcaea1841c0a5f529a759520797289e5449a441474e80ac9aa37a" exitCode=0 Feb 16 23:39:00 crc kubenswrapper[4865]: I0216 23:39:00.455501 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nb54m" event={"ID":"74022cbf-c009-4efd-bc26-fb7b370f0dca","Type":"ContainerDied","Data":"3abf869d334bcaea1841c0a5f529a759520797289e5449a441474e80ac9aa37a"} Feb 16 23:39:00 crc kubenswrapper[4865]: I0216 23:39:00.565807 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nb54m" Feb 16 23:39:00 crc kubenswrapper[4865]: I0216 23:39:00.576349 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74022cbf-c009-4efd-bc26-fb7b370f0dca-catalog-content\") pod \"74022cbf-c009-4efd-bc26-fb7b370f0dca\" (UID: \"74022cbf-c009-4efd-bc26-fb7b370f0dca\") " Feb 16 23:39:00 crc kubenswrapper[4865]: I0216 23:39:00.576490 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74022cbf-c009-4efd-bc26-fb7b370f0dca-utilities\") pod \"74022cbf-c009-4efd-bc26-fb7b370f0dca\" (UID: \"74022cbf-c009-4efd-bc26-fb7b370f0dca\") " Feb 16 23:39:00 crc kubenswrapper[4865]: I0216 23:39:00.576557 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjcvc\" (UniqueName: \"kubernetes.io/projected/74022cbf-c009-4efd-bc26-fb7b370f0dca-kube-api-access-kjcvc\") pod \"74022cbf-c009-4efd-bc26-fb7b370f0dca\" (UID: \"74022cbf-c009-4efd-bc26-fb7b370f0dca\") " Feb 16 23:39:00 crc kubenswrapper[4865]: I0216 23:39:00.577996 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74022cbf-c009-4efd-bc26-fb7b370f0dca-utilities" (OuterVolumeSpecName: "utilities") pod "74022cbf-c009-4efd-bc26-fb7b370f0dca" (UID: "74022cbf-c009-4efd-bc26-fb7b370f0dca"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:39:00 crc kubenswrapper[4865]: I0216 23:39:00.581406 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74022cbf-c009-4efd-bc26-fb7b370f0dca-kube-api-access-kjcvc" (OuterVolumeSpecName: "kube-api-access-kjcvc") pod "74022cbf-c009-4efd-bc26-fb7b370f0dca" (UID: "74022cbf-c009-4efd-bc26-fb7b370f0dca"). InnerVolumeSpecName "kube-api-access-kjcvc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:39:00 crc kubenswrapper[4865]: I0216 23:39:00.643015 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74022cbf-c009-4efd-bc26-fb7b370f0dca-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "74022cbf-c009-4efd-bc26-fb7b370f0dca" (UID: "74022cbf-c009-4efd-bc26-fb7b370f0dca"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:39:00 crc kubenswrapper[4865]: I0216 23:39:00.679479 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjcvc\" (UniqueName: \"kubernetes.io/projected/74022cbf-c009-4efd-bc26-fb7b370f0dca-kube-api-access-kjcvc\") on node \"crc\" DevicePath \"\"" Feb 16 23:39:00 crc kubenswrapper[4865]: I0216 23:39:00.679552 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74022cbf-c009-4efd-bc26-fb7b370f0dca-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 23:39:00 crc kubenswrapper[4865]: I0216 23:39:00.679574 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74022cbf-c009-4efd-bc26-fb7b370f0dca-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 23:39:01 crc kubenswrapper[4865]: I0216 23:39:01.469840 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nb54m" event={"ID":"74022cbf-c009-4efd-bc26-fb7b370f0dca","Type":"ContainerDied","Data":"1d83ede861f895781f4251c988d07e8f3296a1c5e234a7afdefc5038d89c195a"} Feb 16 23:39:01 crc kubenswrapper[4865]: I0216 23:39:01.470230 4865 scope.go:117] "RemoveContainer" containerID="3abf869d334bcaea1841c0a5f529a759520797289e5449a441474e80ac9aa37a" Feb 16 23:39:01 crc kubenswrapper[4865]: I0216 23:39:01.470006 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nb54m" Feb 16 23:39:01 crc kubenswrapper[4865]: I0216 23:39:01.494308 4865 scope.go:117] "RemoveContainer" containerID="2a33eecdb8c9ba4539a58cf24032f17d68079317241a7894e06f47acdb61cea0" Feb 16 23:39:01 crc kubenswrapper[4865]: I0216 23:39:01.536394 4865 scope.go:117] "RemoveContainer" containerID="bb66492d099b7da78fcf5ab0e9fb7206a6dea6f6699ad2c8b23493b5523923d4" Feb 16 23:39:01 crc kubenswrapper[4865]: I0216 23:39:01.539556 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nb54m"] Feb 16 23:39:01 crc kubenswrapper[4865]: I0216 23:39:01.551492 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nb54m"] Feb 16 23:39:02 crc kubenswrapper[4865]: I0216 23:39:02.430782 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74022cbf-c009-4efd-bc26-fb7b370f0dca" path="/var/lib/kubelet/pods/74022cbf-c009-4efd-bc26-fb7b370f0dca/volumes" Feb 16 23:39:09 crc kubenswrapper[4865]: I0216 23:39:09.400131 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hlxh4" podUID="b2532d65-e9f2-4448-9b52-6927d5013b85" containerName="registry-server" probeResult="failure" output=< Feb 16 23:39:09 crc kubenswrapper[4865]: timeout: failed to connect service ":50051" within 1s Feb 16 23:39:09 crc kubenswrapper[4865]: > Feb 16 23:39:11 crc kubenswrapper[4865]: I0216 23:39:11.415210 4865 scope.go:117] "RemoveContainer" containerID="9595289612fbba036a906a5efffdc15200b748234a5d76519393359b8b0d5d06" Feb 16 23:39:11 crc kubenswrapper[4865]: E0216 23:39:11.416029 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:39:18 crc kubenswrapper[4865]: I0216 23:39:18.429034 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hlxh4" Feb 16 23:39:18 crc kubenswrapper[4865]: I0216 23:39:18.485861 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hlxh4" Feb 16 23:39:18 crc kubenswrapper[4865]: I0216 23:39:18.672511 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hlxh4"] Feb 16 23:39:19 crc kubenswrapper[4865]: I0216 23:39:19.662323 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hlxh4" podUID="b2532d65-e9f2-4448-9b52-6927d5013b85" containerName="registry-server" containerID="cri-o://f98ef83ef520ca42aadba7308d00e6b70a5f3d2b0c9406517f7f1185c29f52f7" gracePeriod=2 Feb 16 23:39:20 crc kubenswrapper[4865]: I0216 23:39:20.228346 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hlxh4" Feb 16 23:39:20 crc kubenswrapper[4865]: I0216 23:39:20.343185 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2532d65-e9f2-4448-9b52-6927d5013b85-catalog-content\") pod \"b2532d65-e9f2-4448-9b52-6927d5013b85\" (UID: \"b2532d65-e9f2-4448-9b52-6927d5013b85\") " Feb 16 23:39:20 crc kubenswrapper[4865]: I0216 23:39:20.343263 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5flgm\" (UniqueName: \"kubernetes.io/projected/b2532d65-e9f2-4448-9b52-6927d5013b85-kube-api-access-5flgm\") pod \"b2532d65-e9f2-4448-9b52-6927d5013b85\" (UID: \"b2532d65-e9f2-4448-9b52-6927d5013b85\") " Feb 16 23:39:20 crc kubenswrapper[4865]: I0216 23:39:20.343313 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2532d65-e9f2-4448-9b52-6927d5013b85-utilities\") pod \"b2532d65-e9f2-4448-9b52-6927d5013b85\" (UID: \"b2532d65-e9f2-4448-9b52-6927d5013b85\") " Feb 16 23:39:20 crc kubenswrapper[4865]: I0216 23:39:20.344083 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2532d65-e9f2-4448-9b52-6927d5013b85-utilities" (OuterVolumeSpecName: "utilities") pod "b2532d65-e9f2-4448-9b52-6927d5013b85" (UID: "b2532d65-e9f2-4448-9b52-6927d5013b85"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:39:20 crc kubenswrapper[4865]: I0216 23:39:20.349596 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2532d65-e9f2-4448-9b52-6927d5013b85-kube-api-access-5flgm" (OuterVolumeSpecName: "kube-api-access-5flgm") pod "b2532d65-e9f2-4448-9b52-6927d5013b85" (UID: "b2532d65-e9f2-4448-9b52-6927d5013b85"). InnerVolumeSpecName "kube-api-access-5flgm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:39:20 crc kubenswrapper[4865]: I0216 23:39:20.445649 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5flgm\" (UniqueName: \"kubernetes.io/projected/b2532d65-e9f2-4448-9b52-6927d5013b85-kube-api-access-5flgm\") on node \"crc\" DevicePath \"\"" Feb 16 23:39:20 crc kubenswrapper[4865]: I0216 23:39:20.445697 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2532d65-e9f2-4448-9b52-6927d5013b85-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 23:39:20 crc kubenswrapper[4865]: I0216 23:39:20.473906 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2532d65-e9f2-4448-9b52-6927d5013b85-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b2532d65-e9f2-4448-9b52-6927d5013b85" (UID: "b2532d65-e9f2-4448-9b52-6927d5013b85"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:39:20 crc kubenswrapper[4865]: I0216 23:39:20.547426 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2532d65-e9f2-4448-9b52-6927d5013b85-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 23:39:20 crc kubenswrapper[4865]: I0216 23:39:20.671874 4865 generic.go:334] "Generic (PLEG): container finished" podID="b2532d65-e9f2-4448-9b52-6927d5013b85" containerID="f98ef83ef520ca42aadba7308d00e6b70a5f3d2b0c9406517f7f1185c29f52f7" exitCode=0 Feb 16 23:39:20 crc kubenswrapper[4865]: I0216 23:39:20.671918 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hlxh4" event={"ID":"b2532d65-e9f2-4448-9b52-6927d5013b85","Type":"ContainerDied","Data":"f98ef83ef520ca42aadba7308d00e6b70a5f3d2b0c9406517f7f1185c29f52f7"} Feb 16 23:39:20 crc kubenswrapper[4865]: I0216 23:39:20.671960 4865 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-hlxh4" event={"ID":"b2532d65-e9f2-4448-9b52-6927d5013b85","Type":"ContainerDied","Data":"3a62e64446c83236fa8d0660d5c4ec56d91846e5d9183faa709081dc02183b52"} Feb 16 23:39:20 crc kubenswrapper[4865]: I0216 23:39:20.671959 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hlxh4" Feb 16 23:39:20 crc kubenswrapper[4865]: I0216 23:39:20.671982 4865 scope.go:117] "RemoveContainer" containerID="f98ef83ef520ca42aadba7308d00e6b70a5f3d2b0c9406517f7f1185c29f52f7" Feb 16 23:39:20 crc kubenswrapper[4865]: I0216 23:39:20.707948 4865 scope.go:117] "RemoveContainer" containerID="f0746d9115bdca61a15c7b60862f706ac85062271df2d2281fe18af5eab7ca1d" Feb 16 23:39:20 crc kubenswrapper[4865]: I0216 23:39:20.727506 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hlxh4"] Feb 16 23:39:20 crc kubenswrapper[4865]: I0216 23:39:20.737093 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hlxh4"] Feb 16 23:39:20 crc kubenswrapper[4865]: I0216 23:39:20.744294 4865 scope.go:117] "RemoveContainer" containerID="2bdf0efd659693d1270130e3ab84993d2b98dd213a898cacbac8f0692b647a22" Feb 16 23:39:20 crc kubenswrapper[4865]: I0216 23:39:20.846619 4865 scope.go:117] "RemoveContainer" containerID="f98ef83ef520ca42aadba7308d00e6b70a5f3d2b0c9406517f7f1185c29f52f7" Feb 16 23:39:20 crc kubenswrapper[4865]: E0216 23:39:20.847222 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f98ef83ef520ca42aadba7308d00e6b70a5f3d2b0c9406517f7f1185c29f52f7\": container with ID starting with f98ef83ef520ca42aadba7308d00e6b70a5f3d2b0c9406517f7f1185c29f52f7 not found: ID does not exist" containerID="f98ef83ef520ca42aadba7308d00e6b70a5f3d2b0c9406517f7f1185c29f52f7" Feb 16 23:39:20 crc kubenswrapper[4865]: I0216 23:39:20.847250 4865 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f98ef83ef520ca42aadba7308d00e6b70a5f3d2b0c9406517f7f1185c29f52f7"} err="failed to get container status \"f98ef83ef520ca42aadba7308d00e6b70a5f3d2b0c9406517f7f1185c29f52f7\": rpc error: code = NotFound desc = could not find container \"f98ef83ef520ca42aadba7308d00e6b70a5f3d2b0c9406517f7f1185c29f52f7\": container with ID starting with f98ef83ef520ca42aadba7308d00e6b70a5f3d2b0c9406517f7f1185c29f52f7 not found: ID does not exist" Feb 16 23:39:20 crc kubenswrapper[4865]: I0216 23:39:20.848232 4865 scope.go:117] "RemoveContainer" containerID="f0746d9115bdca61a15c7b60862f706ac85062271df2d2281fe18af5eab7ca1d" Feb 16 23:39:20 crc kubenswrapper[4865]: E0216 23:39:20.850884 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0746d9115bdca61a15c7b60862f706ac85062271df2d2281fe18af5eab7ca1d\": container with ID starting with f0746d9115bdca61a15c7b60862f706ac85062271df2d2281fe18af5eab7ca1d not found: ID does not exist" containerID="f0746d9115bdca61a15c7b60862f706ac85062271df2d2281fe18af5eab7ca1d" Feb 16 23:39:20 crc kubenswrapper[4865]: I0216 23:39:20.850945 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0746d9115bdca61a15c7b60862f706ac85062271df2d2281fe18af5eab7ca1d"} err="failed to get container status \"f0746d9115bdca61a15c7b60862f706ac85062271df2d2281fe18af5eab7ca1d\": rpc error: code = NotFound desc = could not find container \"f0746d9115bdca61a15c7b60862f706ac85062271df2d2281fe18af5eab7ca1d\": container with ID starting with f0746d9115bdca61a15c7b60862f706ac85062271df2d2281fe18af5eab7ca1d not found: ID does not exist" Feb 16 23:39:20 crc kubenswrapper[4865]: I0216 23:39:20.850979 4865 scope.go:117] "RemoveContainer" containerID="2bdf0efd659693d1270130e3ab84993d2b98dd213a898cacbac8f0692b647a22" Feb 16 23:39:20 crc kubenswrapper[4865]: E0216 
23:39:20.854068 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2bdf0efd659693d1270130e3ab84993d2b98dd213a898cacbac8f0692b647a22\": container with ID starting with 2bdf0efd659693d1270130e3ab84993d2b98dd213a898cacbac8f0692b647a22 not found: ID does not exist" containerID="2bdf0efd659693d1270130e3ab84993d2b98dd213a898cacbac8f0692b647a22" Feb 16 23:39:20 crc kubenswrapper[4865]: I0216 23:39:20.854103 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bdf0efd659693d1270130e3ab84993d2b98dd213a898cacbac8f0692b647a22"} err="failed to get container status \"2bdf0efd659693d1270130e3ab84993d2b98dd213a898cacbac8f0692b647a22\": rpc error: code = NotFound desc = could not find container \"2bdf0efd659693d1270130e3ab84993d2b98dd213a898cacbac8f0692b647a22\": container with ID starting with 2bdf0efd659693d1270130e3ab84993d2b98dd213a898cacbac8f0692b647a22 not found: ID does not exist" Feb 16 23:39:22 crc kubenswrapper[4865]: I0216 23:39:22.415642 4865 scope.go:117] "RemoveContainer" containerID="9595289612fbba036a906a5efffdc15200b748234a5d76519393359b8b0d5d06" Feb 16 23:39:22 crc kubenswrapper[4865]: E0216 23:39:22.424359 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:39:22 crc kubenswrapper[4865]: I0216 23:39:22.441916 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2532d65-e9f2-4448-9b52-6927d5013b85" path="/var/lib/kubelet/pods/b2532d65-e9f2-4448-9b52-6927d5013b85/volumes" Feb 16 23:39:34 crc kubenswrapper[4865]: I0216 23:39:34.415463 
4865 scope.go:117] "RemoveContainer" containerID="9595289612fbba036a906a5efffdc15200b748234a5d76519393359b8b0d5d06" Feb 16 23:39:34 crc kubenswrapper[4865]: E0216 23:39:34.416605 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:39:45 crc kubenswrapper[4865]: I0216 23:39:45.414856 4865 scope.go:117] "RemoveContainer" containerID="9595289612fbba036a906a5efffdc15200b748234a5d76519393359b8b0d5d06" Feb 16 23:39:45 crc kubenswrapper[4865]: E0216 23:39:45.416209 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:39:58 crc kubenswrapper[4865]: I0216 23:39:58.414886 4865 scope.go:117] "RemoveContainer" containerID="9595289612fbba036a906a5efffdc15200b748234a5d76519393359b8b0d5d06" Feb 16 23:39:58 crc kubenswrapper[4865]: E0216 23:39:58.415980 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:40:13 crc kubenswrapper[4865]: I0216 
23:40:13.414955 4865 scope.go:117] "RemoveContainer" containerID="9595289612fbba036a906a5efffdc15200b748234a5d76519393359b8b0d5d06" Feb 16 23:40:13 crc kubenswrapper[4865]: E0216 23:40:13.415754 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:40:24 crc kubenswrapper[4865]: I0216 23:40:24.414629 4865 scope.go:117] "RemoveContainer" containerID="9595289612fbba036a906a5efffdc15200b748234a5d76519393359b8b0d5d06" Feb 16 23:40:24 crc kubenswrapper[4865]: E0216 23:40:24.415347 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:40:39 crc kubenswrapper[4865]: I0216 23:40:39.414754 4865 scope.go:117] "RemoveContainer" containerID="9595289612fbba036a906a5efffdc15200b748234a5d76519393359b8b0d5d06" Feb 16 23:40:39 crc kubenswrapper[4865]: E0216 23:40:39.415635 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:40:52 crc 
kubenswrapper[4865]: I0216 23:40:52.414271 4865 scope.go:117] "RemoveContainer" containerID="9595289612fbba036a906a5efffdc15200b748234a5d76519393359b8b0d5d06" Feb 16 23:40:53 crc kubenswrapper[4865]: I0216 23:40:53.660924 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" event={"ID":"af5ee041-5763-4a28-9d12-7ba21bbb9dbc","Type":"ContainerStarted","Data":"cafc9fd94d172ab87eed495bd76d317e286c005153ef40f1a74890d70b77675b"} Feb 16 23:41:12 crc kubenswrapper[4865]: I0216 23:41:12.834597 4865 generic.go:334] "Generic (PLEG): container finished" podID="dc785498-c658-47ed-8329-0e8c81c771be" containerID="80149e643f12b9bc24d275350aba68b6cb9840ea31b2d1b121ab45472ec6fe0b" exitCode=0 Feb 16 23:41:12 crc kubenswrapper[4865]: I0216 23:41:12.834658 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"dc785498-c658-47ed-8329-0e8c81c771be","Type":"ContainerDied","Data":"80149e643f12b9bc24d275350aba68b6cb9840ea31b2d1b121ab45472ec6fe0b"} Feb 16 23:41:14 crc kubenswrapper[4865]: I0216 23:41:14.254122 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 16 23:41:14 crc kubenswrapper[4865]: I0216 23:41:14.334513 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dc785498-c658-47ed-8329-0e8c81c771be-config-data\") pod \"dc785498-c658-47ed-8329-0e8c81c771be\" (UID: \"dc785498-c658-47ed-8329-0e8c81c771be\") " Feb 16 23:41:14 crc kubenswrapper[4865]: I0216 23:41:14.334679 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/dc785498-c658-47ed-8329-0e8c81c771be-ca-certs\") pod \"dc785498-c658-47ed-8329-0e8c81c771be\" (UID: \"dc785498-c658-47ed-8329-0e8c81c771be\") " Feb 16 23:41:14 crc kubenswrapper[4865]: I0216 23:41:14.334705 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"dc785498-c658-47ed-8329-0e8c81c771be\" (UID: \"dc785498-c658-47ed-8329-0e8c81c771be\") " Feb 16 23:41:14 crc kubenswrapper[4865]: I0216 23:41:14.334769 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/dc785498-c658-47ed-8329-0e8c81c771be-openstack-config\") pod \"dc785498-c658-47ed-8329-0e8c81c771be\" (UID: \"dc785498-c658-47ed-8329-0e8c81c771be\") " Feb 16 23:41:14 crc kubenswrapper[4865]: I0216 23:41:14.334804 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/dc785498-c658-47ed-8329-0e8c81c771be-openstack-config-secret\") pod \"dc785498-c658-47ed-8329-0e8c81c771be\" (UID: \"dc785498-c658-47ed-8329-0e8c81c771be\") " Feb 16 23:41:14 crc kubenswrapper[4865]: I0216 23:41:14.334829 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4lhmf\" (UniqueName: 
\"kubernetes.io/projected/dc785498-c658-47ed-8329-0e8c81c771be-kube-api-access-4lhmf\") pod \"dc785498-c658-47ed-8329-0e8c81c771be\" (UID: \"dc785498-c658-47ed-8329-0e8c81c771be\") " Feb 16 23:41:14 crc kubenswrapper[4865]: I0216 23:41:14.334910 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/dc785498-c658-47ed-8329-0e8c81c771be-test-operator-ephemeral-temporary\") pod \"dc785498-c658-47ed-8329-0e8c81c771be\" (UID: \"dc785498-c658-47ed-8329-0e8c81c771be\") " Feb 16 23:41:14 crc kubenswrapper[4865]: I0216 23:41:14.334945 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dc785498-c658-47ed-8329-0e8c81c771be-ssh-key\") pod \"dc785498-c658-47ed-8329-0e8c81c771be\" (UID: \"dc785498-c658-47ed-8329-0e8c81c771be\") " Feb 16 23:41:14 crc kubenswrapper[4865]: I0216 23:41:14.334969 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/dc785498-c658-47ed-8329-0e8c81c771be-test-operator-ephemeral-workdir\") pod \"dc785498-c658-47ed-8329-0e8c81c771be\" (UID: \"dc785498-c658-47ed-8329-0e8c81c771be\") " Feb 16 23:41:14 crc kubenswrapper[4865]: I0216 23:41:14.336072 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc785498-c658-47ed-8329-0e8c81c771be-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "dc785498-c658-47ed-8329-0e8c81c771be" (UID: "dc785498-c658-47ed-8329-0e8c81c771be"). InnerVolumeSpecName "test-operator-ephemeral-temporary". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:41:14 crc kubenswrapper[4865]: I0216 23:41:14.336230 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc785498-c658-47ed-8329-0e8c81c771be-config-data" (OuterVolumeSpecName: "config-data") pod "dc785498-c658-47ed-8329-0e8c81c771be" (UID: "dc785498-c658-47ed-8329-0e8c81c771be"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:41:14 crc kubenswrapper[4865]: I0216 23:41:14.342456 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc785498-c658-47ed-8329-0e8c81c771be-kube-api-access-4lhmf" (OuterVolumeSpecName: "kube-api-access-4lhmf") pod "dc785498-c658-47ed-8329-0e8c81c771be" (UID: "dc785498-c658-47ed-8329-0e8c81c771be"). InnerVolumeSpecName "kube-api-access-4lhmf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:41:14 crc kubenswrapper[4865]: I0216 23:41:14.344395 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "test-operator-logs") pod "dc785498-c658-47ed-8329-0e8c81c771be" (UID: "dc785498-c658-47ed-8329-0e8c81c771be"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 16 23:41:14 crc kubenswrapper[4865]: I0216 23:41:14.349966 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc785498-c658-47ed-8329-0e8c81c771be-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "dc785498-c658-47ed-8329-0e8c81c771be" (UID: "dc785498-c658-47ed-8329-0e8c81c771be"). InnerVolumeSpecName "test-operator-ephemeral-workdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:41:14 crc kubenswrapper[4865]: I0216 23:41:14.363651 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc785498-c658-47ed-8329-0e8c81c771be-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "dc785498-c658-47ed-8329-0e8c81c771be" (UID: "dc785498-c658-47ed-8329-0e8c81c771be"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:41:14 crc kubenswrapper[4865]: I0216 23:41:14.371260 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc785498-c658-47ed-8329-0e8c81c771be-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "dc785498-c658-47ed-8329-0e8c81c771be" (UID: "dc785498-c658-47ed-8329-0e8c81c771be"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:41:14 crc kubenswrapper[4865]: I0216 23:41:14.379361 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc785498-c658-47ed-8329-0e8c81c771be-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "dc785498-c658-47ed-8329-0e8c81c771be" (UID: "dc785498-c658-47ed-8329-0e8c81c771be"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:41:14 crc kubenswrapper[4865]: I0216 23:41:14.404222 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc785498-c658-47ed-8329-0e8c81c771be-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "dc785498-c658-47ed-8329-0e8c81c771be" (UID: "dc785498-c658-47ed-8329-0e8c81c771be"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:41:14 crc kubenswrapper[4865]: I0216 23:41:14.437539 4865 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/dc785498-c658-47ed-8329-0e8c81c771be-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Feb 16 23:41:14 crc kubenswrapper[4865]: I0216 23:41:14.437569 4865 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dc785498-c658-47ed-8329-0e8c81c771be-ssh-key\") on node \"crc\" DevicePath \"\"" Feb 16 23:41:14 crc kubenswrapper[4865]: I0216 23:41:14.437579 4865 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/dc785498-c658-47ed-8329-0e8c81c771be-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Feb 16 23:41:14 crc kubenswrapper[4865]: I0216 23:41:14.437589 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dc785498-c658-47ed-8329-0e8c81c771be-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 23:41:14 crc kubenswrapper[4865]: I0216 23:41:14.437600 4865 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/dc785498-c658-47ed-8329-0e8c81c771be-ca-certs\") on node \"crc\" DevicePath \"\"" Feb 16 23:41:14 crc kubenswrapper[4865]: I0216 23:41:14.437619 4865 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Feb 16 23:41:14 crc kubenswrapper[4865]: I0216 23:41:14.437628 4865 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/dc785498-c658-47ed-8329-0e8c81c771be-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 16 23:41:14 crc kubenswrapper[4865]: 
I0216 23:41:14.437637 4865 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/dc785498-c658-47ed-8329-0e8c81c771be-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 16 23:41:14 crc kubenswrapper[4865]: I0216 23:41:14.437648 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4lhmf\" (UniqueName: \"kubernetes.io/projected/dc785498-c658-47ed-8329-0e8c81c771be-kube-api-access-4lhmf\") on node \"crc\" DevicePath \"\"" Feb 16 23:41:14 crc kubenswrapper[4865]: I0216 23:41:14.459245 4865 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Feb 16 23:41:14 crc kubenswrapper[4865]: I0216 23:41:14.539528 4865 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Feb 16 23:41:14 crc kubenswrapper[4865]: I0216 23:41:14.859202 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"dc785498-c658-47ed-8329-0e8c81c771be","Type":"ContainerDied","Data":"5b6ed13b08f907ff8003fcb9e5d25783993c6c25a162f58109e68beb962896ee"} Feb 16 23:41:14 crc kubenswrapper[4865]: I0216 23:41:14.859330 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest"
Feb 16 23:41:14 crc kubenswrapper[4865]: I0216 23:41:14.859333 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b6ed13b08f907ff8003fcb9e5d25783993c6c25a162f58109e68beb962896ee"
Feb 16 23:41:16 crc kubenswrapper[4865]: I0216 23:41:16.926584 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Feb 16 23:41:16 crc kubenswrapper[4865]: E0216 23:41:16.927766 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74022cbf-c009-4efd-bc26-fb7b370f0dca" containerName="extract-utilities"
Feb 16 23:41:16 crc kubenswrapper[4865]: I0216 23:41:16.927792 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="74022cbf-c009-4efd-bc26-fb7b370f0dca" containerName="extract-utilities"
Feb 16 23:41:16 crc kubenswrapper[4865]: E0216 23:41:16.927818 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14681c95-5172-4788-b54a-f207c2beb1e9" containerName="extract-content"
Feb 16 23:41:16 crc kubenswrapper[4865]: I0216 23:41:16.927828 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="14681c95-5172-4788-b54a-f207c2beb1e9" containerName="extract-content"
Feb 16 23:41:16 crc kubenswrapper[4865]: E0216 23:41:16.927850 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14681c95-5172-4788-b54a-f207c2beb1e9" containerName="extract-utilities"
Feb 16 23:41:16 crc kubenswrapper[4865]: I0216 23:41:16.927863 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="14681c95-5172-4788-b54a-f207c2beb1e9" containerName="extract-utilities"
Feb 16 23:41:16 crc kubenswrapper[4865]: E0216 23:41:16.927878 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faeb82b7-bf9e-4cab-8d7f-9740b68e0826" containerName="registry-server"
Feb 16 23:41:16 crc kubenswrapper[4865]: I0216 23:41:16.927888 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="faeb82b7-bf9e-4cab-8d7f-9740b68e0826" containerName="registry-server"
Feb 16 23:41:16 crc kubenswrapper[4865]: E0216 23:41:16.927906 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faeb82b7-bf9e-4cab-8d7f-9740b68e0826" containerName="extract-utilities"
Feb 16 23:41:16 crc kubenswrapper[4865]: I0216 23:41:16.927916 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="faeb82b7-bf9e-4cab-8d7f-9740b68e0826" containerName="extract-utilities"
Feb 16 23:41:16 crc kubenswrapper[4865]: E0216 23:41:16.927955 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14681c95-5172-4788-b54a-f207c2beb1e9" containerName="registry-server"
Feb 16 23:41:16 crc kubenswrapper[4865]: I0216 23:41:16.927965 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="14681c95-5172-4788-b54a-f207c2beb1e9" containerName="registry-server"
Feb 16 23:41:16 crc kubenswrapper[4865]: E0216 23:41:16.927981 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74022cbf-c009-4efd-bc26-fb7b370f0dca" containerName="registry-server"
Feb 16 23:41:16 crc kubenswrapper[4865]: I0216 23:41:16.927990 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="74022cbf-c009-4efd-bc26-fb7b370f0dca" containerName="registry-server"
Feb 16 23:41:16 crc kubenswrapper[4865]: E0216 23:41:16.928005 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2532d65-e9f2-4448-9b52-6927d5013b85" containerName="extract-utilities"
Feb 16 23:41:16 crc kubenswrapper[4865]: I0216 23:41:16.928015 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2532d65-e9f2-4448-9b52-6927d5013b85" containerName="extract-utilities"
Feb 16 23:41:16 crc kubenswrapper[4865]: E0216 23:41:16.928034 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faeb82b7-bf9e-4cab-8d7f-9740b68e0826" containerName="extract-content"
Feb 16 23:41:16 crc kubenswrapper[4865]: I0216 23:41:16.928044 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="faeb82b7-bf9e-4cab-8d7f-9740b68e0826" containerName="extract-content"
Feb 16 23:41:16 crc kubenswrapper[4865]: E0216 23:41:16.928062 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74022cbf-c009-4efd-bc26-fb7b370f0dca" containerName="extract-content"
Feb 16 23:41:16 crc kubenswrapper[4865]: I0216 23:41:16.928071 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="74022cbf-c009-4efd-bc26-fb7b370f0dca" containerName="extract-content"
Feb 16 23:41:16 crc kubenswrapper[4865]: E0216 23:41:16.928085 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc785498-c658-47ed-8329-0e8c81c771be" containerName="tempest-tests-tempest-tests-runner"
Feb 16 23:41:16 crc kubenswrapper[4865]: I0216 23:41:16.928094 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc785498-c658-47ed-8329-0e8c81c771be" containerName="tempest-tests-tempest-tests-runner"
Feb 16 23:41:16 crc kubenswrapper[4865]: E0216 23:41:16.928123 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2532d65-e9f2-4448-9b52-6927d5013b85" containerName="extract-content"
Feb 16 23:41:16 crc kubenswrapper[4865]: I0216 23:41:16.928136 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2532d65-e9f2-4448-9b52-6927d5013b85" containerName="extract-content"
Feb 16 23:41:16 crc kubenswrapper[4865]: E0216 23:41:16.928151 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2532d65-e9f2-4448-9b52-6927d5013b85" containerName="registry-server"
Feb 16 23:41:16 crc kubenswrapper[4865]: I0216 23:41:16.928160 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2532d65-e9f2-4448-9b52-6927d5013b85" containerName="registry-server"
Feb 16 23:41:16 crc kubenswrapper[4865]: I0216 23:41:16.928478 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="faeb82b7-bf9e-4cab-8d7f-9740b68e0826" containerName="registry-server"
Feb 16 23:41:16 crc kubenswrapper[4865]: I0216 23:41:16.928502 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="74022cbf-c009-4efd-bc26-fb7b370f0dca" containerName="registry-server"
Feb 16 23:41:16 crc kubenswrapper[4865]: I0216 23:41:16.928522 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2532d65-e9f2-4448-9b52-6927d5013b85" containerName="registry-server"
Feb 16 23:41:16 crc kubenswrapper[4865]: I0216 23:41:16.928557 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc785498-c658-47ed-8329-0e8c81c771be" containerName="tempest-tests-tempest-tests-runner"
Feb 16 23:41:16 crc kubenswrapper[4865]: I0216 23:41:16.928577 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="14681c95-5172-4788-b54a-f207c2beb1e9" containerName="registry-server"
Feb 16 23:41:16 crc kubenswrapper[4865]: I0216 23:41:16.929508 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Feb 16 23:41:16 crc kubenswrapper[4865]: I0216 23:41:16.932131 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-9qvcb"
Feb 16 23:41:16 crc kubenswrapper[4865]: I0216 23:41:16.938923 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Feb 16 23:41:17 crc kubenswrapper[4865]: I0216 23:41:17.089950 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b3290302-9cc6-4e19-8492-1179e4163169\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Feb 16 23:41:17 crc kubenswrapper[4865]: I0216 23:41:17.090038 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rzs6\" (UniqueName: \"kubernetes.io/projected/b3290302-9cc6-4e19-8492-1179e4163169-kube-api-access-8rzs6\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b3290302-9cc6-4e19-8492-1179e4163169\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Feb 16 23:41:17 crc kubenswrapper[4865]: I0216 23:41:17.192125 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b3290302-9cc6-4e19-8492-1179e4163169\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Feb 16 23:41:17 crc kubenswrapper[4865]: I0216 23:41:17.192413 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rzs6\" (UniqueName: \"kubernetes.io/projected/b3290302-9cc6-4e19-8492-1179e4163169-kube-api-access-8rzs6\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b3290302-9cc6-4e19-8492-1179e4163169\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Feb 16 23:41:17 crc kubenswrapper[4865]: I0216 23:41:17.192725 4865 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b3290302-9cc6-4e19-8492-1179e4163169\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Feb 16 23:41:17 crc kubenswrapper[4865]: I0216 23:41:17.234953 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rzs6\" (UniqueName: \"kubernetes.io/projected/b3290302-9cc6-4e19-8492-1179e4163169-kube-api-access-8rzs6\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b3290302-9cc6-4e19-8492-1179e4163169\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Feb 16 23:41:17 crc kubenswrapper[4865]: I0216 23:41:17.246573 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b3290302-9cc6-4e19-8492-1179e4163169\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Feb 16 23:41:17 crc kubenswrapper[4865]: I0216 23:41:17.275840 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Feb 16 23:41:17 crc kubenswrapper[4865]: I0216 23:41:17.780268 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Feb 16 23:41:17 crc kubenswrapper[4865]: I0216 23:41:17.891873 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"b3290302-9cc6-4e19-8492-1179e4163169","Type":"ContainerStarted","Data":"8992904c6aff5265baa033bd56191d8b0ff5efd8e9a2066fd7d704cae7aed616"}
Feb 16 23:41:19 crc kubenswrapper[4865]: I0216 23:41:19.922693 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"b3290302-9cc6-4e19-8492-1179e4163169","Type":"ContainerStarted","Data":"64b7c6e45a6b0eee05c4d0bd809b6fa56f7ee3d8b22446a11428ab2f479bcaac"}
Feb 16 23:41:19 crc kubenswrapper[4865]: I0216 23:41:19.938890 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=3.02922571 podStartE2EDuration="3.938866709s" podCreationTimestamp="2026-02-16 23:41:16 +0000 UTC" firstStartedPulling="2026-02-16 23:41:17.780416389 +0000 UTC m=+3318.104123360" lastFinishedPulling="2026-02-16 23:41:18.690057398 +0000 UTC m=+3319.013764359" observedRunningTime="2026-02-16 23:41:19.93748095 +0000 UTC m=+3320.261187911" watchObservedRunningTime="2026-02-16 23:41:19.938866709 +0000 UTC m=+3320.262573680"
Feb 16 23:41:40 crc kubenswrapper[4865]: I0216 23:41:40.256429 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-qzbxn/must-gather-kgv9w"]
Feb 16 23:41:40 crc kubenswrapper[4865]: I0216 23:41:40.258515 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qzbxn/must-gather-kgv9w"
Feb 16 23:41:40 crc kubenswrapper[4865]: I0216 23:41:40.261936 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-qzbxn"/"kube-root-ca.crt"
Feb 16 23:41:40 crc kubenswrapper[4865]: I0216 23:41:40.263098 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-qzbxn"/"openshift-service-ca.crt"
Feb 16 23:41:40 crc kubenswrapper[4865]: I0216 23:41:40.286199 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-qzbxn/must-gather-kgv9w"]
Feb 16 23:41:40 crc kubenswrapper[4865]: I0216 23:41:40.396237 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ad318a68-8b47-46a1-bbaa-f6d504c0d7e8-must-gather-output\") pod \"must-gather-kgv9w\" (UID: \"ad318a68-8b47-46a1-bbaa-f6d504c0d7e8\") " pod="openshift-must-gather-qzbxn/must-gather-kgv9w"
Feb 16 23:41:40 crc kubenswrapper[4865]: I0216 23:41:40.396339 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqktb\" (UniqueName: \"kubernetes.io/projected/ad318a68-8b47-46a1-bbaa-f6d504c0d7e8-kube-api-access-lqktb\") pod \"must-gather-kgv9w\" (UID: \"ad318a68-8b47-46a1-bbaa-f6d504c0d7e8\") " pod="openshift-must-gather-qzbxn/must-gather-kgv9w"
Feb 16 23:41:40 crc kubenswrapper[4865]: I0216 23:41:40.499181 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ad318a68-8b47-46a1-bbaa-f6d504c0d7e8-must-gather-output\") pod \"must-gather-kgv9w\" (UID: \"ad318a68-8b47-46a1-bbaa-f6d504c0d7e8\") " pod="openshift-must-gather-qzbxn/must-gather-kgv9w"
Feb 16 23:41:40 crc kubenswrapper[4865]: I0216 23:41:40.499259 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqktb\" (UniqueName: \"kubernetes.io/projected/ad318a68-8b47-46a1-bbaa-f6d504c0d7e8-kube-api-access-lqktb\") pod \"must-gather-kgv9w\" (UID: \"ad318a68-8b47-46a1-bbaa-f6d504c0d7e8\") " pod="openshift-must-gather-qzbxn/must-gather-kgv9w"
Feb 16 23:41:40 crc kubenswrapper[4865]: I0216 23:41:40.500320 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ad318a68-8b47-46a1-bbaa-f6d504c0d7e8-must-gather-output\") pod \"must-gather-kgv9w\" (UID: \"ad318a68-8b47-46a1-bbaa-f6d504c0d7e8\") " pod="openshift-must-gather-qzbxn/must-gather-kgv9w"
Feb 16 23:41:40 crc kubenswrapper[4865]: I0216 23:41:40.523909 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqktb\" (UniqueName: \"kubernetes.io/projected/ad318a68-8b47-46a1-bbaa-f6d504c0d7e8-kube-api-access-lqktb\") pod \"must-gather-kgv9w\" (UID: \"ad318a68-8b47-46a1-bbaa-f6d504c0d7e8\") " pod="openshift-must-gather-qzbxn/must-gather-kgv9w"
Feb 16 23:41:40 crc kubenswrapper[4865]: I0216 23:41:40.579429 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qzbxn/must-gather-kgv9w"
Feb 16 23:41:41 crc kubenswrapper[4865]: I0216 23:41:41.061426 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-qzbxn/must-gather-kgv9w"]
Feb 16 23:41:41 crc kubenswrapper[4865]: I0216 23:41:41.138428 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qzbxn/must-gather-kgv9w" event={"ID":"ad318a68-8b47-46a1-bbaa-f6d504c0d7e8","Type":"ContainerStarted","Data":"d135a204fff8326e7db5adf5a5fc38e53e4dad7e24fa0e23309ca35c49b4bf5e"}
Feb 16 23:41:48 crc kubenswrapper[4865]: I0216 23:41:48.205592 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qzbxn/must-gather-kgv9w" event={"ID":"ad318a68-8b47-46a1-bbaa-f6d504c0d7e8","Type":"ContainerStarted","Data":"3faa9916aee064950152aad5d257cc34d4ba5e82a43c73263f15b6cbfdd5c9df"}
Feb 16 23:41:48 crc kubenswrapper[4865]: I0216 23:41:48.206208 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qzbxn/must-gather-kgv9w" event={"ID":"ad318a68-8b47-46a1-bbaa-f6d504c0d7e8","Type":"ContainerStarted","Data":"c82889c11a70ba3aa346422a85b0e416e871ba8eb4a8ae8cafd68b4c36d78737"}
Feb 16 23:41:48 crc kubenswrapper[4865]: I0216 23:41:48.243139 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-qzbxn/must-gather-kgv9w" podStartSLOduration=2.022880829 podStartE2EDuration="8.243123312s" podCreationTimestamp="2026-02-16 23:41:40 +0000 UTC" firstStartedPulling="2026-02-16 23:41:41.062817531 +0000 UTC m=+3341.386524492" lastFinishedPulling="2026-02-16 23:41:47.283060004 +0000 UTC m=+3347.606766975" observedRunningTime="2026-02-16 23:41:48.242843994 +0000 UTC m=+3348.566550965" watchObservedRunningTime="2026-02-16 23:41:48.243123312 +0000 UTC m=+3348.566830273"
Feb 16 23:41:51 crc kubenswrapper[4865]: I0216 23:41:51.350004 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-qzbxn/crc-debug-nmlr9"]
Feb 16 23:41:51 crc kubenswrapper[4865]: I0216 23:41:51.351912 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qzbxn/crc-debug-nmlr9"
Feb 16 23:41:51 crc kubenswrapper[4865]: I0216 23:41:51.354749 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-qzbxn"/"default-dockercfg-97vhc"
Feb 16 23:41:51 crc kubenswrapper[4865]: I0216 23:41:51.442885 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/49797336-0826-45f0-91e1-6d2b05f9618c-host\") pod \"crc-debug-nmlr9\" (UID: \"49797336-0826-45f0-91e1-6d2b05f9618c\") " pod="openshift-must-gather-qzbxn/crc-debug-nmlr9"
Feb 16 23:41:51 crc kubenswrapper[4865]: I0216 23:41:51.443242 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqp45\" (UniqueName: \"kubernetes.io/projected/49797336-0826-45f0-91e1-6d2b05f9618c-kube-api-access-mqp45\") pod \"crc-debug-nmlr9\" (UID: \"49797336-0826-45f0-91e1-6d2b05f9618c\") " pod="openshift-must-gather-qzbxn/crc-debug-nmlr9"
Feb 16 23:41:51 crc kubenswrapper[4865]: I0216 23:41:51.545027 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/49797336-0826-45f0-91e1-6d2b05f9618c-host\") pod \"crc-debug-nmlr9\" (UID: \"49797336-0826-45f0-91e1-6d2b05f9618c\") " pod="openshift-must-gather-qzbxn/crc-debug-nmlr9"
Feb 16 23:41:51 crc kubenswrapper[4865]: I0216 23:41:51.545163 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/49797336-0826-45f0-91e1-6d2b05f9618c-host\") pod \"crc-debug-nmlr9\" (UID: \"49797336-0826-45f0-91e1-6d2b05f9618c\") " pod="openshift-must-gather-qzbxn/crc-debug-nmlr9"
Feb 16 23:41:51 crc kubenswrapper[4865]: I0216 23:41:51.545676 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqp45\" (UniqueName: \"kubernetes.io/projected/49797336-0826-45f0-91e1-6d2b05f9618c-kube-api-access-mqp45\") pod \"crc-debug-nmlr9\" (UID: \"49797336-0826-45f0-91e1-6d2b05f9618c\") " pod="openshift-must-gather-qzbxn/crc-debug-nmlr9"
Feb 16 23:41:51 crc kubenswrapper[4865]: I0216 23:41:51.571873 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqp45\" (UniqueName: \"kubernetes.io/projected/49797336-0826-45f0-91e1-6d2b05f9618c-kube-api-access-mqp45\") pod \"crc-debug-nmlr9\" (UID: \"49797336-0826-45f0-91e1-6d2b05f9618c\") " pod="openshift-must-gather-qzbxn/crc-debug-nmlr9"
Feb 16 23:41:51 crc kubenswrapper[4865]: I0216 23:41:51.677479 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qzbxn/crc-debug-nmlr9"
Feb 16 23:41:51 crc kubenswrapper[4865]: W0216 23:41:51.714423 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod49797336_0826_45f0_91e1_6d2b05f9618c.slice/crio-ac917481f2dd48f554b744176857c180c4300e615a8a2523ecad9aede73ddc6e WatchSource:0}: Error finding container ac917481f2dd48f554b744176857c180c4300e615a8a2523ecad9aede73ddc6e: Status 404 returned error can't find the container with id ac917481f2dd48f554b744176857c180c4300e615a8a2523ecad9aede73ddc6e
Feb 16 23:41:52 crc kubenswrapper[4865]: I0216 23:41:52.248938 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qzbxn/crc-debug-nmlr9" event={"ID":"49797336-0826-45f0-91e1-6d2b05f9618c","Type":"ContainerStarted","Data":"ac917481f2dd48f554b744176857c180c4300e615a8a2523ecad9aede73ddc6e"}
Feb 16 23:42:05 crc kubenswrapper[4865]: I0216 23:42:05.374413 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qzbxn/crc-debug-nmlr9" event={"ID":"49797336-0826-45f0-91e1-6d2b05f9618c","Type":"ContainerStarted","Data":"1fcda18e1d8691cb5d0c02e5577dc4a934006dcb3108e0dffdff8328e76bc90c"}
Feb 16 23:42:05 crc kubenswrapper[4865]: I0216 23:42:05.393937 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-qzbxn/crc-debug-nmlr9" podStartSLOduration=1.816878387 podStartE2EDuration="14.393917334s" podCreationTimestamp="2026-02-16 23:41:51 +0000 UTC" firstStartedPulling="2026-02-16 23:41:51.717561592 +0000 UTC m=+3352.041268553" lastFinishedPulling="2026-02-16 23:42:04.294600499 +0000 UTC m=+3364.618307500" observedRunningTime="2026-02-16 23:42:05.385109056 +0000 UTC m=+3365.708816017" watchObservedRunningTime="2026-02-16 23:42:05.393917334 +0000 UTC m=+3365.717624315"
Feb 16 23:42:48 crc kubenswrapper[4865]: I0216 23:42:48.770986 4865 generic.go:334] "Generic (PLEG): container finished" podID="49797336-0826-45f0-91e1-6d2b05f9618c" containerID="1fcda18e1d8691cb5d0c02e5577dc4a934006dcb3108e0dffdff8328e76bc90c" exitCode=0
Feb 16 23:42:48 crc kubenswrapper[4865]: I0216 23:42:48.771129 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qzbxn/crc-debug-nmlr9" event={"ID":"49797336-0826-45f0-91e1-6d2b05f9618c","Type":"ContainerDied","Data":"1fcda18e1d8691cb5d0c02e5577dc4a934006dcb3108e0dffdff8328e76bc90c"}
Feb 16 23:42:49 crc kubenswrapper[4865]: I0216 23:42:49.947520 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qzbxn/crc-debug-nmlr9"
Feb 16 23:42:49 crc kubenswrapper[4865]: I0216 23:42:49.998549 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-qzbxn/crc-debug-nmlr9"]
Feb 16 23:42:50 crc kubenswrapper[4865]: I0216 23:42:50.014168 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-qzbxn/crc-debug-nmlr9"]
Feb 16 23:42:50 crc kubenswrapper[4865]: I0216 23:42:50.110694 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqp45\" (UniqueName: \"kubernetes.io/projected/49797336-0826-45f0-91e1-6d2b05f9618c-kube-api-access-mqp45\") pod \"49797336-0826-45f0-91e1-6d2b05f9618c\" (UID: \"49797336-0826-45f0-91e1-6d2b05f9618c\") "
Feb 16 23:42:50 crc kubenswrapper[4865]: I0216 23:42:50.111299 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/49797336-0826-45f0-91e1-6d2b05f9618c-host\") pod \"49797336-0826-45f0-91e1-6d2b05f9618c\" (UID: \"49797336-0826-45f0-91e1-6d2b05f9618c\") "
Feb 16 23:42:50 crc kubenswrapper[4865]: I0216 23:42:50.111511 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49797336-0826-45f0-91e1-6d2b05f9618c-host" (OuterVolumeSpecName: "host") pod "49797336-0826-45f0-91e1-6d2b05f9618c" (UID: "49797336-0826-45f0-91e1-6d2b05f9618c"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 23:42:50 crc kubenswrapper[4865]: I0216 23:42:50.113028 4865 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/49797336-0826-45f0-91e1-6d2b05f9618c-host\") on node \"crc\" DevicePath \"\""
Feb 16 23:42:50 crc kubenswrapper[4865]: I0216 23:42:50.121401 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49797336-0826-45f0-91e1-6d2b05f9618c-kube-api-access-mqp45" (OuterVolumeSpecName: "kube-api-access-mqp45") pod "49797336-0826-45f0-91e1-6d2b05f9618c" (UID: "49797336-0826-45f0-91e1-6d2b05f9618c"). InnerVolumeSpecName "kube-api-access-mqp45". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 23:42:50 crc kubenswrapper[4865]: I0216 23:42:50.214480 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqp45\" (UniqueName: \"kubernetes.io/projected/49797336-0826-45f0-91e1-6d2b05f9618c-kube-api-access-mqp45\") on node \"crc\" DevicePath \"\""
Feb 16 23:42:50 crc kubenswrapper[4865]: I0216 23:42:50.438794 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49797336-0826-45f0-91e1-6d2b05f9618c" path="/var/lib/kubelet/pods/49797336-0826-45f0-91e1-6d2b05f9618c/volumes"
Feb 16 23:42:50 crc kubenswrapper[4865]: I0216 23:42:50.811809 4865 scope.go:117] "RemoveContainer" containerID="1fcda18e1d8691cb5d0c02e5577dc4a934006dcb3108e0dffdff8328e76bc90c"
Feb 16 23:42:50 crc kubenswrapper[4865]: I0216 23:42:50.811912 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qzbxn/crc-debug-nmlr9"
Feb 16 23:42:51 crc kubenswrapper[4865]: I0216 23:42:51.243370 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-qzbxn/crc-debug-knbhs"]
Feb 16 23:42:51 crc kubenswrapper[4865]: E0216 23:42:51.244738 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49797336-0826-45f0-91e1-6d2b05f9618c" containerName="container-00"
Feb 16 23:42:51 crc kubenswrapper[4865]: I0216 23:42:51.244779 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="49797336-0826-45f0-91e1-6d2b05f9618c" containerName="container-00"
Feb 16 23:42:51 crc kubenswrapper[4865]: I0216 23:42:51.245501 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="49797336-0826-45f0-91e1-6d2b05f9618c" containerName="container-00"
Feb 16 23:42:51 crc kubenswrapper[4865]: I0216 23:42:51.247121 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qzbxn/crc-debug-knbhs"
Feb 16 23:42:51 crc kubenswrapper[4865]: I0216 23:42:51.251074 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-qzbxn"/"default-dockercfg-97vhc"
Feb 16 23:42:51 crc kubenswrapper[4865]: I0216 23:42:51.346494 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/caba3328-93dc-4751-9888-661a8083334c-host\") pod \"crc-debug-knbhs\" (UID: \"caba3328-93dc-4751-9888-661a8083334c\") " pod="openshift-must-gather-qzbxn/crc-debug-knbhs"
Feb 16 23:42:51 crc kubenswrapper[4865]: I0216 23:42:51.346786 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gzzr\" (UniqueName: \"kubernetes.io/projected/caba3328-93dc-4751-9888-661a8083334c-kube-api-access-4gzzr\") pod \"crc-debug-knbhs\" (UID: \"caba3328-93dc-4751-9888-661a8083334c\") " pod="openshift-must-gather-qzbxn/crc-debug-knbhs"
Feb 16 23:42:51 crc kubenswrapper[4865]: I0216 23:42:51.449237 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gzzr\" (UniqueName: \"kubernetes.io/projected/caba3328-93dc-4751-9888-661a8083334c-kube-api-access-4gzzr\") pod \"crc-debug-knbhs\" (UID: \"caba3328-93dc-4751-9888-661a8083334c\") " pod="openshift-must-gather-qzbxn/crc-debug-knbhs"
Feb 16 23:42:51 crc kubenswrapper[4865]: I0216 23:42:51.449522 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/caba3328-93dc-4751-9888-661a8083334c-host\") pod \"crc-debug-knbhs\" (UID: \"caba3328-93dc-4751-9888-661a8083334c\") " pod="openshift-must-gather-qzbxn/crc-debug-knbhs"
Feb 16 23:42:51 crc kubenswrapper[4865]: I0216 23:42:51.449755 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/caba3328-93dc-4751-9888-661a8083334c-host\") pod \"crc-debug-knbhs\" (UID: \"caba3328-93dc-4751-9888-661a8083334c\") " pod="openshift-must-gather-qzbxn/crc-debug-knbhs"
Feb 16 23:42:51 crc kubenswrapper[4865]: I0216 23:42:51.491263 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gzzr\" (UniqueName: \"kubernetes.io/projected/caba3328-93dc-4751-9888-661a8083334c-kube-api-access-4gzzr\") pod \"crc-debug-knbhs\" (UID: \"caba3328-93dc-4751-9888-661a8083334c\") " pod="openshift-must-gather-qzbxn/crc-debug-knbhs"
Feb 16 23:42:51 crc kubenswrapper[4865]: I0216 23:42:51.572998 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qzbxn/crc-debug-knbhs"
Feb 16 23:42:51 crc kubenswrapper[4865]: I0216 23:42:51.823729 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qzbxn/crc-debug-knbhs" event={"ID":"caba3328-93dc-4751-9888-661a8083334c","Type":"ContainerStarted","Data":"a1446058fdfb7b287b37478c0d25e7302e95a09a1a696e7437656025b75eea61"}
Feb 16 23:42:52 crc kubenswrapper[4865]: I0216 23:42:52.843242 4865 generic.go:334] "Generic (PLEG): container finished" podID="caba3328-93dc-4751-9888-661a8083334c" containerID="51b83ea31ed33c900028532413816fa6934a01eccec6558f87f3d5956d6206da" exitCode=0
Feb 16 23:42:52 crc kubenswrapper[4865]: I0216 23:42:52.843353 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qzbxn/crc-debug-knbhs" event={"ID":"caba3328-93dc-4751-9888-661a8083334c","Type":"ContainerDied","Data":"51b83ea31ed33c900028532413816fa6934a01eccec6558f87f3d5956d6206da"}
Feb 16 23:42:53 crc kubenswrapper[4865]: I0216 23:42:53.412424 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-qzbxn/crc-debug-knbhs"]
Feb 16 23:42:53 crc kubenswrapper[4865]: I0216 23:42:53.419957 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-qzbxn/crc-debug-knbhs"]
Feb 16 23:42:53 crc kubenswrapper[4865]: I0216 23:42:53.982533 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qzbxn/crc-debug-knbhs"
Feb 16 23:42:54 crc kubenswrapper[4865]: I0216 23:42:54.109216 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gzzr\" (UniqueName: \"kubernetes.io/projected/caba3328-93dc-4751-9888-661a8083334c-kube-api-access-4gzzr\") pod \"caba3328-93dc-4751-9888-661a8083334c\" (UID: \"caba3328-93dc-4751-9888-661a8083334c\") "
Feb 16 23:42:54 crc kubenswrapper[4865]: I0216 23:42:54.109544 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/caba3328-93dc-4751-9888-661a8083334c-host\") pod \"caba3328-93dc-4751-9888-661a8083334c\" (UID: \"caba3328-93dc-4751-9888-661a8083334c\") "
Feb 16 23:42:54 crc kubenswrapper[4865]: I0216 23:42:54.109725 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/caba3328-93dc-4751-9888-661a8083334c-host" (OuterVolumeSpecName: "host") pod "caba3328-93dc-4751-9888-661a8083334c" (UID: "caba3328-93dc-4751-9888-661a8083334c"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 23:42:54 crc kubenswrapper[4865]: I0216 23:42:54.110480 4865 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/caba3328-93dc-4751-9888-661a8083334c-host\") on node \"crc\" DevicePath \"\""
Feb 16 23:42:54 crc kubenswrapper[4865]: I0216 23:42:54.120390 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/caba3328-93dc-4751-9888-661a8083334c-kube-api-access-4gzzr" (OuterVolumeSpecName: "kube-api-access-4gzzr") pod "caba3328-93dc-4751-9888-661a8083334c" (UID: "caba3328-93dc-4751-9888-661a8083334c"). InnerVolumeSpecName "kube-api-access-4gzzr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 23:42:54 crc kubenswrapper[4865]: I0216 23:42:54.212217 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gzzr\" (UniqueName: \"kubernetes.io/projected/caba3328-93dc-4751-9888-661a8083334c-kube-api-access-4gzzr\") on node \"crc\" DevicePath \"\""
Feb 16 23:42:54 crc kubenswrapper[4865]: I0216 23:42:54.448140 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="caba3328-93dc-4751-9888-661a8083334c" path="/var/lib/kubelet/pods/caba3328-93dc-4751-9888-661a8083334c/volumes"
Feb 16 23:42:54 crc kubenswrapper[4865]: I0216 23:42:54.661453 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-qzbxn/crc-debug-xg4vv"]
Feb 16 23:42:54 crc kubenswrapper[4865]: E0216 23:42:54.662182 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="caba3328-93dc-4751-9888-661a8083334c" containerName="container-00"
Feb 16 23:42:54 crc kubenswrapper[4865]: I0216 23:42:54.662216 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="caba3328-93dc-4751-9888-661a8083334c" containerName="container-00"
Feb 16 23:42:54 crc kubenswrapper[4865]: I0216 23:42:54.662534 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="caba3328-93dc-4751-9888-661a8083334c" containerName="container-00"
Feb 16 23:42:54 crc kubenswrapper[4865]: I0216 23:42:54.663552 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qzbxn/crc-debug-xg4vv"
Feb 16 23:42:54 crc kubenswrapper[4865]: I0216 23:42:54.847573 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2tnj\" (UniqueName: \"kubernetes.io/projected/6dc308e9-8868-4f33-b3d9-05c6b213b112-kube-api-access-j2tnj\") pod \"crc-debug-xg4vv\" (UID: \"6dc308e9-8868-4f33-b3d9-05c6b213b112\") " pod="openshift-must-gather-qzbxn/crc-debug-xg4vv"
Feb 16 23:42:54 crc kubenswrapper[4865]: I0216 23:42:54.848487 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6dc308e9-8868-4f33-b3d9-05c6b213b112-host\") pod \"crc-debug-xg4vv\" (UID: \"6dc308e9-8868-4f33-b3d9-05c6b213b112\") " pod="openshift-must-gather-qzbxn/crc-debug-xg4vv"
Feb 16 23:42:54 crc kubenswrapper[4865]: I0216 23:42:54.873247 4865 scope.go:117] "RemoveContainer" containerID="51b83ea31ed33c900028532413816fa6934a01eccec6558f87f3d5956d6206da"
Feb 16 23:42:54 crc kubenswrapper[4865]: I0216 23:42:54.873363 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qzbxn/crc-debug-knbhs"
Feb 16 23:42:54 crc kubenswrapper[4865]: I0216 23:42:54.950745 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2tnj\" (UniqueName: \"kubernetes.io/projected/6dc308e9-8868-4f33-b3d9-05c6b213b112-kube-api-access-j2tnj\") pod \"crc-debug-xg4vv\" (UID: \"6dc308e9-8868-4f33-b3d9-05c6b213b112\") " pod="openshift-must-gather-qzbxn/crc-debug-xg4vv"
Feb 16 23:42:54 crc kubenswrapper[4865]: I0216 23:42:54.950859 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6dc308e9-8868-4f33-b3d9-05c6b213b112-host\") pod \"crc-debug-xg4vv\" (UID: \"6dc308e9-8868-4f33-b3d9-05c6b213b112\") " pod="openshift-must-gather-qzbxn/crc-debug-xg4vv"
Feb 16 23:42:54 crc kubenswrapper[4865]: I0216 23:42:54.951095 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6dc308e9-8868-4f33-b3d9-05c6b213b112-host\") pod \"crc-debug-xg4vv\" (UID: \"6dc308e9-8868-4f33-b3d9-05c6b213b112\") " pod="openshift-must-gather-qzbxn/crc-debug-xg4vv"
Feb 16 23:42:54 crc kubenswrapper[4865]: I0216 23:42:54.984505 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2tnj\" (UniqueName: \"kubernetes.io/projected/6dc308e9-8868-4f33-b3d9-05c6b213b112-kube-api-access-j2tnj\") pod \"crc-debug-xg4vv\" (UID: \"6dc308e9-8868-4f33-b3d9-05c6b213b112\") " pod="openshift-must-gather-qzbxn/crc-debug-xg4vv"
Feb 16 23:42:54 crc kubenswrapper[4865]: I0216 23:42:54.989457 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qzbxn/crc-debug-xg4vv"
Feb 16 23:42:55 crc kubenswrapper[4865]: I0216 23:42:55.900229 4865 generic.go:334] "Generic (PLEG): container finished" podID="6dc308e9-8868-4f33-b3d9-05c6b213b112" containerID="0c873ff7c9002e51ca77b69b97417a9d94cf8bfb1e9bab0739b040ae7cf7dfcd" exitCode=0
Feb 16 23:42:55 crc kubenswrapper[4865]: I0216 23:42:55.900308 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qzbxn/crc-debug-xg4vv" event={"ID":"6dc308e9-8868-4f33-b3d9-05c6b213b112","Type":"ContainerDied","Data":"0c873ff7c9002e51ca77b69b97417a9d94cf8bfb1e9bab0739b040ae7cf7dfcd"}
Feb 16 23:42:55 crc kubenswrapper[4865]: I0216 23:42:55.900699 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qzbxn/crc-debug-xg4vv" event={"ID":"6dc308e9-8868-4f33-b3d9-05c6b213b112","Type":"ContainerStarted","Data":"b091591112df5b8f87892b52b5f48a87826288a54932d8d1dc2103399bb6b3e9"}
Feb 16 23:42:55 crc kubenswrapper[4865]: I0216 23:42:55.972164 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-qzbxn/crc-debug-xg4vv"]
Feb 16 23:42:55 crc kubenswrapper[4865]: I0216 23:42:55.992068 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-qzbxn/crc-debug-xg4vv"]
Feb 16 23:42:57 crc kubenswrapper[4865]: I0216 23:42:57.013999 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qzbxn/crc-debug-xg4vv"
Feb 16 23:42:57 crc kubenswrapper[4865]: I0216 23:42:57.103981 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6dc308e9-8868-4f33-b3d9-05c6b213b112-host\") pod \"6dc308e9-8868-4f33-b3d9-05c6b213b112\" (UID: \"6dc308e9-8868-4f33-b3d9-05c6b213b112\") "
Feb 16 23:42:57 crc kubenswrapper[4865]: I0216 23:42:57.104102 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6dc308e9-8868-4f33-b3d9-05c6b213b112-host" (OuterVolumeSpecName: "host") pod "6dc308e9-8868-4f33-b3d9-05c6b213b112" (UID: "6dc308e9-8868-4f33-b3d9-05c6b213b112"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 23:42:57 crc kubenswrapper[4865]: I0216 23:42:57.104139 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2tnj\" (UniqueName: \"kubernetes.io/projected/6dc308e9-8868-4f33-b3d9-05c6b213b112-kube-api-access-j2tnj\") pod \"6dc308e9-8868-4f33-b3d9-05c6b213b112\" (UID: \"6dc308e9-8868-4f33-b3d9-05c6b213b112\") "
Feb 16 23:42:57 crc kubenswrapper[4865]: I0216 23:42:57.104991 4865 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6dc308e9-8868-4f33-b3d9-05c6b213b112-host\") on node \"crc\" DevicePath \"\""
Feb 16 23:42:57 crc kubenswrapper[4865]: I0216 23:42:57.110822 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6dc308e9-8868-4f33-b3d9-05c6b213b112-kube-api-access-j2tnj" (OuterVolumeSpecName: "kube-api-access-j2tnj") pod "6dc308e9-8868-4f33-b3d9-05c6b213b112" (UID: "6dc308e9-8868-4f33-b3d9-05c6b213b112"). InnerVolumeSpecName "kube-api-access-j2tnj".
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:42:57 crc kubenswrapper[4865]: I0216 23:42:57.207170 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2tnj\" (UniqueName: \"kubernetes.io/projected/6dc308e9-8868-4f33-b3d9-05c6b213b112-kube-api-access-j2tnj\") on node \"crc\" DevicePath \"\"" Feb 16 23:42:57 crc kubenswrapper[4865]: I0216 23:42:57.923794 4865 scope.go:117] "RemoveContainer" containerID="0c873ff7c9002e51ca77b69b97417a9d94cf8bfb1e9bab0739b040ae7cf7dfcd" Feb 16 23:42:57 crc kubenswrapper[4865]: I0216 23:42:57.923860 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qzbxn/crc-debug-xg4vv" Feb 16 23:42:58 crc kubenswrapper[4865]: I0216 23:42:58.430705 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6dc308e9-8868-4f33-b3d9-05c6b213b112" path="/var/lib/kubelet/pods/6dc308e9-8868-4f33-b3d9-05c6b213b112/volumes" Feb 16 23:43:12 crc kubenswrapper[4865]: I0216 23:43:12.511076 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-54bd7477c8-zrrzr_8c454ac8-1c92-42d1-a889-6f42e4d73f86/barbican-api/0.log" Feb 16 23:43:12 crc kubenswrapper[4865]: I0216 23:43:12.736199 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-54bd7477c8-zrrzr_8c454ac8-1c92-42d1-a889-6f42e4d73f86/barbican-api-log/0.log" Feb 16 23:43:12 crc kubenswrapper[4865]: I0216 23:43:12.748775 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6b6779c894-4z8tf_14d1a57c-7cda-4753-a6de-fe9a98f4fd02/barbican-keystone-listener/0.log" Feb 16 23:43:12 crc kubenswrapper[4865]: I0216 23:43:12.864308 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6b6779c894-4z8tf_14d1a57c-7cda-4753-a6de-fe9a98f4fd02/barbican-keystone-listener-log/0.log" Feb 16 23:43:13 crc kubenswrapper[4865]: I0216 23:43:13.018449 4865 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-58c998ff9-ghm8t_6633f123-ac1f-4a25-b20d-0c0eda648f92/barbican-worker/0.log" Feb 16 23:43:13 crc kubenswrapper[4865]: I0216 23:43:13.047554 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-58c998ff9-ghm8t_6633f123-ac1f-4a25-b20d-0c0eda648f92/barbican-worker-log/0.log" Feb 16 23:43:13 crc kubenswrapper[4865]: I0216 23:43:13.261542 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8e6fcbf6-3f21-4134-9ace-bbbe418e9599/ceilometer-central-agent/0.log" Feb 16 23:43:13 crc kubenswrapper[4865]: I0216 23:43:13.272565 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-js45z_24da9b19-2d45-4f18-a79e-bf378e4ee44d/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 23:43:13 crc kubenswrapper[4865]: I0216 23:43:13.321616 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8e6fcbf6-3f21-4134-9ace-bbbe418e9599/ceilometer-notification-agent/0.log" Feb 16 23:43:13 crc kubenswrapper[4865]: I0216 23:43:13.464433 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8e6fcbf6-3f21-4134-9ace-bbbe418e9599/proxy-httpd/0.log" Feb 16 23:43:13 crc kubenswrapper[4865]: I0216 23:43:13.472876 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8e6fcbf6-3f21-4134-9ace-bbbe418e9599/sg-core/0.log" Feb 16 23:43:13 crc kubenswrapper[4865]: I0216 23:43:13.584850 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_ce76a8fb-fb3b-4af8-a4aa-d8ae4e31a5c5/cinder-api/0.log" Feb 16 23:43:13 crc kubenswrapper[4865]: I0216 23:43:13.647303 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_ce76a8fb-fb3b-4af8-a4aa-d8ae4e31a5c5/cinder-api-log/0.log" Feb 16 23:43:13 crc kubenswrapper[4865]: I0216 
23:43:13.780550 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_04210f96-20a4-48af-b1cb-f7ea73adc9a3/cinder-scheduler/0.log" Feb 16 23:43:13 crc kubenswrapper[4865]: I0216 23:43:13.836839 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_04210f96-20a4-48af-b1cb-f7ea73adc9a3/probe/0.log" Feb 16 23:43:13 crc kubenswrapper[4865]: I0216 23:43:13.965200 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-p5tqh_4e39dd59-456f-42dd-bc53-254730e44297/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 23:43:14 crc kubenswrapper[4865]: I0216 23:43:14.131864 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-d4hv6_058417d9-13ea-48ba-8bf8-2cdf141c94b6/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 23:43:14 crc kubenswrapper[4865]: I0216 23:43:14.189528 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-s47vk_e06861ae-60fd-47ad-8c55-82641a24d552/init/0.log" Feb 16 23:43:14 crc kubenswrapper[4865]: I0216 23:43:14.441466 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-s47vk_e06861ae-60fd-47ad-8c55-82641a24d552/dnsmasq-dns/0.log" Feb 16 23:43:14 crc kubenswrapper[4865]: I0216 23:43:14.459802 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-s47vk_e06861ae-60fd-47ad-8c55-82641a24d552/init/0.log" Feb 16 23:43:14 crc kubenswrapper[4865]: I0216 23:43:14.514778 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-rwzxd_45600784-63ad-4273-ab6d-5732fc0988e6/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 23:43:14 crc kubenswrapper[4865]: I0216 23:43:14.652808 4865 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_glance-default-external-api-0_443145e3-8ed2-4863-bde1-9b932b22ef00/glance-httpd/0.log" Feb 16 23:43:14 crc kubenswrapper[4865]: I0216 23:43:14.714617 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_443145e3-8ed2-4863-bde1-9b932b22ef00/glance-log/0.log" Feb 16 23:43:14 crc kubenswrapper[4865]: I0216 23:43:14.848683 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_60070db1-5a47-4b70-b318-46f3745677c5/glance-httpd/0.log" Feb 16 23:43:14 crc kubenswrapper[4865]: I0216 23:43:14.918393 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_60070db1-5a47-4b70-b318-46f3745677c5/glance-log/0.log" Feb 16 23:43:15 crc kubenswrapper[4865]: I0216 23:43:15.248158 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-kgskk_9bd41f0a-9736-4ede-8d1f-5c39bda1db42/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 23:43:15 crc kubenswrapper[4865]: I0216 23:43:15.279038 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7ff854866d-9gv97_17473ac0-4cf6-4751-ac0e-fd73a9fc2c7a/horizon/0.log" Feb 16 23:43:15 crc kubenswrapper[4865]: I0216 23:43:15.499447 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7ff854866d-9gv97_17473ac0-4cf6-4751-ac0e-fd73a9fc2c7a/horizon-log/0.log" Feb 16 23:43:15 crc kubenswrapper[4865]: I0216 23:43:15.544423 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-blwr8_d3a477d8-8710-4da3-b229-8787e3787f46/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 23:43:15 crc kubenswrapper[4865]: I0216 23:43:15.663730 4865 patch_prober.go:28] interesting pod/machine-config-daemon-7sl6f container/machine-config-daemon namespace/openshift-machine-config-operator: 
Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 23:43:15 crc kubenswrapper[4865]: I0216 23:43:15.663783 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 23:43:15 crc kubenswrapper[4865]: I0216 23:43:15.766703 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-66c88cfbc7-mhfsh_cd912ee6-bda4-4859-a70d-3f53ca61ba60/keystone-api/0.log" Feb 16 23:43:15 crc kubenswrapper[4865]: I0216 23:43:15.784099 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_381c66d6-4d83-453d-bb97-35888127917f/kube-state-metrics/0.log" Feb 16 23:43:15 crc kubenswrapper[4865]: I0216 23:43:15.957385 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-vwzrl_de77d0a7-2fdd-48d9-a2ba-827deafc0437/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 23:43:16 crc kubenswrapper[4865]: I0216 23:43:16.291082 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-84f9dbdcc7-p5njv_1828fcf9-f296-46f5-a15d-7280fe715721/neutron-httpd/0.log" Feb 16 23:43:16 crc kubenswrapper[4865]: I0216 23:43:16.393169 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-84f9dbdcc7-p5njv_1828fcf9-f296-46f5-a15d-7280fe715721/neutron-api/0.log" Feb 16 23:43:16 crc kubenswrapper[4865]: I0216 23:43:16.535242 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-252l2_ea192f95-6e32-46e1-ac67-715417874376/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 
23:43:16 crc kubenswrapper[4865]: I0216 23:43:16.975233 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_cf4d46ef-86f3-450e-9b46-a0ee9085e51d/nova-cell0-conductor-conductor/0.log" Feb 16 23:43:17 crc kubenswrapper[4865]: I0216 23:43:17.042065 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_5a4b1542-e38f-4ebf-9ca9-028ced41d506/nova-api-log/0.log" Feb 16 23:43:17 crc kubenswrapper[4865]: I0216 23:43:17.130896 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_5a4b1542-e38f-4ebf-9ca9-028ced41d506/nova-api-api/0.log" Feb 16 23:43:17 crc kubenswrapper[4865]: I0216 23:43:17.239512 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_e94e6b7d-55e6-4b25-9663-6cdc0440681f/nova-cell1-conductor-conductor/0.log" Feb 16 23:43:17 crc kubenswrapper[4865]: I0216 23:43:17.296085 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_ae0a9df0-5b8e-4ea5-a4db-25ae964f56f6/nova-cell1-novncproxy-novncproxy/0.log" Feb 16 23:43:17 crc kubenswrapper[4865]: I0216 23:43:17.545180 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-wd8vd_14a00f0e-5a36-481b-a8ad-78032cfa0616/nova-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 23:43:17 crc kubenswrapper[4865]: I0216 23:43:17.633114 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_811614ef-6229-489e-8da4-e1d4b1a5d5fd/nova-metadata-log/0.log" Feb 16 23:43:18 crc kubenswrapper[4865]: I0216 23:43:18.022145 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_a0810695-85aa-432f-8a1d-f5bf69077393/nova-scheduler-scheduler/0.log" Feb 16 23:43:18 crc kubenswrapper[4865]: I0216 23:43:18.164418 4865 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-cell1-galera-0_59ff0541-9e7b-4f6e-8dbb-af16f656abeb/mysql-bootstrap/0.log" Feb 16 23:43:18 crc kubenswrapper[4865]: I0216 23:43:18.403676 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_59ff0541-9e7b-4f6e-8dbb-af16f656abeb/galera/0.log" Feb 16 23:43:18 crc kubenswrapper[4865]: I0216 23:43:18.405486 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_59ff0541-9e7b-4f6e-8dbb-af16f656abeb/mysql-bootstrap/0.log" Feb 16 23:43:18 crc kubenswrapper[4865]: I0216 23:43:18.633955 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_0ba121bd-0fd3-46b5-b719-f113e7afc99c/mysql-bootstrap/0.log" Feb 16 23:43:18 crc kubenswrapper[4865]: I0216 23:43:18.728589 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_811614ef-6229-489e-8da4-e1d4b1a5d5fd/nova-metadata-metadata/0.log" Feb 16 23:43:18 crc kubenswrapper[4865]: I0216 23:43:18.776675 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_0ba121bd-0fd3-46b5-b719-f113e7afc99c/mysql-bootstrap/0.log" Feb 16 23:43:18 crc kubenswrapper[4865]: I0216 23:43:18.845139 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_0ba121bd-0fd3-46b5-b719-f113e7afc99c/galera/0.log" Feb 16 23:43:18 crc kubenswrapper[4865]: I0216 23:43:18.947058 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_328cf5b9-9c5d-4cfa-ae62-1ab76d210788/openstackclient/0.log" Feb 16 23:43:19 crc kubenswrapper[4865]: I0216 23:43:19.084541 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-cllwp_0934d4bc-f8b7-4fbb-9309-20826e6aa578/openstack-network-exporter/0.log" Feb 16 23:43:19 crc kubenswrapper[4865]: I0216 23:43:19.187127 4865 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-ovs-vmd6x_395c2af4-48dc-44d3-bb74-ef2b3e024c62/ovsdb-server-init/0.log" Feb 16 23:43:19 crc kubenswrapper[4865]: I0216 23:43:19.463114 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-vmd6x_395c2af4-48dc-44d3-bb74-ef2b3e024c62/ovsdb-server-init/0.log" Feb 16 23:43:19 crc kubenswrapper[4865]: I0216 23:43:19.486202 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-vmd6x_395c2af4-48dc-44d3-bb74-ef2b3e024c62/ovs-vswitchd/0.log" Feb 16 23:43:19 crc kubenswrapper[4865]: I0216 23:43:19.501656 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-vmd6x_395c2af4-48dc-44d3-bb74-ef2b3e024c62/ovsdb-server/0.log" Feb 16 23:43:19 crc kubenswrapper[4865]: I0216 23:43:19.620916 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-plt5q_abf5edf2-8442-4aca-b35b-051b9f366b9a/ovn-controller/0.log" Feb 16 23:43:19 crc kubenswrapper[4865]: I0216 23:43:19.743334 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-cbbp2_1c70b630-7dee-4749-9903-9d0f2e3b9196/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 23:43:19 crc kubenswrapper[4865]: I0216 23:43:19.833728 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_485d0f59-bf5e-43d4-b35e-e4a40273a666/openstack-network-exporter/0.log" Feb 16 23:43:19 crc kubenswrapper[4865]: I0216 23:43:19.992972 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_485d0f59-bf5e-43d4-b35e-e4a40273a666/ovn-northd/0.log" Feb 16 23:43:20 crc kubenswrapper[4865]: I0216 23:43:20.017932 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_99094c44-3d04-4263-a6b7-efc49f5e0fa2/openstack-network-exporter/0.log" Feb 16 23:43:20 crc kubenswrapper[4865]: I0216 23:43:20.083590 4865 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_99094c44-3d04-4263-a6b7-efc49f5e0fa2/ovsdbserver-nb/0.log" Feb 16 23:43:20 crc kubenswrapper[4865]: I0216 23:43:20.233010 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_ae2d74d5-cebc-4243-a288-d6d901192de7/openstack-network-exporter/0.log" Feb 16 23:43:20 crc kubenswrapper[4865]: I0216 23:43:20.240123 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_ae2d74d5-cebc-4243-a288-d6d901192de7/ovsdbserver-sb/0.log" Feb 16 23:43:20 crc kubenswrapper[4865]: I0216 23:43:20.510061 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5695f8dc4-jj7h5_51d7f054-f0b9-43fc-b704-ac61bd427bb0/placement-api/0.log" Feb 16 23:43:20 crc kubenswrapper[4865]: I0216 23:43:20.585387 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5695f8dc4-jj7h5_51d7f054-f0b9-43fc-b704-ac61bd427bb0/placement-log/0.log" Feb 16 23:43:20 crc kubenswrapper[4865]: I0216 23:43:20.606568 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_6912835d-d862-4295-9a6c-67deb30cbfba/setup-container/0.log" Feb 16 23:43:20 crc kubenswrapper[4865]: I0216 23:43:20.831925 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_6912835d-d862-4295-9a6c-67deb30cbfba/rabbitmq/0.log" Feb 16 23:43:20 crc kubenswrapper[4865]: I0216 23:43:20.854166 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_6912835d-d862-4295-9a6c-67deb30cbfba/setup-container/0.log" Feb 16 23:43:20 crc kubenswrapper[4865]: I0216 23:43:20.905815 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_2e2b8953-a55e-40c8-974f-a76a1352fbfb/setup-container/0.log" Feb 16 23:43:21 crc kubenswrapper[4865]: I0216 23:43:21.078237 4865 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-server-0_2e2b8953-a55e-40c8-974f-a76a1352fbfb/setup-container/0.log" Feb 16 23:43:21 crc kubenswrapper[4865]: I0216 23:43:21.139029 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_2e2b8953-a55e-40c8-974f-a76a1352fbfb/rabbitmq/0.log" Feb 16 23:43:21 crc kubenswrapper[4865]: I0216 23:43:21.190382 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-s867f_c2f56f0d-1a38-4756-b13f-e961a66b7594/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 23:43:21 crc kubenswrapper[4865]: I0216 23:43:21.335704 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-6crtq_82582c93-5f30-417e-a5f1-62038c6f8000/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 23:43:21 crc kubenswrapper[4865]: I0216 23:43:21.406471 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-22ttt_1bbe0349-2def-4238-880b-5cd6ed9e0413/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 23:43:21 crc kubenswrapper[4865]: I0216 23:43:21.612846 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-lwkkt_86d1001f-6633-4b05-8a8f-cee820027d08/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 23:43:21 crc kubenswrapper[4865]: I0216 23:43:21.669269 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-wdfb7_15b21ee4-d297-4297-9752-c0642717510e/ssh-known-hosts-edpm-deployment/0.log" Feb 16 23:43:21 crc kubenswrapper[4865]: I0216 23:43:21.918078 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6b5cb7cc4c-8d58d_a7143f0f-06af-4d75-960a-2488e9b131bc/proxy-httpd/0.log" Feb 16 23:43:21 crc kubenswrapper[4865]: I0216 23:43:21.971690 4865 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_swift-proxy-6b5cb7cc4c-8d58d_a7143f0f-06af-4d75-960a-2488e9b131bc/proxy-server/0.log" Feb 16 23:43:22 crc kubenswrapper[4865]: I0216 23:43:22.112446 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-gq4bx_342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3/swift-ring-rebalance/0.log" Feb 16 23:43:22 crc kubenswrapper[4865]: I0216 23:43:22.154562 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_34486574-e35d-4674-a0b3-57d122050e66/account-auditor/0.log" Feb 16 23:43:22 crc kubenswrapper[4865]: I0216 23:43:22.172363 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_34486574-e35d-4674-a0b3-57d122050e66/account-reaper/0.log" Feb 16 23:43:22 crc kubenswrapper[4865]: I0216 23:43:22.398242 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_34486574-e35d-4674-a0b3-57d122050e66/container-auditor/0.log" Feb 16 23:43:22 crc kubenswrapper[4865]: I0216 23:43:22.411176 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_34486574-e35d-4674-a0b3-57d122050e66/account-server/0.log" Feb 16 23:43:22 crc kubenswrapper[4865]: I0216 23:43:22.421384 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_34486574-e35d-4674-a0b3-57d122050e66/account-replicator/0.log" Feb 16 23:43:22 crc kubenswrapper[4865]: I0216 23:43:22.528382 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_34486574-e35d-4674-a0b3-57d122050e66/container-replicator/0.log" Feb 16 23:43:22 crc kubenswrapper[4865]: I0216 23:43:22.591110 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_34486574-e35d-4674-a0b3-57d122050e66/container-server/0.log" Feb 16 23:43:22 crc kubenswrapper[4865]: I0216 23:43:22.633243 4865 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_34486574-e35d-4674-a0b3-57d122050e66/container-updater/0.log" Feb 16 23:43:22 crc kubenswrapper[4865]: I0216 23:43:22.645067 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_34486574-e35d-4674-a0b3-57d122050e66/object-auditor/0.log" Feb 16 23:43:22 crc kubenswrapper[4865]: I0216 23:43:22.789752 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_34486574-e35d-4674-a0b3-57d122050e66/object-expirer/0.log" Feb 16 23:43:22 crc kubenswrapper[4865]: I0216 23:43:22.855965 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_34486574-e35d-4674-a0b3-57d122050e66/object-replicator/0.log" Feb 16 23:43:22 crc kubenswrapper[4865]: I0216 23:43:22.897271 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_34486574-e35d-4674-a0b3-57d122050e66/object-server/0.log" Feb 16 23:43:22 crc kubenswrapper[4865]: I0216 23:43:22.925270 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_34486574-e35d-4674-a0b3-57d122050e66/object-updater/0.log" Feb 16 23:43:22 crc kubenswrapper[4865]: I0216 23:43:22.998901 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_34486574-e35d-4674-a0b3-57d122050e66/rsync/0.log" Feb 16 23:43:23 crc kubenswrapper[4865]: I0216 23:43:23.114921 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_34486574-e35d-4674-a0b3-57d122050e66/swift-recon-cron/0.log" Feb 16 23:43:23 crc kubenswrapper[4865]: I0216 23:43:23.251622 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-8dkwj_56a9e58a-8161-4d27-96d4-1459ec03b3ed/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 23:43:23 crc kubenswrapper[4865]: I0216 23:43:23.335254 4865 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_tempest-tests-tempest_dc785498-c658-47ed-8329-0e8c81c771be/tempest-tests-tempest-tests-runner/0.log" Feb 16 23:43:23 crc kubenswrapper[4865]: I0216 23:43:23.528085 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_b3290302-9cc6-4e19-8492-1179e4163169/test-operator-logs-container/0.log" Feb 16 23:43:23 crc kubenswrapper[4865]: I0216 23:43:23.588589 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-bplzx_9cb0e39e-0d5d-4758-a44e-06867bdf08da/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 23:43:30 crc kubenswrapper[4865]: I0216 23:43:30.863955 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_5568f4b1-9ca1-4de9-9355-ffc7b0281375/memcached/0.log" Feb 16 23:43:45 crc kubenswrapper[4865]: I0216 23:43:45.664893 4865 patch_prober.go:28] interesting pod/machine-config-daemon-7sl6f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 23:43:45 crc kubenswrapper[4865]: I0216 23:43:45.665430 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 23:43:50 crc kubenswrapper[4865]: I0216 23:43:50.921393 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4ckkmg_7cf141a0-2b74-4eb1-99e2-80774839ccd6/util/0.log" Feb 16 23:43:51 crc kubenswrapper[4865]: I0216 23:43:51.104645 4865 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack-operators_3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4ckkmg_7cf141a0-2b74-4eb1-99e2-80774839ccd6/pull/0.log" Feb 16 23:43:51 crc kubenswrapper[4865]: I0216 23:43:51.120550 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4ckkmg_7cf141a0-2b74-4eb1-99e2-80774839ccd6/pull/0.log" Feb 16 23:43:51 crc kubenswrapper[4865]: I0216 23:43:51.144795 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4ckkmg_7cf141a0-2b74-4eb1-99e2-80774839ccd6/util/0.log" Feb 16 23:43:51 crc kubenswrapper[4865]: I0216 23:43:51.307649 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4ckkmg_7cf141a0-2b74-4eb1-99e2-80774839ccd6/util/0.log" Feb 16 23:43:51 crc kubenswrapper[4865]: I0216 23:43:51.315809 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4ckkmg_7cf141a0-2b74-4eb1-99e2-80774839ccd6/extract/0.log" Feb 16 23:43:51 crc kubenswrapper[4865]: I0216 23:43:51.329081 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4ckkmg_7cf141a0-2b74-4eb1-99e2-80774839ccd6/pull/0.log" Feb 16 23:43:51 crc kubenswrapper[4865]: I0216 23:43:51.859032 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-5ntcr_829ee3ed-5827-46ee-8399-f0b82ffa4d1d/manager/0.log" Feb 16 23:43:52 crc kubenswrapper[4865]: I0216 23:43:52.235218 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987464f4-kvmr9_61ac3013-99f7-4aef-b85d-8675044accc6/manager/0.log" Feb 16 23:43:52 crc 
kubenswrapper[4865]: I0216 23:43:52.296196 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-jmmxl_f1b2a884-8e78-47ac-9c45-7861a81e02d4/manager/0.log" Feb 16 23:43:52 crc kubenswrapper[4865]: I0216 23:43:52.601660 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-vc4dc_73e02b9a-66d8-4fb4-bc3d-13610563b6e4/manager/0.log" Feb 16 23:43:53 crc kubenswrapper[4865]: I0216 23:43:53.258686 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-wl4zd_dc7842ab-52e5-4223-8b2a-ab09641bf297/manager/0.log" Feb 16 23:43:53 crc kubenswrapper[4865]: I0216 23:43:53.288223 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-mt6fh_812a9f63-a231-495c-9474-0c60929fabff/manager/0.log" Feb 16 23:43:53 crc kubenswrapper[4865]: I0216 23:43:53.299452 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5d946d989d-msxb8_3a4e76a0-aa8b-4ee8-b7a8-dc43a376c4ec/manager/0.log" Feb 16 23:43:53 crc kubenswrapper[4865]: I0216 23:43:53.606148 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-stf5r_68b414dd-a0c6-488a-b253-1a3f477cb7a8/manager/0.log" Feb 16 23:43:53 crc kubenswrapper[4865]: I0216 23:43:53.792158 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-54f6768c69-phdd6_dded450f-3a37-48b0-84fc-1de3c64c1954/manager/0.log" Feb 16 23:43:53 crc kubenswrapper[4865]: I0216 23:43:53.893697 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-hw4fs_a1759f72-1644-42e2-9b67-01478800870b/manager/0.log" Feb 16 
23:43:54 crc kubenswrapper[4865]: I0216 23:43:54.137113 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-64ddbf8bb-wk47p_f2e7b18d-0e13-4ef4-a4e2-d10b5f55763b/manager/0.log" Feb 16 23:43:54 crc kubenswrapper[4865]: I0216 23:43:54.476645 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-xzpc9_3be11752-93fd-4edc-b100-0bfd29f599e8/manager/0.log" Feb 16 23:43:54 crc kubenswrapper[4865]: I0216 23:43:54.704409 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9cts9lz_a9614d13-aca5-4ffa-9cc1-dd8767e11ac4/manager/0.log" Feb 16 23:43:55 crc kubenswrapper[4865]: I0216 23:43:55.196057 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-7d7c89f976-vxpzk_5c43d211-62d0-403c-90d5-00c0bfcfa692/operator/0.log" Feb 16 23:43:55 crc kubenswrapper[4865]: I0216 23:43:55.419631 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-ndhqn_4adfbea3-c2d3-45a2-8858-8a1f867ebf5b/registry-server/0.log" Feb 16 23:43:55 crc kubenswrapper[4865]: I0216 23:43:55.657219 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-d44cf6b75-946kc_60e0dd0a-0055-45ec-8a4c-f0c23cd214b6/manager/0.log" Feb 16 23:43:55 crc kubenswrapper[4865]: I0216 23:43:55.832189 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-9tnrt_196fc76c-2c5d-45ec-8106-4d0a3382d16e/manager/0.log" Feb 16 23:43:55 crc kubenswrapper[4865]: I0216 23:43:55.984640 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69f8888797-m8rmj_2b4f33b1-b5a3-4935-8036-deb97cfedfe7/manager/0.log" 
Feb 16 23:43:56 crc kubenswrapper[4865]: I0216 23:43:56.021651 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-zxt7q_f0d444ee-7bd9-40ed-ab3a-766aa716336c/operator/0.log" Feb 16 23:43:56 crc kubenswrapper[4865]: I0216 23:43:56.278684 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-s9vk2_5d77ae74-7238-4c9f-8ae1-33064d8824c2/manager/0.log" Feb 16 23:43:56 crc kubenswrapper[4865]: I0216 23:43:56.398014 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7f45b4ff68-dz8t2_da795bac-53b5-415b-9297-26e5502fceb8/manager/0.log" Feb 16 23:43:56 crc kubenswrapper[4865]: I0216 23:43:56.511712 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7866795846-29v4v_21f8cf30-0215-4501-af0f-ff1220d4252b/manager/0.log" Feb 16 23:43:56 crc kubenswrapper[4865]: I0216 23:43:56.718215 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5db88f68c-m62lh_1cfcc69c-1d21-4b1e-894d-d3ae72c39513/manager/0.log" Feb 16 23:43:56 crc kubenswrapper[4865]: I0216 23:43:56.829212 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-85988dbd5c-sb7sh_24704625-9cce-4f47-847c-ab4d95d3adb1/manager/0.log" Feb 16 23:43:58 crc kubenswrapper[4865]: I0216 23:43:58.921930 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-7nt98_a5025501-39c8-43ae-8b94-3a555517b1f7/manager/0.log" Feb 16 23:44:15 crc kubenswrapper[4865]: I0216 23:44:15.663952 4865 patch_prober.go:28] interesting pod/machine-config-daemon-7sl6f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 23:44:15 crc kubenswrapper[4865]: I0216 23:44:15.664995 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 23:44:15 crc kubenswrapper[4865]: I0216 23:44:15.665092 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" Feb 16 23:44:15 crc kubenswrapper[4865]: I0216 23:44:15.666453 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cafc9fd94d172ab87eed495bd76d317e286c005153ef40f1a74890d70b77675b"} pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 23:44:15 crc kubenswrapper[4865]: I0216 23:44:15.666574 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" containerName="machine-config-daemon" containerID="cri-o://cafc9fd94d172ab87eed495bd76d317e286c005153ef40f1a74890d70b77675b" gracePeriod=600 Feb 16 23:44:16 crc kubenswrapper[4865]: I0216 23:44:16.693903 4865 generic.go:334] "Generic (PLEG): container finished" podID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" containerID="cafc9fd94d172ab87eed495bd76d317e286c005153ef40f1a74890d70b77675b" exitCode=0 Feb 16 23:44:16 crc kubenswrapper[4865]: I0216 23:44:16.693945 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" 
event={"ID":"af5ee041-5763-4a28-9d12-7ba21bbb9dbc","Type":"ContainerDied","Data":"cafc9fd94d172ab87eed495bd76d317e286c005153ef40f1a74890d70b77675b"} Feb 16 23:44:16 crc kubenswrapper[4865]: I0216 23:44:16.694302 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" event={"ID":"af5ee041-5763-4a28-9d12-7ba21bbb9dbc","Type":"ContainerStarted","Data":"3383d89c28198e5a6982e852d6fe4d402704ec0637203631cb7c4abb6bef7591"} Feb 16 23:44:16 crc kubenswrapper[4865]: I0216 23:44:16.694334 4865 scope.go:117] "RemoveContainer" containerID="9595289612fbba036a906a5efffdc15200b748234a5d76519393359b8b0d5d06" Feb 16 23:44:17 crc kubenswrapper[4865]: I0216 23:44:17.812741 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-qcl4t_5cfba6b6-3d1e-49d9-902e-b3493e1ffc97/control-plane-machine-set-operator/0.log" Feb 16 23:44:17 crc kubenswrapper[4865]: I0216 23:44:17.994831 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-676jr_c4592f72-2b39-47bf-beed-e53bf3865b22/kube-rbac-proxy/0.log" Feb 16 23:44:18 crc kubenswrapper[4865]: I0216 23:44:18.029149 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-676jr_c4592f72-2b39-47bf-beed-e53bf3865b22/machine-api-operator/0.log" Feb 16 23:44:32 crc kubenswrapper[4865]: I0216 23:44:32.031253 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-8wctr_bde79990-dee1-4694-bf0c-f569702b84c6/cert-manager-controller/0.log" Feb 16 23:44:32 crc kubenswrapper[4865]: I0216 23:44:32.198517 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-nwktn_799f3815-d78f-449e-b798-63000e62d953/cert-manager-cainjector/0.log" Feb 16 23:44:32 crc kubenswrapper[4865]: I0216 23:44:32.310770 4865 
log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-pzb4b_bbc85b0c-aae5-4657-8c81-fed6b49e5d5d/cert-manager-webhook/0.log" Feb 16 23:44:46 crc kubenswrapper[4865]: I0216 23:44:46.436669 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-5q9z6_bee98b00-b363-4ff6-986b-33b5086b8453/nmstate-console-plugin/0.log" Feb 16 23:44:46 crc kubenswrapper[4865]: I0216 23:44:46.790183 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-4jnsb_1b2d9b3b-4c11-4bae-9930-68b45a15ba52/nmstate-metrics/0.log" Feb 16 23:44:46 crc kubenswrapper[4865]: I0216 23:44:46.796005 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-pmm2s_212784bf-c832-42e4-92c0-b1c81994982f/nmstate-handler/0.log" Feb 16 23:44:46 crc kubenswrapper[4865]: I0216 23:44:46.821301 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-4jnsb_1b2d9b3b-4c11-4bae-9930-68b45a15ba52/kube-rbac-proxy/0.log" Feb 16 23:44:46 crc kubenswrapper[4865]: I0216 23:44:46.956031 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-hj5j9_5c652e54-0a32-41f0-844b-4f00cdb36ec3/nmstate-operator/0.log" Feb 16 23:44:46 crc kubenswrapper[4865]: I0216 23:44:46.992976 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-7fpp6_1ab27e8f-8d04-461e-8726-1ca46394c9b6/nmstate-webhook/0.log" Feb 16 23:45:00 crc kubenswrapper[4865]: I0216 23:45:00.160999 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521425-nmx6c"] Feb 16 23:45:00 crc kubenswrapper[4865]: E0216 23:45:00.161960 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dc308e9-8868-4f33-b3d9-05c6b213b112" 
containerName="container-00" Feb 16 23:45:00 crc kubenswrapper[4865]: I0216 23:45:00.161973 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dc308e9-8868-4f33-b3d9-05c6b213b112" containerName="container-00" Feb 16 23:45:00 crc kubenswrapper[4865]: I0216 23:45:00.162179 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="6dc308e9-8868-4f33-b3d9-05c6b213b112" containerName="container-00" Feb 16 23:45:00 crc kubenswrapper[4865]: I0216 23:45:00.162781 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521425-nmx6c" Feb 16 23:45:00 crc kubenswrapper[4865]: I0216 23:45:00.165103 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 23:45:00 crc kubenswrapper[4865]: I0216 23:45:00.165658 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 23:45:00 crc kubenswrapper[4865]: I0216 23:45:00.178071 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521425-nmx6c"] Feb 16 23:45:00 crc kubenswrapper[4865]: I0216 23:45:00.302472 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/99314662-6510-4b1d-82a1-d5420f99ed65-config-volume\") pod \"collect-profiles-29521425-nmx6c\" (UID: \"99314662-6510-4b1d-82a1-d5420f99ed65\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521425-nmx6c" Feb 16 23:45:00 crc kubenswrapper[4865]: I0216 23:45:00.302895 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gz2m\" (UniqueName: \"kubernetes.io/projected/99314662-6510-4b1d-82a1-d5420f99ed65-kube-api-access-4gz2m\") pod \"collect-profiles-29521425-nmx6c\" 
(UID: \"99314662-6510-4b1d-82a1-d5420f99ed65\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521425-nmx6c" Feb 16 23:45:00 crc kubenswrapper[4865]: I0216 23:45:00.303054 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/99314662-6510-4b1d-82a1-d5420f99ed65-secret-volume\") pod \"collect-profiles-29521425-nmx6c\" (UID: \"99314662-6510-4b1d-82a1-d5420f99ed65\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521425-nmx6c" Feb 16 23:45:00 crc kubenswrapper[4865]: I0216 23:45:00.407402 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gz2m\" (UniqueName: \"kubernetes.io/projected/99314662-6510-4b1d-82a1-d5420f99ed65-kube-api-access-4gz2m\") pod \"collect-profiles-29521425-nmx6c\" (UID: \"99314662-6510-4b1d-82a1-d5420f99ed65\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521425-nmx6c" Feb 16 23:45:00 crc kubenswrapper[4865]: I0216 23:45:00.407498 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/99314662-6510-4b1d-82a1-d5420f99ed65-secret-volume\") pod \"collect-profiles-29521425-nmx6c\" (UID: \"99314662-6510-4b1d-82a1-d5420f99ed65\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521425-nmx6c" Feb 16 23:45:00 crc kubenswrapper[4865]: I0216 23:45:00.407543 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/99314662-6510-4b1d-82a1-d5420f99ed65-config-volume\") pod \"collect-profiles-29521425-nmx6c\" (UID: \"99314662-6510-4b1d-82a1-d5420f99ed65\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521425-nmx6c" Feb 16 23:45:00 crc kubenswrapper[4865]: I0216 23:45:00.410210 4865 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 23:45:00 crc kubenswrapper[4865]: I0216 23:45:00.419094 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/99314662-6510-4b1d-82a1-d5420f99ed65-config-volume\") pod \"collect-profiles-29521425-nmx6c\" (UID: \"99314662-6510-4b1d-82a1-d5420f99ed65\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521425-nmx6c" Feb 16 23:45:00 crc kubenswrapper[4865]: I0216 23:45:00.422443 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/99314662-6510-4b1d-82a1-d5420f99ed65-secret-volume\") pod \"collect-profiles-29521425-nmx6c\" (UID: \"99314662-6510-4b1d-82a1-d5420f99ed65\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521425-nmx6c" Feb 16 23:45:00 crc kubenswrapper[4865]: I0216 23:45:00.439116 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gz2m\" (UniqueName: \"kubernetes.io/projected/99314662-6510-4b1d-82a1-d5420f99ed65-kube-api-access-4gz2m\") pod \"collect-profiles-29521425-nmx6c\" (UID: \"99314662-6510-4b1d-82a1-d5420f99ed65\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521425-nmx6c" Feb 16 23:45:00 crc kubenswrapper[4865]: I0216 23:45:00.495695 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 23:45:00 crc kubenswrapper[4865]: I0216 23:45:00.504553 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521425-nmx6c" Feb 16 23:45:00 crc kubenswrapper[4865]: I0216 23:45:00.985188 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521425-nmx6c"] Feb 16 23:45:01 crc kubenswrapper[4865]: I0216 23:45:01.142348 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521425-nmx6c" event={"ID":"99314662-6510-4b1d-82a1-d5420f99ed65","Type":"ContainerStarted","Data":"2e9561cf33d3b407c729f92a7803b0fcd7619f7a6a1dc0f8d3175d423d99be9e"} Feb 16 23:45:02 crc kubenswrapper[4865]: I0216 23:45:02.151622 4865 generic.go:334] "Generic (PLEG): container finished" podID="99314662-6510-4b1d-82a1-d5420f99ed65" containerID="82d6b500328e98e54d09d22cc0dc07da0b56ceecc1c9abf2c75927c92f2f5469" exitCode=0 Feb 16 23:45:02 crc kubenswrapper[4865]: I0216 23:45:02.151725 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521425-nmx6c" event={"ID":"99314662-6510-4b1d-82a1-d5420f99ed65","Type":"ContainerDied","Data":"82d6b500328e98e54d09d22cc0dc07da0b56ceecc1c9abf2c75927c92f2f5469"} Feb 16 23:45:03 crc kubenswrapper[4865]: I0216 23:45:03.574100 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521425-nmx6c" Feb 16 23:45:03 crc kubenswrapper[4865]: I0216 23:45:03.669945 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/99314662-6510-4b1d-82a1-d5420f99ed65-config-volume\") pod \"99314662-6510-4b1d-82a1-d5420f99ed65\" (UID: \"99314662-6510-4b1d-82a1-d5420f99ed65\") " Feb 16 23:45:03 crc kubenswrapper[4865]: I0216 23:45:03.670020 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/99314662-6510-4b1d-82a1-d5420f99ed65-secret-volume\") pod \"99314662-6510-4b1d-82a1-d5420f99ed65\" (UID: \"99314662-6510-4b1d-82a1-d5420f99ed65\") " Feb 16 23:45:03 crc kubenswrapper[4865]: I0216 23:45:03.670244 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gz2m\" (UniqueName: \"kubernetes.io/projected/99314662-6510-4b1d-82a1-d5420f99ed65-kube-api-access-4gz2m\") pod \"99314662-6510-4b1d-82a1-d5420f99ed65\" (UID: \"99314662-6510-4b1d-82a1-d5420f99ed65\") " Feb 16 23:45:03 crc kubenswrapper[4865]: I0216 23:45:03.671033 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99314662-6510-4b1d-82a1-d5420f99ed65-config-volume" (OuterVolumeSpecName: "config-volume") pod "99314662-6510-4b1d-82a1-d5420f99ed65" (UID: "99314662-6510-4b1d-82a1-d5420f99ed65"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 23:45:03 crc kubenswrapper[4865]: I0216 23:45:03.676351 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99314662-6510-4b1d-82a1-d5420f99ed65-kube-api-access-4gz2m" (OuterVolumeSpecName: "kube-api-access-4gz2m") pod "99314662-6510-4b1d-82a1-d5420f99ed65" (UID: "99314662-6510-4b1d-82a1-d5420f99ed65"). 
InnerVolumeSpecName "kube-api-access-4gz2m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:45:03 crc kubenswrapper[4865]: I0216 23:45:03.676617 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99314662-6510-4b1d-82a1-d5420f99ed65-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "99314662-6510-4b1d-82a1-d5420f99ed65" (UID: "99314662-6510-4b1d-82a1-d5420f99ed65"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 23:45:03 crc kubenswrapper[4865]: I0216 23:45:03.772476 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gz2m\" (UniqueName: \"kubernetes.io/projected/99314662-6510-4b1d-82a1-d5420f99ed65-kube-api-access-4gz2m\") on node \"crc\" DevicePath \"\"" Feb 16 23:45:03 crc kubenswrapper[4865]: I0216 23:45:03.772520 4865 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/99314662-6510-4b1d-82a1-d5420f99ed65-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 23:45:03 crc kubenswrapper[4865]: I0216 23:45:03.772534 4865 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/99314662-6510-4b1d-82a1-d5420f99ed65-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 23:45:04 crc kubenswrapper[4865]: I0216 23:45:04.171666 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521425-nmx6c" event={"ID":"99314662-6510-4b1d-82a1-d5420f99ed65","Type":"ContainerDied","Data":"2e9561cf33d3b407c729f92a7803b0fcd7619f7a6a1dc0f8d3175d423d99be9e"} Feb 16 23:45:04 crc kubenswrapper[4865]: I0216 23:45:04.171985 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e9561cf33d3b407c729f92a7803b0fcd7619f7a6a1dc0f8d3175d423d99be9e" Feb 16 23:45:04 crc kubenswrapper[4865]: I0216 23:45:04.172048 4865 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521425-nmx6c" Feb 16 23:45:04 crc kubenswrapper[4865]: I0216 23:45:04.666403 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521380-jlcrx"] Feb 16 23:45:04 crc kubenswrapper[4865]: I0216 23:45:04.677361 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521380-jlcrx"] Feb 16 23:45:06 crc kubenswrapper[4865]: I0216 23:45:06.429168 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdc821bc-0cee-45eb-a017-044f3fe176d1" path="/var/lib/kubelet/pods/bdc821bc-0cee-45eb-a017-044f3fe176d1/volumes" Feb 16 23:45:06 crc kubenswrapper[4865]: I0216 23:45:06.567147 4865 scope.go:117] "RemoveContainer" containerID="eb0473048b046bcba6e2f1f7806ac711bfe726e17e7488ef5af3021ec9086241" Feb 16 23:45:18 crc kubenswrapper[4865]: I0216 23:45:18.175016 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-6xrxq_3d514685-83a5-4f3b-a89e-4490181e0109/kube-rbac-proxy/0.log" Feb 16 23:45:18 crc kubenswrapper[4865]: I0216 23:45:18.235267 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-6xrxq_3d514685-83a5-4f3b-a89e-4490181e0109/controller/0.log" Feb 16 23:45:18 crc kubenswrapper[4865]: I0216 23:45:18.394655 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dbc9s_e1365192-a9fe-4c70-8118-7e76620b9c8c/cp-frr-files/0.log" Feb 16 23:45:18 crc kubenswrapper[4865]: I0216 23:45:18.599210 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dbc9s_e1365192-a9fe-4c70-8118-7e76620b9c8c/cp-metrics/0.log" Feb 16 23:45:18 crc kubenswrapper[4865]: I0216 23:45:18.607155 4865 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-dbc9s_e1365192-a9fe-4c70-8118-7e76620b9c8c/cp-frr-files/0.log" Feb 16 23:45:18 crc kubenswrapper[4865]: I0216 23:45:18.607186 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dbc9s_e1365192-a9fe-4c70-8118-7e76620b9c8c/cp-reloader/0.log" Feb 16 23:45:18 crc kubenswrapper[4865]: I0216 23:45:18.627604 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dbc9s_e1365192-a9fe-4c70-8118-7e76620b9c8c/cp-reloader/0.log" Feb 16 23:45:18 crc kubenswrapper[4865]: I0216 23:45:18.805339 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dbc9s_e1365192-a9fe-4c70-8118-7e76620b9c8c/cp-frr-files/0.log" Feb 16 23:45:18 crc kubenswrapper[4865]: I0216 23:45:18.818322 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dbc9s_e1365192-a9fe-4c70-8118-7e76620b9c8c/cp-metrics/0.log" Feb 16 23:45:18 crc kubenswrapper[4865]: I0216 23:45:18.833535 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dbc9s_e1365192-a9fe-4c70-8118-7e76620b9c8c/cp-metrics/0.log" Feb 16 23:45:18 crc kubenswrapper[4865]: I0216 23:45:18.872464 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dbc9s_e1365192-a9fe-4c70-8118-7e76620b9c8c/cp-reloader/0.log" Feb 16 23:45:19 crc kubenswrapper[4865]: I0216 23:45:19.068356 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dbc9s_e1365192-a9fe-4c70-8118-7e76620b9c8c/cp-metrics/0.log" Feb 16 23:45:19 crc kubenswrapper[4865]: I0216 23:45:19.094603 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dbc9s_e1365192-a9fe-4c70-8118-7e76620b9c8c/cp-reloader/0.log" Feb 16 23:45:19 crc kubenswrapper[4865]: I0216 23:45:19.101702 4865 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-dbc9s_e1365192-a9fe-4c70-8118-7e76620b9c8c/controller/0.log" Feb 16 23:45:19 crc kubenswrapper[4865]: I0216 23:45:19.139593 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dbc9s_e1365192-a9fe-4c70-8118-7e76620b9c8c/cp-frr-files/0.log" Feb 16 23:45:19 crc kubenswrapper[4865]: I0216 23:45:19.276574 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dbc9s_e1365192-a9fe-4c70-8118-7e76620b9c8c/frr-metrics/0.log" Feb 16 23:45:19 crc kubenswrapper[4865]: I0216 23:45:19.317726 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dbc9s_e1365192-a9fe-4c70-8118-7e76620b9c8c/kube-rbac-proxy/0.log" Feb 16 23:45:19 crc kubenswrapper[4865]: I0216 23:45:19.375106 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dbc9s_e1365192-a9fe-4c70-8118-7e76620b9c8c/kube-rbac-proxy-frr/0.log" Feb 16 23:45:19 crc kubenswrapper[4865]: I0216 23:45:19.552061 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dbc9s_e1365192-a9fe-4c70-8118-7e76620b9c8c/reloader/0.log" Feb 16 23:45:19 crc kubenswrapper[4865]: I0216 23:45:19.616251 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-q8f6g_dde900ad-54aa-4b98-ac05-bbae1b0ce210/frr-k8s-webhook-server/0.log" Feb 16 23:45:20 crc kubenswrapper[4865]: I0216 23:45:20.048948 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7b7848d955-px2kr_10795d8f-8c08-4f6d-bc5d-4446befaa125/manager/0.log" Feb 16 23:45:20 crc kubenswrapper[4865]: I0216 23:45:20.261105 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-65947454b9-fm4f8_ca554199-8669-41a4-aac9-abe2657e896f/webhook-server/0.log" Feb 16 23:45:20 crc kubenswrapper[4865]: I0216 23:45:20.381125 4865 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_speaker-k4twk_7cec452b-64c9-41d6-ae80-458c9c316981/kube-rbac-proxy/0.log" Feb 16 23:45:20 crc kubenswrapper[4865]: I0216 23:45:20.398397 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dbc9s_e1365192-a9fe-4c70-8118-7e76620b9c8c/frr/0.log" Feb 16 23:45:20 crc kubenswrapper[4865]: I0216 23:45:20.795321 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-k4twk_7cec452b-64c9-41d6-ae80-458c9c316981/speaker/0.log" Feb 16 23:45:34 crc kubenswrapper[4865]: I0216 23:45:34.365528 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2136zxlv_f619952d-3fa5-48e1-a477-f4cbfb893bc1/util/0.log" Feb 16 23:45:34 crc kubenswrapper[4865]: I0216 23:45:34.546520 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2136zxlv_f619952d-3fa5-48e1-a477-f4cbfb893bc1/pull/0.log" Feb 16 23:45:34 crc kubenswrapper[4865]: I0216 23:45:34.607202 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2136zxlv_f619952d-3fa5-48e1-a477-f4cbfb893bc1/util/0.log" Feb 16 23:45:34 crc kubenswrapper[4865]: I0216 23:45:34.607727 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2136zxlv_f619952d-3fa5-48e1-a477-f4cbfb893bc1/pull/0.log" Feb 16 23:45:34 crc kubenswrapper[4865]: I0216 23:45:34.743013 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2136zxlv_f619952d-3fa5-48e1-a477-f4cbfb893bc1/util/0.log" Feb 16 23:45:34 crc kubenswrapper[4865]: I0216 23:45:34.801387 4865 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2136zxlv_f619952d-3fa5-48e1-a477-f4cbfb893bc1/pull/0.log" Feb 16 23:45:34 crc kubenswrapper[4865]: I0216 23:45:34.859093 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2136zxlv_f619952d-3fa5-48e1-a477-f4cbfb893bc1/extract/0.log" Feb 16 23:45:34 crc kubenswrapper[4865]: I0216 23:45:34.966466 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7pqqn_41e68cd1-b151-4f99-b70f-43aced8e8b6d/extract-utilities/0.log" Feb 16 23:45:35 crc kubenswrapper[4865]: I0216 23:45:35.096717 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7pqqn_41e68cd1-b151-4f99-b70f-43aced8e8b6d/extract-content/0.log" Feb 16 23:45:35 crc kubenswrapper[4865]: I0216 23:45:35.105563 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7pqqn_41e68cd1-b151-4f99-b70f-43aced8e8b6d/extract-utilities/0.log" Feb 16 23:45:35 crc kubenswrapper[4865]: I0216 23:45:35.123713 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7pqqn_41e68cd1-b151-4f99-b70f-43aced8e8b6d/extract-content/0.log" Feb 16 23:45:35 crc kubenswrapper[4865]: I0216 23:45:35.269864 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7pqqn_41e68cd1-b151-4f99-b70f-43aced8e8b6d/extract-utilities/0.log" Feb 16 23:45:35 crc kubenswrapper[4865]: I0216 23:45:35.289399 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7pqqn_41e68cd1-b151-4f99-b70f-43aced8e8b6d/extract-content/0.log" Feb 16 23:45:35 crc kubenswrapper[4865]: I0216 23:45:35.461395 4865 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-pvbx9_d5d5e784-ef08-461d-87c1-f7c1fbe0dcce/extract-utilities/0.log" Feb 16 23:45:35 crc kubenswrapper[4865]: I0216 23:45:35.691993 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pvbx9_d5d5e784-ef08-461d-87c1-f7c1fbe0dcce/extract-utilities/0.log" Feb 16 23:45:35 crc kubenswrapper[4865]: I0216 23:45:35.722290 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pvbx9_d5d5e784-ef08-461d-87c1-f7c1fbe0dcce/extract-content/0.log" Feb 16 23:45:35 crc kubenswrapper[4865]: I0216 23:45:35.780691 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7pqqn_41e68cd1-b151-4f99-b70f-43aced8e8b6d/registry-server/0.log" Feb 16 23:45:35 crc kubenswrapper[4865]: I0216 23:45:35.782528 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pvbx9_d5d5e784-ef08-461d-87c1-f7c1fbe0dcce/extract-content/0.log" Feb 16 23:45:35 crc kubenswrapper[4865]: I0216 23:45:35.926075 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pvbx9_d5d5e784-ef08-461d-87c1-f7c1fbe0dcce/extract-utilities/0.log" Feb 16 23:45:35 crc kubenswrapper[4865]: I0216 23:45:35.993043 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pvbx9_d5d5e784-ef08-461d-87c1-f7c1fbe0dcce/extract-content/0.log" Feb 16 23:45:36 crc kubenswrapper[4865]: I0216 23:45:36.149462 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecag792d_ac3cdf44-e500-4a8d-ba2d-d43a02f67bad/util/0.log" Feb 16 23:45:36 crc kubenswrapper[4865]: I0216 23:45:36.331095 4865 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecag792d_ac3cdf44-e500-4a8d-ba2d-d43a02f67bad/util/0.log" Feb 16 23:45:36 crc kubenswrapper[4865]: I0216 23:45:36.361354 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecag792d_ac3cdf44-e500-4a8d-ba2d-d43a02f67bad/pull/0.log" Feb 16 23:45:36 crc kubenswrapper[4865]: I0216 23:45:36.424520 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecag792d_ac3cdf44-e500-4a8d-ba2d-d43a02f67bad/pull/0.log" Feb 16 23:45:36 crc kubenswrapper[4865]: I0216 23:45:36.456400 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pvbx9_d5d5e784-ef08-461d-87c1-f7c1fbe0dcce/registry-server/0.log" Feb 16 23:45:36 crc kubenswrapper[4865]: I0216 23:45:36.544876 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecag792d_ac3cdf44-e500-4a8d-ba2d-d43a02f67bad/util/0.log" Feb 16 23:45:36 crc kubenswrapper[4865]: I0216 23:45:36.562190 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecag792d_ac3cdf44-e500-4a8d-ba2d-d43a02f67bad/pull/0.log" Feb 16 23:45:36 crc kubenswrapper[4865]: I0216 23:45:36.603595 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecag792d_ac3cdf44-e500-4a8d-ba2d-d43a02f67bad/extract/0.log" Feb 16 23:45:36 crc kubenswrapper[4865]: I0216 23:45:36.703412 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-pggsl_91cc827b-b0d7-49d3-8c52-99670081f857/marketplace-operator/0.log" Feb 16 23:45:36 crc kubenswrapper[4865]: 
I0216 23:45:36.817796 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-44d2j_f1e60e96-48c4-4e5e-9561-1bc4ca0aa959/extract-utilities/0.log" Feb 16 23:45:36 crc kubenswrapper[4865]: I0216 23:45:36.991671 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-44d2j_f1e60e96-48c4-4e5e-9561-1bc4ca0aa959/extract-content/0.log" Feb 16 23:45:37 crc kubenswrapper[4865]: I0216 23:45:37.017538 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-44d2j_f1e60e96-48c4-4e5e-9561-1bc4ca0aa959/extract-utilities/0.log" Feb 16 23:45:37 crc kubenswrapper[4865]: I0216 23:45:37.070656 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-44d2j_f1e60e96-48c4-4e5e-9561-1bc4ca0aa959/extract-content/0.log" Feb 16 23:45:37 crc kubenswrapper[4865]: I0216 23:45:37.186561 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-44d2j_f1e60e96-48c4-4e5e-9561-1bc4ca0aa959/extract-utilities/0.log" Feb 16 23:45:37 crc kubenswrapper[4865]: I0216 23:45:37.203525 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-44d2j_f1e60e96-48c4-4e5e-9561-1bc4ca0aa959/extract-content/0.log" Feb 16 23:45:37 crc kubenswrapper[4865]: I0216 23:45:37.335538 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-44d2j_f1e60e96-48c4-4e5e-9561-1bc4ca0aa959/registry-server/0.log" Feb 16 23:45:37 crc kubenswrapper[4865]: I0216 23:45:37.411947 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-q6rgp_7ec6baab-71df-4145-92a7-98fa5a885810/extract-utilities/0.log" Feb 16 23:45:37 crc kubenswrapper[4865]: I0216 23:45:37.590042 4865 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-q6rgp_7ec6baab-71df-4145-92a7-98fa5a885810/extract-content/0.log" Feb 16 23:45:37 crc kubenswrapper[4865]: I0216 23:45:37.594486 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-q6rgp_7ec6baab-71df-4145-92a7-98fa5a885810/extract-utilities/0.log" Feb 16 23:45:37 crc kubenswrapper[4865]: I0216 23:45:37.598486 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-q6rgp_7ec6baab-71df-4145-92a7-98fa5a885810/extract-content/0.log" Feb 16 23:45:37 crc kubenswrapper[4865]: I0216 23:45:37.770108 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-q6rgp_7ec6baab-71df-4145-92a7-98fa5a885810/extract-utilities/0.log" Feb 16 23:45:37 crc kubenswrapper[4865]: I0216 23:45:37.808382 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-q6rgp_7ec6baab-71df-4145-92a7-98fa5a885810/extract-content/0.log" Feb 16 23:45:38 crc kubenswrapper[4865]: I0216 23:45:38.112151 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-q6rgp_7ec6baab-71df-4145-92a7-98fa5a885810/registry-server/0.log" Feb 16 23:46:01 crc kubenswrapper[4865]: E0216 23:46:01.120214 4865 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.53:39050->38.102.83.53:45983: write tcp 38.102.83.53:39050->38.102.83.53:45983: write: broken pipe Feb 16 23:46:45 crc kubenswrapper[4865]: I0216 23:46:45.664709 4865 patch_prober.go:28] interesting pod/machine-config-daemon-7sl6f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 23:46:45 crc kubenswrapper[4865]: I0216 23:46:45.665347 4865 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 23:47:15 crc kubenswrapper[4865]: I0216 23:47:15.664805 4865 patch_prober.go:28] interesting pod/machine-config-daemon-7sl6f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 23:47:15 crc kubenswrapper[4865]: I0216 23:47:15.665811 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 23:47:20 crc kubenswrapper[4865]: I0216 23:47:20.633517 4865 generic.go:334] "Generic (PLEG): container finished" podID="ad318a68-8b47-46a1-bbaa-f6d504c0d7e8" containerID="c82889c11a70ba3aa346422a85b0e416e871ba8eb4a8ae8cafd68b4c36d78737" exitCode=0 Feb 16 23:47:20 crc kubenswrapper[4865]: I0216 23:47:20.633624 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qzbxn/must-gather-kgv9w" event={"ID":"ad318a68-8b47-46a1-bbaa-f6d504c0d7e8","Type":"ContainerDied","Data":"c82889c11a70ba3aa346422a85b0e416e871ba8eb4a8ae8cafd68b4c36d78737"} Feb 16 23:47:20 crc kubenswrapper[4865]: I0216 23:47:20.634647 4865 scope.go:117] "RemoveContainer" containerID="c82889c11a70ba3aa346422a85b0e416e871ba8eb4a8ae8cafd68b4c36d78737" Feb 16 23:47:21 crc kubenswrapper[4865]: I0216 23:47:21.677959 4865 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-must-gather-qzbxn_must-gather-kgv9w_ad318a68-8b47-46a1-bbaa-f6d504c0d7e8/gather/0.log" Feb 16 23:47:29 crc kubenswrapper[4865]: I0216 23:47:29.697426 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-qzbxn/must-gather-kgv9w"] Feb 16 23:47:29 crc kubenswrapper[4865]: I0216 23:47:29.698923 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-qzbxn/must-gather-kgv9w" podUID="ad318a68-8b47-46a1-bbaa-f6d504c0d7e8" containerName="copy" containerID="cri-o://3faa9916aee064950152aad5d257cc34d4ba5e82a43c73263f15b6cbfdd5c9df" gracePeriod=2 Feb 16 23:47:29 crc kubenswrapper[4865]: I0216 23:47:29.719310 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-qzbxn/must-gather-kgv9w"] Feb 16 23:47:30 crc kubenswrapper[4865]: I0216 23:47:30.113474 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-qzbxn_must-gather-kgv9w_ad318a68-8b47-46a1-bbaa-f6d504c0d7e8/copy/0.log" Feb 16 23:47:30 crc kubenswrapper[4865]: I0216 23:47:30.114084 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qzbxn/must-gather-kgv9w" Feb 16 23:47:30 crc kubenswrapper[4865]: I0216 23:47:30.309076 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ad318a68-8b47-46a1-bbaa-f6d504c0d7e8-must-gather-output\") pod \"ad318a68-8b47-46a1-bbaa-f6d504c0d7e8\" (UID: \"ad318a68-8b47-46a1-bbaa-f6d504c0d7e8\") " Feb 16 23:47:30 crc kubenswrapper[4865]: I0216 23:47:30.309250 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqktb\" (UniqueName: \"kubernetes.io/projected/ad318a68-8b47-46a1-bbaa-f6d504c0d7e8-kube-api-access-lqktb\") pod \"ad318a68-8b47-46a1-bbaa-f6d504c0d7e8\" (UID: \"ad318a68-8b47-46a1-bbaa-f6d504c0d7e8\") " Feb 16 23:47:30 crc kubenswrapper[4865]: I0216 23:47:30.316711 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad318a68-8b47-46a1-bbaa-f6d504c0d7e8-kube-api-access-lqktb" (OuterVolumeSpecName: "kube-api-access-lqktb") pod "ad318a68-8b47-46a1-bbaa-f6d504c0d7e8" (UID: "ad318a68-8b47-46a1-bbaa-f6d504c0d7e8"). InnerVolumeSpecName "kube-api-access-lqktb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:47:30 crc kubenswrapper[4865]: I0216 23:47:30.412847 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lqktb\" (UniqueName: \"kubernetes.io/projected/ad318a68-8b47-46a1-bbaa-f6d504c0d7e8-kube-api-access-lqktb\") on node \"crc\" DevicePath \"\"" Feb 16 23:47:30 crc kubenswrapper[4865]: I0216 23:47:30.499248 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad318a68-8b47-46a1-bbaa-f6d504c0d7e8-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "ad318a68-8b47-46a1-bbaa-f6d504c0d7e8" (UID: "ad318a68-8b47-46a1-bbaa-f6d504c0d7e8"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:47:30 crc kubenswrapper[4865]: I0216 23:47:30.515447 4865 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ad318a68-8b47-46a1-bbaa-f6d504c0d7e8-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 16 23:47:30 crc kubenswrapper[4865]: I0216 23:47:30.753399 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-qzbxn_must-gather-kgv9w_ad318a68-8b47-46a1-bbaa-f6d504c0d7e8/copy/0.log" Feb 16 23:47:30 crc kubenswrapper[4865]: I0216 23:47:30.754293 4865 generic.go:334] "Generic (PLEG): container finished" podID="ad318a68-8b47-46a1-bbaa-f6d504c0d7e8" containerID="3faa9916aee064950152aad5d257cc34d4ba5e82a43c73263f15b6cbfdd5c9df" exitCode=143 Feb 16 23:47:30 crc kubenswrapper[4865]: I0216 23:47:30.754348 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qzbxn/must-gather-kgv9w" Feb 16 23:47:30 crc kubenswrapper[4865]: I0216 23:47:30.754356 4865 scope.go:117] "RemoveContainer" containerID="3faa9916aee064950152aad5d257cc34d4ba5e82a43c73263f15b6cbfdd5c9df" Feb 16 23:47:30 crc kubenswrapper[4865]: I0216 23:47:30.784685 4865 scope.go:117] "RemoveContainer" containerID="c82889c11a70ba3aa346422a85b0e416e871ba8eb4a8ae8cafd68b4c36d78737" Feb 16 23:47:30 crc kubenswrapper[4865]: I0216 23:47:30.890158 4865 scope.go:117] "RemoveContainer" containerID="3faa9916aee064950152aad5d257cc34d4ba5e82a43c73263f15b6cbfdd5c9df" Feb 16 23:47:30 crc kubenswrapper[4865]: E0216 23:47:30.890640 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3faa9916aee064950152aad5d257cc34d4ba5e82a43c73263f15b6cbfdd5c9df\": container with ID starting with 3faa9916aee064950152aad5d257cc34d4ba5e82a43c73263f15b6cbfdd5c9df not found: ID does not exist" 
containerID="3faa9916aee064950152aad5d257cc34d4ba5e82a43c73263f15b6cbfdd5c9df" Feb 16 23:47:30 crc kubenswrapper[4865]: I0216 23:47:30.890691 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3faa9916aee064950152aad5d257cc34d4ba5e82a43c73263f15b6cbfdd5c9df"} err="failed to get container status \"3faa9916aee064950152aad5d257cc34d4ba5e82a43c73263f15b6cbfdd5c9df\": rpc error: code = NotFound desc = could not find container \"3faa9916aee064950152aad5d257cc34d4ba5e82a43c73263f15b6cbfdd5c9df\": container with ID starting with 3faa9916aee064950152aad5d257cc34d4ba5e82a43c73263f15b6cbfdd5c9df not found: ID does not exist" Feb 16 23:47:30 crc kubenswrapper[4865]: I0216 23:47:30.890724 4865 scope.go:117] "RemoveContainer" containerID="c82889c11a70ba3aa346422a85b0e416e871ba8eb4a8ae8cafd68b4c36d78737" Feb 16 23:47:30 crc kubenswrapper[4865]: E0216 23:47:30.890986 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c82889c11a70ba3aa346422a85b0e416e871ba8eb4a8ae8cafd68b4c36d78737\": container with ID starting with c82889c11a70ba3aa346422a85b0e416e871ba8eb4a8ae8cafd68b4c36d78737 not found: ID does not exist" containerID="c82889c11a70ba3aa346422a85b0e416e871ba8eb4a8ae8cafd68b4c36d78737" Feb 16 23:47:30 crc kubenswrapper[4865]: I0216 23:47:30.891030 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c82889c11a70ba3aa346422a85b0e416e871ba8eb4a8ae8cafd68b4c36d78737"} err="failed to get container status \"c82889c11a70ba3aa346422a85b0e416e871ba8eb4a8ae8cafd68b4c36d78737\": rpc error: code = NotFound desc = could not find container \"c82889c11a70ba3aa346422a85b0e416e871ba8eb4a8ae8cafd68b4c36d78737\": container with ID starting with c82889c11a70ba3aa346422a85b0e416e871ba8eb4a8ae8cafd68b4c36d78737 not found: ID does not exist" Feb 16 23:47:32 crc kubenswrapper[4865]: I0216 23:47:32.423668 4865 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad318a68-8b47-46a1-bbaa-f6d504c0d7e8" path="/var/lib/kubelet/pods/ad318a68-8b47-46a1-bbaa-f6d504c0d7e8/volumes" Feb 16 23:47:45 crc kubenswrapper[4865]: I0216 23:47:45.664776 4865 patch_prober.go:28] interesting pod/machine-config-daemon-7sl6f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 23:47:45 crc kubenswrapper[4865]: I0216 23:47:45.665560 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 23:47:45 crc kubenswrapper[4865]: I0216 23:47:45.665636 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" Feb 16 23:47:45 crc kubenswrapper[4865]: I0216 23:47:45.666872 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3383d89c28198e5a6982e852d6fe4d402704ec0637203631cb7c4abb6bef7591"} pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 23:47:45 crc kubenswrapper[4865]: I0216 23:47:45.666973 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" containerName="machine-config-daemon" containerID="cri-o://3383d89c28198e5a6982e852d6fe4d402704ec0637203631cb7c4abb6bef7591" gracePeriod=600 Feb 16 23:47:45 crc 
kubenswrapper[4865]: E0216 23:47:45.789196 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:47:45 crc kubenswrapper[4865]: I0216 23:47:45.905053 4865 generic.go:334] "Generic (PLEG): container finished" podID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" containerID="3383d89c28198e5a6982e852d6fe4d402704ec0637203631cb7c4abb6bef7591" exitCode=0 Feb 16 23:47:45 crc kubenswrapper[4865]: I0216 23:47:45.905143 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" event={"ID":"af5ee041-5763-4a28-9d12-7ba21bbb9dbc","Type":"ContainerDied","Data":"3383d89c28198e5a6982e852d6fe4d402704ec0637203631cb7c4abb6bef7591"} Feb 16 23:47:45 crc kubenswrapper[4865]: I0216 23:47:45.905473 4865 scope.go:117] "RemoveContainer" containerID="cafc9fd94d172ab87eed495bd76d317e286c005153ef40f1a74890d70b77675b" Feb 16 23:47:45 crc kubenswrapper[4865]: I0216 23:47:45.906473 4865 scope.go:117] "RemoveContainer" containerID="3383d89c28198e5a6982e852d6fe4d402704ec0637203631cb7c4abb6bef7591" Feb 16 23:47:45 crc kubenswrapper[4865]: E0216 23:47:45.906926 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:47:59 crc kubenswrapper[4865]: I0216 23:47:59.414867 4865 scope.go:117] 
"RemoveContainer" containerID="3383d89c28198e5a6982e852d6fe4d402704ec0637203631cb7c4abb6bef7591" Feb 16 23:47:59 crc kubenswrapper[4865]: E0216 23:47:59.416057 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:48:14 crc kubenswrapper[4865]: I0216 23:48:14.414777 4865 scope.go:117] "RemoveContainer" containerID="3383d89c28198e5a6982e852d6fe4d402704ec0637203631cb7c4abb6bef7591" Feb 16 23:48:14 crc kubenswrapper[4865]: E0216 23:48:14.415695 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:48:26 crc kubenswrapper[4865]: I0216 23:48:26.414767 4865 scope.go:117] "RemoveContainer" containerID="3383d89c28198e5a6982e852d6fe4d402704ec0637203631cb7c4abb6bef7591" Feb 16 23:48:26 crc kubenswrapper[4865]: E0216 23:48:26.415737 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:48:41 crc kubenswrapper[4865]: I0216 23:48:41.414483 
4865 scope.go:117] "RemoveContainer" containerID="3383d89c28198e5a6982e852d6fe4d402704ec0637203631cb7c4abb6bef7591" Feb 16 23:48:41 crc kubenswrapper[4865]: E0216 23:48:41.415561 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:48:56 crc kubenswrapper[4865]: I0216 23:48:56.417488 4865 scope.go:117] "RemoveContainer" containerID="3383d89c28198e5a6982e852d6fe4d402704ec0637203631cb7c4abb6bef7591" Feb 16 23:48:56 crc kubenswrapper[4865]: E0216 23:48:56.418784 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:48:58 crc kubenswrapper[4865]: I0216 23:48:58.496682 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nkntm"] Feb 16 23:48:58 crc kubenswrapper[4865]: E0216 23:48:58.497762 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad318a68-8b47-46a1-bbaa-f6d504c0d7e8" containerName="copy" Feb 16 23:48:58 crc kubenswrapper[4865]: I0216 23:48:58.497788 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad318a68-8b47-46a1-bbaa-f6d504c0d7e8" containerName="copy" Feb 16 23:48:58 crc kubenswrapper[4865]: E0216 23:48:58.497817 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad318a68-8b47-46a1-bbaa-f6d504c0d7e8" 
containerName="gather" Feb 16 23:48:58 crc kubenswrapper[4865]: I0216 23:48:58.497830 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad318a68-8b47-46a1-bbaa-f6d504c0d7e8" containerName="gather" Feb 16 23:48:58 crc kubenswrapper[4865]: E0216 23:48:58.497853 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99314662-6510-4b1d-82a1-d5420f99ed65" containerName="collect-profiles" Feb 16 23:48:58 crc kubenswrapper[4865]: I0216 23:48:58.497868 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="99314662-6510-4b1d-82a1-d5420f99ed65" containerName="collect-profiles" Feb 16 23:48:58 crc kubenswrapper[4865]: I0216 23:48:58.498248 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="99314662-6510-4b1d-82a1-d5420f99ed65" containerName="collect-profiles" Feb 16 23:48:58 crc kubenswrapper[4865]: I0216 23:48:58.498560 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad318a68-8b47-46a1-bbaa-f6d504c0d7e8" containerName="gather" Feb 16 23:48:58 crc kubenswrapper[4865]: I0216 23:48:58.498596 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad318a68-8b47-46a1-bbaa-f6d504c0d7e8" containerName="copy" Feb 16 23:48:58 crc kubenswrapper[4865]: I0216 23:48:58.501138 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nkntm" Feb 16 23:48:58 crc kubenswrapper[4865]: I0216 23:48:58.549549 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4618188a-4e26-4a44-a6de-4c20ebcb13b9-utilities\") pod \"community-operators-nkntm\" (UID: \"4618188a-4e26-4a44-a6de-4c20ebcb13b9\") " pod="openshift-marketplace/community-operators-nkntm" Feb 16 23:48:58 crc kubenswrapper[4865]: I0216 23:48:58.549719 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4618188a-4e26-4a44-a6de-4c20ebcb13b9-catalog-content\") pod \"community-operators-nkntm\" (UID: \"4618188a-4e26-4a44-a6de-4c20ebcb13b9\") " pod="openshift-marketplace/community-operators-nkntm" Feb 16 23:48:58 crc kubenswrapper[4865]: I0216 23:48:58.549801 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vz79m\" (UniqueName: \"kubernetes.io/projected/4618188a-4e26-4a44-a6de-4c20ebcb13b9-kube-api-access-vz79m\") pod \"community-operators-nkntm\" (UID: \"4618188a-4e26-4a44-a6de-4c20ebcb13b9\") " pod="openshift-marketplace/community-operators-nkntm" Feb 16 23:48:58 crc kubenswrapper[4865]: I0216 23:48:58.574588 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nkntm"] Feb 16 23:48:58 crc kubenswrapper[4865]: I0216 23:48:58.651652 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vz79m\" (UniqueName: \"kubernetes.io/projected/4618188a-4e26-4a44-a6de-4c20ebcb13b9-kube-api-access-vz79m\") pod \"community-operators-nkntm\" (UID: \"4618188a-4e26-4a44-a6de-4c20ebcb13b9\") " pod="openshift-marketplace/community-operators-nkntm" Feb 16 23:48:58 crc kubenswrapper[4865]: I0216 23:48:58.651765 4865 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4618188a-4e26-4a44-a6de-4c20ebcb13b9-utilities\") pod \"community-operators-nkntm\" (UID: \"4618188a-4e26-4a44-a6de-4c20ebcb13b9\") " pod="openshift-marketplace/community-operators-nkntm" Feb 16 23:48:58 crc kubenswrapper[4865]: I0216 23:48:58.651924 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4618188a-4e26-4a44-a6de-4c20ebcb13b9-catalog-content\") pod \"community-operators-nkntm\" (UID: \"4618188a-4e26-4a44-a6de-4c20ebcb13b9\") " pod="openshift-marketplace/community-operators-nkntm" Feb 16 23:48:58 crc kubenswrapper[4865]: I0216 23:48:58.652303 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4618188a-4e26-4a44-a6de-4c20ebcb13b9-utilities\") pod \"community-operators-nkntm\" (UID: \"4618188a-4e26-4a44-a6de-4c20ebcb13b9\") " pod="openshift-marketplace/community-operators-nkntm" Feb 16 23:48:58 crc kubenswrapper[4865]: I0216 23:48:58.652418 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4618188a-4e26-4a44-a6de-4c20ebcb13b9-catalog-content\") pod \"community-operators-nkntm\" (UID: \"4618188a-4e26-4a44-a6de-4c20ebcb13b9\") " pod="openshift-marketplace/community-operators-nkntm" Feb 16 23:48:58 crc kubenswrapper[4865]: I0216 23:48:58.673116 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vz79m\" (UniqueName: \"kubernetes.io/projected/4618188a-4e26-4a44-a6de-4c20ebcb13b9-kube-api-access-vz79m\") pod \"community-operators-nkntm\" (UID: \"4618188a-4e26-4a44-a6de-4c20ebcb13b9\") " pod="openshift-marketplace/community-operators-nkntm" Feb 16 23:48:58 crc kubenswrapper[4865]: I0216 23:48:58.879619 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nkntm" Feb 16 23:48:59 crc kubenswrapper[4865]: I0216 23:48:59.412712 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nkntm"] Feb 16 23:48:59 crc kubenswrapper[4865]: I0216 23:48:59.735319 4865 generic.go:334] "Generic (PLEG): container finished" podID="4618188a-4e26-4a44-a6de-4c20ebcb13b9" containerID="f8279d825a66ce106468296d5fdb05655fa3a3016631afb1a833fcac1cc4524e" exitCode=0 Feb 16 23:48:59 crc kubenswrapper[4865]: I0216 23:48:59.735364 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nkntm" event={"ID":"4618188a-4e26-4a44-a6de-4c20ebcb13b9","Type":"ContainerDied","Data":"f8279d825a66ce106468296d5fdb05655fa3a3016631afb1a833fcac1cc4524e"} Feb 16 23:48:59 crc kubenswrapper[4865]: I0216 23:48:59.735393 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nkntm" event={"ID":"4618188a-4e26-4a44-a6de-4c20ebcb13b9","Type":"ContainerStarted","Data":"5a0cef338645c7b7c693a1bb937978d30bc7ffbded5eefdb636535be2ffa4a58"} Feb 16 23:48:59 crc kubenswrapper[4865]: I0216 23:48:59.739857 4865 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 23:49:01 crc kubenswrapper[4865]: I0216 23:49:01.768588 4865 generic.go:334] "Generic (PLEG): container finished" podID="4618188a-4e26-4a44-a6de-4c20ebcb13b9" containerID="1209cd76ed11e912ee6212a83fe9714fd4f744f69a606dc963a3dbf75309e4d9" exitCode=0 Feb 16 23:49:01 crc kubenswrapper[4865]: I0216 23:49:01.768678 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nkntm" event={"ID":"4618188a-4e26-4a44-a6de-4c20ebcb13b9","Type":"ContainerDied","Data":"1209cd76ed11e912ee6212a83fe9714fd4f744f69a606dc963a3dbf75309e4d9"} Feb 16 23:49:02 crc kubenswrapper[4865]: I0216 23:49:02.781330 4865 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-nkntm" event={"ID":"4618188a-4e26-4a44-a6de-4c20ebcb13b9","Type":"ContainerStarted","Data":"2f049c0285051c1814045be2f577b3755a93d1d5383571a656f568f70dcdd1df"} Feb 16 23:49:02 crc kubenswrapper[4865]: I0216 23:49:02.808227 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nkntm" podStartSLOduration=2.264881043 podStartE2EDuration="4.808184426s" podCreationTimestamp="2026-02-16 23:48:58 +0000 UTC" firstStartedPulling="2026-02-16 23:48:59.739539508 +0000 UTC m=+3780.063246469" lastFinishedPulling="2026-02-16 23:49:02.282842881 +0000 UTC m=+3782.606549852" observedRunningTime="2026-02-16 23:49:02.806842488 +0000 UTC m=+3783.130549459" watchObservedRunningTime="2026-02-16 23:49:02.808184426 +0000 UTC m=+3783.131891377" Feb 16 23:49:08 crc kubenswrapper[4865]: I0216 23:49:08.880703 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nkntm" Feb 16 23:49:08 crc kubenswrapper[4865]: I0216 23:49:08.882472 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nkntm" Feb 16 23:49:08 crc kubenswrapper[4865]: I0216 23:49:08.950425 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nkntm" Feb 16 23:49:09 crc kubenswrapper[4865]: I0216 23:49:09.937323 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nkntm" Feb 16 23:49:10 crc kubenswrapper[4865]: I0216 23:49:10.000551 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nkntm"] Feb 16 23:49:10 crc kubenswrapper[4865]: I0216 23:49:10.423372 4865 scope.go:117] "RemoveContainer" containerID="3383d89c28198e5a6982e852d6fe4d402704ec0637203631cb7c4abb6bef7591" Feb 16 23:49:10 crc 
kubenswrapper[4865]: E0216 23:49:10.424148 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:49:11 crc kubenswrapper[4865]: I0216 23:49:11.874603 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nkntm" podUID="4618188a-4e26-4a44-a6de-4c20ebcb13b9" containerName="registry-server" containerID="cri-o://2f049c0285051c1814045be2f577b3755a93d1d5383571a656f568f70dcdd1df" gracePeriod=2 Feb 16 23:49:12 crc kubenswrapper[4865]: I0216 23:49:12.394146 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nkntm" Feb 16 23:49:12 crc kubenswrapper[4865]: I0216 23:49:12.557485 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4618188a-4e26-4a44-a6de-4c20ebcb13b9-catalog-content\") pod \"4618188a-4e26-4a44-a6de-4c20ebcb13b9\" (UID: \"4618188a-4e26-4a44-a6de-4c20ebcb13b9\") " Feb 16 23:49:12 crc kubenswrapper[4865]: I0216 23:49:12.557827 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4618188a-4e26-4a44-a6de-4c20ebcb13b9-utilities\") pod \"4618188a-4e26-4a44-a6de-4c20ebcb13b9\" (UID: \"4618188a-4e26-4a44-a6de-4c20ebcb13b9\") " Feb 16 23:49:12 crc kubenswrapper[4865]: I0216 23:49:12.557994 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vz79m\" (UniqueName: 
\"kubernetes.io/projected/4618188a-4e26-4a44-a6de-4c20ebcb13b9-kube-api-access-vz79m\") pod \"4618188a-4e26-4a44-a6de-4c20ebcb13b9\" (UID: \"4618188a-4e26-4a44-a6de-4c20ebcb13b9\") " Feb 16 23:49:12 crc kubenswrapper[4865]: I0216 23:49:12.560332 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4618188a-4e26-4a44-a6de-4c20ebcb13b9-utilities" (OuterVolumeSpecName: "utilities") pod "4618188a-4e26-4a44-a6de-4c20ebcb13b9" (UID: "4618188a-4e26-4a44-a6de-4c20ebcb13b9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:49:12 crc kubenswrapper[4865]: I0216 23:49:12.561657 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4618188a-4e26-4a44-a6de-4c20ebcb13b9-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 23:49:12 crc kubenswrapper[4865]: I0216 23:49:12.581552 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4618188a-4e26-4a44-a6de-4c20ebcb13b9-kube-api-access-vz79m" (OuterVolumeSpecName: "kube-api-access-vz79m") pod "4618188a-4e26-4a44-a6de-4c20ebcb13b9" (UID: "4618188a-4e26-4a44-a6de-4c20ebcb13b9"). InnerVolumeSpecName "kube-api-access-vz79m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:49:12 crc kubenswrapper[4865]: I0216 23:49:12.614953 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4618188a-4e26-4a44-a6de-4c20ebcb13b9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4618188a-4e26-4a44-a6de-4c20ebcb13b9" (UID: "4618188a-4e26-4a44-a6de-4c20ebcb13b9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:49:12 crc kubenswrapper[4865]: I0216 23:49:12.663359 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vz79m\" (UniqueName: \"kubernetes.io/projected/4618188a-4e26-4a44-a6de-4c20ebcb13b9-kube-api-access-vz79m\") on node \"crc\" DevicePath \"\"" Feb 16 23:49:12 crc kubenswrapper[4865]: I0216 23:49:12.663433 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4618188a-4e26-4a44-a6de-4c20ebcb13b9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 23:49:12 crc kubenswrapper[4865]: I0216 23:49:12.885151 4865 generic.go:334] "Generic (PLEG): container finished" podID="4618188a-4e26-4a44-a6de-4c20ebcb13b9" containerID="2f049c0285051c1814045be2f577b3755a93d1d5383571a656f568f70dcdd1df" exitCode=0 Feb 16 23:49:12 crc kubenswrapper[4865]: I0216 23:49:12.885187 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nkntm" event={"ID":"4618188a-4e26-4a44-a6de-4c20ebcb13b9","Type":"ContainerDied","Data":"2f049c0285051c1814045be2f577b3755a93d1d5383571a656f568f70dcdd1df"} Feb 16 23:49:12 crc kubenswrapper[4865]: I0216 23:49:12.885246 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nkntm" event={"ID":"4618188a-4e26-4a44-a6de-4c20ebcb13b9","Type":"ContainerDied","Data":"5a0cef338645c7b7c693a1bb937978d30bc7ffbded5eefdb636535be2ffa4a58"} Feb 16 23:49:12 crc kubenswrapper[4865]: I0216 23:49:12.885266 4865 scope.go:117] "RemoveContainer" containerID="2f049c0285051c1814045be2f577b3755a93d1d5383571a656f568f70dcdd1df" Feb 16 23:49:12 crc kubenswrapper[4865]: I0216 23:49:12.885344 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nkntm" Feb 16 23:49:12 crc kubenswrapper[4865]: I0216 23:49:12.915445 4865 scope.go:117] "RemoveContainer" containerID="1209cd76ed11e912ee6212a83fe9714fd4f744f69a606dc963a3dbf75309e4d9" Feb 16 23:49:12 crc kubenswrapper[4865]: I0216 23:49:12.945342 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nkntm"] Feb 16 23:49:12 crc kubenswrapper[4865]: I0216 23:49:12.965976 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nkntm"] Feb 16 23:49:12 crc kubenswrapper[4865]: I0216 23:49:12.973158 4865 scope.go:117] "RemoveContainer" containerID="f8279d825a66ce106468296d5fdb05655fa3a3016631afb1a833fcac1cc4524e" Feb 16 23:49:13 crc kubenswrapper[4865]: I0216 23:49:13.008751 4865 scope.go:117] "RemoveContainer" containerID="2f049c0285051c1814045be2f577b3755a93d1d5383571a656f568f70dcdd1df" Feb 16 23:49:13 crc kubenswrapper[4865]: E0216 23:49:13.009200 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f049c0285051c1814045be2f577b3755a93d1d5383571a656f568f70dcdd1df\": container with ID starting with 2f049c0285051c1814045be2f577b3755a93d1d5383571a656f568f70dcdd1df not found: ID does not exist" containerID="2f049c0285051c1814045be2f577b3755a93d1d5383571a656f568f70dcdd1df" Feb 16 23:49:13 crc kubenswrapper[4865]: I0216 23:49:13.009231 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f049c0285051c1814045be2f577b3755a93d1d5383571a656f568f70dcdd1df"} err="failed to get container status \"2f049c0285051c1814045be2f577b3755a93d1d5383571a656f568f70dcdd1df\": rpc error: code = NotFound desc = could not find container \"2f049c0285051c1814045be2f577b3755a93d1d5383571a656f568f70dcdd1df\": container with ID starting with 2f049c0285051c1814045be2f577b3755a93d1d5383571a656f568f70dcdd1df not 
found: ID does not exist" Feb 16 23:49:13 crc kubenswrapper[4865]: I0216 23:49:13.009256 4865 scope.go:117] "RemoveContainer" containerID="1209cd76ed11e912ee6212a83fe9714fd4f744f69a606dc963a3dbf75309e4d9" Feb 16 23:49:13 crc kubenswrapper[4865]: E0216 23:49:13.010001 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1209cd76ed11e912ee6212a83fe9714fd4f744f69a606dc963a3dbf75309e4d9\": container with ID starting with 1209cd76ed11e912ee6212a83fe9714fd4f744f69a606dc963a3dbf75309e4d9 not found: ID does not exist" containerID="1209cd76ed11e912ee6212a83fe9714fd4f744f69a606dc963a3dbf75309e4d9" Feb 16 23:49:13 crc kubenswrapper[4865]: I0216 23:49:13.010026 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1209cd76ed11e912ee6212a83fe9714fd4f744f69a606dc963a3dbf75309e4d9"} err="failed to get container status \"1209cd76ed11e912ee6212a83fe9714fd4f744f69a606dc963a3dbf75309e4d9\": rpc error: code = NotFound desc = could not find container \"1209cd76ed11e912ee6212a83fe9714fd4f744f69a606dc963a3dbf75309e4d9\": container with ID starting with 1209cd76ed11e912ee6212a83fe9714fd4f744f69a606dc963a3dbf75309e4d9 not found: ID does not exist" Feb 16 23:49:13 crc kubenswrapper[4865]: I0216 23:49:13.010043 4865 scope.go:117] "RemoveContainer" containerID="f8279d825a66ce106468296d5fdb05655fa3a3016631afb1a833fcac1cc4524e" Feb 16 23:49:13 crc kubenswrapper[4865]: E0216 23:49:13.010400 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8279d825a66ce106468296d5fdb05655fa3a3016631afb1a833fcac1cc4524e\": container with ID starting with f8279d825a66ce106468296d5fdb05655fa3a3016631afb1a833fcac1cc4524e not found: ID does not exist" containerID="f8279d825a66ce106468296d5fdb05655fa3a3016631afb1a833fcac1cc4524e" Feb 16 23:49:13 crc kubenswrapper[4865]: I0216 23:49:13.010424 4865 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8279d825a66ce106468296d5fdb05655fa3a3016631afb1a833fcac1cc4524e"} err="failed to get container status \"f8279d825a66ce106468296d5fdb05655fa3a3016631afb1a833fcac1cc4524e\": rpc error: code = NotFound desc = could not find container \"f8279d825a66ce106468296d5fdb05655fa3a3016631afb1a833fcac1cc4524e\": container with ID starting with f8279d825a66ce106468296d5fdb05655fa3a3016631afb1a833fcac1cc4524e not found: ID does not exist" Feb 16 23:49:14 crc kubenswrapper[4865]: I0216 23:49:14.446129 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4618188a-4e26-4a44-a6de-4c20ebcb13b9" path="/var/lib/kubelet/pods/4618188a-4e26-4a44-a6de-4c20ebcb13b9/volumes" Feb 16 23:49:15 crc kubenswrapper[4865]: I0216 23:49:15.014128 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wkkn9"] Feb 16 23:49:15 crc kubenswrapper[4865]: E0216 23:49:15.015567 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4618188a-4e26-4a44-a6de-4c20ebcb13b9" containerName="extract-content" Feb 16 23:49:15 crc kubenswrapper[4865]: I0216 23:49:15.015609 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="4618188a-4e26-4a44-a6de-4c20ebcb13b9" containerName="extract-content" Feb 16 23:49:15 crc kubenswrapper[4865]: E0216 23:49:15.015657 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4618188a-4e26-4a44-a6de-4c20ebcb13b9" containerName="registry-server" Feb 16 23:49:15 crc kubenswrapper[4865]: I0216 23:49:15.015669 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="4618188a-4e26-4a44-a6de-4c20ebcb13b9" containerName="registry-server" Feb 16 23:49:15 crc kubenswrapper[4865]: E0216 23:49:15.015690 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4618188a-4e26-4a44-a6de-4c20ebcb13b9" containerName="extract-utilities" Feb 16 23:49:15 crc kubenswrapper[4865]: I0216 
23:49:15.015701 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="4618188a-4e26-4a44-a6de-4c20ebcb13b9" containerName="extract-utilities" Feb 16 23:49:15 crc kubenswrapper[4865]: I0216 23:49:15.015971 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="4618188a-4e26-4a44-a6de-4c20ebcb13b9" containerName="registry-server" Feb 16 23:49:15 crc kubenswrapper[4865]: I0216 23:49:15.017984 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wkkn9" Feb 16 23:49:15 crc kubenswrapper[4865]: I0216 23:49:15.039030 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wkkn9"] Feb 16 23:49:15 crc kubenswrapper[4865]: I0216 23:49:15.109381 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5q5v\" (UniqueName: \"kubernetes.io/projected/163dc135-a573-4268-b15b-12de477be5e4-kube-api-access-w5q5v\") pod \"redhat-operators-wkkn9\" (UID: \"163dc135-a573-4268-b15b-12de477be5e4\") " pod="openshift-marketplace/redhat-operators-wkkn9" Feb 16 23:49:15 crc kubenswrapper[4865]: I0216 23:49:15.109446 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/163dc135-a573-4268-b15b-12de477be5e4-utilities\") pod \"redhat-operators-wkkn9\" (UID: \"163dc135-a573-4268-b15b-12de477be5e4\") " pod="openshift-marketplace/redhat-operators-wkkn9" Feb 16 23:49:15 crc kubenswrapper[4865]: I0216 23:49:15.109547 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/163dc135-a573-4268-b15b-12de477be5e4-catalog-content\") pod \"redhat-operators-wkkn9\" (UID: \"163dc135-a573-4268-b15b-12de477be5e4\") " pod="openshift-marketplace/redhat-operators-wkkn9" Feb 16 23:49:15 crc kubenswrapper[4865]: I0216 
23:49:15.211685 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/163dc135-a573-4268-b15b-12de477be5e4-catalog-content\") pod \"redhat-operators-wkkn9\" (UID: \"163dc135-a573-4268-b15b-12de477be5e4\") " pod="openshift-marketplace/redhat-operators-wkkn9" Feb 16 23:49:15 crc kubenswrapper[4865]: I0216 23:49:15.211837 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5q5v\" (UniqueName: \"kubernetes.io/projected/163dc135-a573-4268-b15b-12de477be5e4-kube-api-access-w5q5v\") pod \"redhat-operators-wkkn9\" (UID: \"163dc135-a573-4268-b15b-12de477be5e4\") " pod="openshift-marketplace/redhat-operators-wkkn9" Feb 16 23:49:15 crc kubenswrapper[4865]: I0216 23:49:15.211895 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/163dc135-a573-4268-b15b-12de477be5e4-utilities\") pod \"redhat-operators-wkkn9\" (UID: \"163dc135-a573-4268-b15b-12de477be5e4\") " pod="openshift-marketplace/redhat-operators-wkkn9" Feb 16 23:49:15 crc kubenswrapper[4865]: I0216 23:49:15.212304 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/163dc135-a573-4268-b15b-12de477be5e4-catalog-content\") pod \"redhat-operators-wkkn9\" (UID: \"163dc135-a573-4268-b15b-12de477be5e4\") " pod="openshift-marketplace/redhat-operators-wkkn9" Feb 16 23:49:15 crc kubenswrapper[4865]: I0216 23:49:15.212416 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/163dc135-a573-4268-b15b-12de477be5e4-utilities\") pod \"redhat-operators-wkkn9\" (UID: \"163dc135-a573-4268-b15b-12de477be5e4\") " pod="openshift-marketplace/redhat-operators-wkkn9" Feb 16 23:49:15 crc kubenswrapper[4865]: I0216 23:49:15.244271 4865 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-w5q5v\" (UniqueName: \"kubernetes.io/projected/163dc135-a573-4268-b15b-12de477be5e4-kube-api-access-w5q5v\") pod \"redhat-operators-wkkn9\" (UID: \"163dc135-a573-4268-b15b-12de477be5e4\") " pod="openshift-marketplace/redhat-operators-wkkn9" Feb 16 23:49:15 crc kubenswrapper[4865]: I0216 23:49:15.360971 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wkkn9" Feb 16 23:49:15 crc kubenswrapper[4865]: I0216 23:49:15.851557 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wkkn9"] Feb 16 23:49:15 crc kubenswrapper[4865]: W0216 23:49:15.855626 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod163dc135_a573_4268_b15b_12de477be5e4.slice/crio-bc9a6b41b04840712d677dd208c5ff23791e0395c7a9aadbb0611a0a12da67bf WatchSource:0}: Error finding container bc9a6b41b04840712d677dd208c5ff23791e0395c7a9aadbb0611a0a12da67bf: Status 404 returned error can't find the container with id bc9a6b41b04840712d677dd208c5ff23791e0395c7a9aadbb0611a0a12da67bf Feb 16 23:49:16 crc kubenswrapper[4865]: I0216 23:49:16.497303 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wkkn9" event={"ID":"163dc135-a573-4268-b15b-12de477be5e4","Type":"ContainerStarted","Data":"bc9a6b41b04840712d677dd208c5ff23791e0395c7a9aadbb0611a0a12da67bf"} Feb 16 23:49:17 crc kubenswrapper[4865]: I0216 23:49:17.490343 4865 generic.go:334] "Generic (PLEG): container finished" podID="163dc135-a573-4268-b15b-12de477be5e4" containerID="81bda6d36111809a563eb15fb26de1e01a3500ca5bf12b280e78ee5824e4d57e" exitCode=0 Feb 16 23:49:17 crc kubenswrapper[4865]: I0216 23:49:17.490595 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wkkn9" 
event={"ID":"163dc135-a573-4268-b15b-12de477be5e4","Type":"ContainerDied","Data":"81bda6d36111809a563eb15fb26de1e01a3500ca5bf12b280e78ee5824e4d57e"} Feb 16 23:49:19 crc kubenswrapper[4865]: I0216 23:49:19.514553 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wkkn9" event={"ID":"163dc135-a573-4268-b15b-12de477be5e4","Type":"ContainerStarted","Data":"ca64ff09b83f65eec728ce07a630a5ccfa43b972a71bcc0bfbd55fa21d69f738"} Feb 16 23:49:20 crc kubenswrapper[4865]: I0216 23:49:20.533629 4865 generic.go:334] "Generic (PLEG): container finished" podID="163dc135-a573-4268-b15b-12de477be5e4" containerID="ca64ff09b83f65eec728ce07a630a5ccfa43b972a71bcc0bfbd55fa21d69f738" exitCode=0 Feb 16 23:49:20 crc kubenswrapper[4865]: I0216 23:49:20.533669 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wkkn9" event={"ID":"163dc135-a573-4268-b15b-12de477be5e4","Type":"ContainerDied","Data":"ca64ff09b83f65eec728ce07a630a5ccfa43b972a71bcc0bfbd55fa21d69f738"} Feb 16 23:49:21 crc kubenswrapper[4865]: I0216 23:49:21.546853 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wkkn9" event={"ID":"163dc135-a573-4268-b15b-12de477be5e4","Type":"ContainerStarted","Data":"3a646ada097a50899b5d81596454f16b523db69d81b1caad0e9c8c373c43951a"} Feb 16 23:49:21 crc kubenswrapper[4865]: I0216 23:49:21.574635 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wkkn9" podStartSLOduration=4.090917297 podStartE2EDuration="7.574617472s" podCreationTimestamp="2026-02-16 23:49:14 +0000 UTC" firstStartedPulling="2026-02-16 23:49:17.493002352 +0000 UTC m=+3797.816709333" lastFinishedPulling="2026-02-16 23:49:20.976702517 +0000 UTC m=+3801.300409508" observedRunningTime="2026-02-16 23:49:21.571136434 +0000 UTC m=+3801.894843415" watchObservedRunningTime="2026-02-16 23:49:21.574617472 +0000 UTC m=+3801.898324443" 
Feb 16 23:49:25 crc kubenswrapper[4865]: I0216 23:49:25.361600 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wkkn9" Feb 16 23:49:25 crc kubenswrapper[4865]: I0216 23:49:25.362436 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wkkn9" Feb 16 23:49:25 crc kubenswrapper[4865]: I0216 23:49:25.415327 4865 scope.go:117] "RemoveContainer" containerID="3383d89c28198e5a6982e852d6fe4d402704ec0637203631cb7c4abb6bef7591" Feb 16 23:49:25 crc kubenswrapper[4865]: E0216 23:49:25.416334 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:49:26 crc kubenswrapper[4865]: I0216 23:49:26.408852 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wkkn9" podUID="163dc135-a573-4268-b15b-12de477be5e4" containerName="registry-server" probeResult="failure" output=< Feb 16 23:49:26 crc kubenswrapper[4865]: timeout: failed to connect service ":50051" within 1s Feb 16 23:49:26 crc kubenswrapper[4865]: > Feb 16 23:49:35 crc kubenswrapper[4865]: I0216 23:49:35.425387 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wkkn9" Feb 16 23:49:35 crc kubenswrapper[4865]: I0216 23:49:35.491770 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wkkn9" Feb 16 23:49:35 crc kubenswrapper[4865]: I0216 23:49:35.670637 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-wkkn9"] Feb 16 23:49:36 crc kubenswrapper[4865]: I0216 23:49:36.722702 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wkkn9" podUID="163dc135-a573-4268-b15b-12de477be5e4" containerName="registry-server" containerID="cri-o://3a646ada097a50899b5d81596454f16b523db69d81b1caad0e9c8c373c43951a" gracePeriod=2 Feb 16 23:49:37 crc kubenswrapper[4865]: I0216 23:49:37.268028 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wkkn9" Feb 16 23:49:37 crc kubenswrapper[4865]: I0216 23:49:37.397209 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/163dc135-a573-4268-b15b-12de477be5e4-catalog-content\") pod \"163dc135-a573-4268-b15b-12de477be5e4\" (UID: \"163dc135-a573-4268-b15b-12de477be5e4\") " Feb 16 23:49:37 crc kubenswrapper[4865]: I0216 23:49:37.397435 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5q5v\" (UniqueName: \"kubernetes.io/projected/163dc135-a573-4268-b15b-12de477be5e4-kube-api-access-w5q5v\") pod \"163dc135-a573-4268-b15b-12de477be5e4\" (UID: \"163dc135-a573-4268-b15b-12de477be5e4\") " Feb 16 23:49:37 crc kubenswrapper[4865]: I0216 23:49:37.397768 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/163dc135-a573-4268-b15b-12de477be5e4-utilities\") pod \"163dc135-a573-4268-b15b-12de477be5e4\" (UID: \"163dc135-a573-4268-b15b-12de477be5e4\") " Feb 16 23:49:37 crc kubenswrapper[4865]: I0216 23:49:37.398555 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/163dc135-a573-4268-b15b-12de477be5e4-utilities" (OuterVolumeSpecName: "utilities") pod "163dc135-a573-4268-b15b-12de477be5e4" (UID: 
"163dc135-a573-4268-b15b-12de477be5e4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:49:37 crc kubenswrapper[4865]: I0216 23:49:37.402385 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/163dc135-a573-4268-b15b-12de477be5e4-kube-api-access-w5q5v" (OuterVolumeSpecName: "kube-api-access-w5q5v") pod "163dc135-a573-4268-b15b-12de477be5e4" (UID: "163dc135-a573-4268-b15b-12de477be5e4"). InnerVolumeSpecName "kube-api-access-w5q5v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:49:37 crc kubenswrapper[4865]: I0216 23:49:37.500250 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/163dc135-a573-4268-b15b-12de477be5e4-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 23:49:37 crc kubenswrapper[4865]: I0216 23:49:37.500328 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5q5v\" (UniqueName: \"kubernetes.io/projected/163dc135-a573-4268-b15b-12de477be5e4-kube-api-access-w5q5v\") on node \"crc\" DevicePath \"\"" Feb 16 23:49:37 crc kubenswrapper[4865]: I0216 23:49:37.553350 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/163dc135-a573-4268-b15b-12de477be5e4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "163dc135-a573-4268-b15b-12de477be5e4" (UID: "163dc135-a573-4268-b15b-12de477be5e4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:49:37 crc kubenswrapper[4865]: I0216 23:49:37.601973 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/163dc135-a573-4268-b15b-12de477be5e4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 23:49:37 crc kubenswrapper[4865]: I0216 23:49:37.736083 4865 generic.go:334] "Generic (PLEG): container finished" podID="163dc135-a573-4268-b15b-12de477be5e4" containerID="3a646ada097a50899b5d81596454f16b523db69d81b1caad0e9c8c373c43951a" exitCode=0 Feb 16 23:49:37 crc kubenswrapper[4865]: I0216 23:49:37.736151 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wkkn9" event={"ID":"163dc135-a573-4268-b15b-12de477be5e4","Type":"ContainerDied","Data":"3a646ada097a50899b5d81596454f16b523db69d81b1caad0e9c8c373c43951a"} Feb 16 23:49:37 crc kubenswrapper[4865]: I0216 23:49:37.736217 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wkkn9" event={"ID":"163dc135-a573-4268-b15b-12de477be5e4","Type":"ContainerDied","Data":"bc9a6b41b04840712d677dd208c5ff23791e0395c7a9aadbb0611a0a12da67bf"} Feb 16 23:49:37 crc kubenswrapper[4865]: I0216 23:49:37.736275 4865 scope.go:117] "RemoveContainer" containerID="3a646ada097a50899b5d81596454f16b523db69d81b1caad0e9c8c373c43951a" Feb 16 23:49:37 crc kubenswrapper[4865]: I0216 23:49:37.737063 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wkkn9" Feb 16 23:49:37 crc kubenswrapper[4865]: I0216 23:49:37.767302 4865 scope.go:117] "RemoveContainer" containerID="ca64ff09b83f65eec728ce07a630a5ccfa43b972a71bcc0bfbd55fa21d69f738" Feb 16 23:49:37 crc kubenswrapper[4865]: I0216 23:49:37.794913 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wkkn9"] Feb 16 23:49:37 crc kubenswrapper[4865]: I0216 23:49:37.804126 4865 scope.go:117] "RemoveContainer" containerID="81bda6d36111809a563eb15fb26de1e01a3500ca5bf12b280e78ee5824e4d57e" Feb 16 23:49:37 crc kubenswrapper[4865]: I0216 23:49:37.821911 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wkkn9"] Feb 16 23:49:37 crc kubenswrapper[4865]: I0216 23:49:37.858027 4865 scope.go:117] "RemoveContainer" containerID="3a646ada097a50899b5d81596454f16b523db69d81b1caad0e9c8c373c43951a" Feb 16 23:49:37 crc kubenswrapper[4865]: E0216 23:49:37.858599 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a646ada097a50899b5d81596454f16b523db69d81b1caad0e9c8c373c43951a\": container with ID starting with 3a646ada097a50899b5d81596454f16b523db69d81b1caad0e9c8c373c43951a not found: ID does not exist" containerID="3a646ada097a50899b5d81596454f16b523db69d81b1caad0e9c8c373c43951a" Feb 16 23:49:37 crc kubenswrapper[4865]: I0216 23:49:37.858694 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a646ada097a50899b5d81596454f16b523db69d81b1caad0e9c8c373c43951a"} err="failed to get container status \"3a646ada097a50899b5d81596454f16b523db69d81b1caad0e9c8c373c43951a\": rpc error: code = NotFound desc = could not find container \"3a646ada097a50899b5d81596454f16b523db69d81b1caad0e9c8c373c43951a\": container with ID starting with 3a646ada097a50899b5d81596454f16b523db69d81b1caad0e9c8c373c43951a not found: ID does 
not exist" Feb 16 23:49:37 crc kubenswrapper[4865]: I0216 23:49:37.858741 4865 scope.go:117] "RemoveContainer" containerID="ca64ff09b83f65eec728ce07a630a5ccfa43b972a71bcc0bfbd55fa21d69f738" Feb 16 23:49:37 crc kubenswrapper[4865]: E0216 23:49:37.859173 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca64ff09b83f65eec728ce07a630a5ccfa43b972a71bcc0bfbd55fa21d69f738\": container with ID starting with ca64ff09b83f65eec728ce07a630a5ccfa43b972a71bcc0bfbd55fa21d69f738 not found: ID does not exist" containerID="ca64ff09b83f65eec728ce07a630a5ccfa43b972a71bcc0bfbd55fa21d69f738" Feb 16 23:49:37 crc kubenswrapper[4865]: I0216 23:49:37.859245 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca64ff09b83f65eec728ce07a630a5ccfa43b972a71bcc0bfbd55fa21d69f738"} err="failed to get container status \"ca64ff09b83f65eec728ce07a630a5ccfa43b972a71bcc0bfbd55fa21d69f738\": rpc error: code = NotFound desc = could not find container \"ca64ff09b83f65eec728ce07a630a5ccfa43b972a71bcc0bfbd55fa21d69f738\": container with ID starting with ca64ff09b83f65eec728ce07a630a5ccfa43b972a71bcc0bfbd55fa21d69f738 not found: ID does not exist" Feb 16 23:49:37 crc kubenswrapper[4865]: I0216 23:49:37.859299 4865 scope.go:117] "RemoveContainer" containerID="81bda6d36111809a563eb15fb26de1e01a3500ca5bf12b280e78ee5824e4d57e" Feb 16 23:49:37 crc kubenswrapper[4865]: E0216 23:49:37.859682 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81bda6d36111809a563eb15fb26de1e01a3500ca5bf12b280e78ee5824e4d57e\": container with ID starting with 81bda6d36111809a563eb15fb26de1e01a3500ca5bf12b280e78ee5824e4d57e not found: ID does not exist" containerID="81bda6d36111809a563eb15fb26de1e01a3500ca5bf12b280e78ee5824e4d57e" Feb 16 23:49:37 crc kubenswrapper[4865]: I0216 23:49:37.859723 4865 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81bda6d36111809a563eb15fb26de1e01a3500ca5bf12b280e78ee5824e4d57e"} err="failed to get container status \"81bda6d36111809a563eb15fb26de1e01a3500ca5bf12b280e78ee5824e4d57e\": rpc error: code = NotFound desc = could not find container \"81bda6d36111809a563eb15fb26de1e01a3500ca5bf12b280e78ee5824e4d57e\": container with ID starting with 81bda6d36111809a563eb15fb26de1e01a3500ca5bf12b280e78ee5824e4d57e not found: ID does not exist" Feb 16 23:49:38 crc kubenswrapper[4865]: I0216 23:49:38.415164 4865 scope.go:117] "RemoveContainer" containerID="3383d89c28198e5a6982e852d6fe4d402704ec0637203631cb7c4abb6bef7591" Feb 16 23:49:38 crc kubenswrapper[4865]: E0216 23:49:38.415655 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:49:38 crc kubenswrapper[4865]: I0216 23:49:38.424124 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="163dc135-a573-4268-b15b-12de477be5e4" path="/var/lib/kubelet/pods/163dc135-a573-4268-b15b-12de477be5e4/volumes" Feb 16 23:49:53 crc kubenswrapper[4865]: I0216 23:49:53.415517 4865 scope.go:117] "RemoveContainer" containerID="3383d89c28198e5a6982e852d6fe4d402704ec0637203631cb7c4abb6bef7591" Feb 16 23:49:53 crc kubenswrapper[4865]: E0216 23:49:53.416677 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:50:05 crc kubenswrapper[4865]: I0216 23:50:05.415383 4865 scope.go:117] "RemoveContainer" containerID="3383d89c28198e5a6982e852d6fe4d402704ec0637203631cb7c4abb6bef7591" Feb 16 23:50:05 crc kubenswrapper[4865]: E0216 23:50:05.416187 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:50:19 crc kubenswrapper[4865]: I0216 23:50:19.415325 4865 scope.go:117] "RemoveContainer" containerID="3383d89c28198e5a6982e852d6fe4d402704ec0637203631cb7c4abb6bef7591" Feb 16 23:50:19 crc kubenswrapper[4865]: E0216 23:50:19.416182 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:50:22 crc kubenswrapper[4865]: I0216 23:50:22.877435 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-zg27t/must-gather-5fkkq"] Feb 16 23:50:22 crc kubenswrapper[4865]: E0216 23:50:22.878375 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="163dc135-a573-4268-b15b-12de477be5e4" containerName="registry-server" Feb 16 23:50:22 crc kubenswrapper[4865]: I0216 23:50:22.878389 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="163dc135-a573-4268-b15b-12de477be5e4" 
containerName="registry-server" Feb 16 23:50:22 crc kubenswrapper[4865]: E0216 23:50:22.878409 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="163dc135-a573-4268-b15b-12de477be5e4" containerName="extract-utilities" Feb 16 23:50:22 crc kubenswrapper[4865]: I0216 23:50:22.878417 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="163dc135-a573-4268-b15b-12de477be5e4" containerName="extract-utilities" Feb 16 23:50:22 crc kubenswrapper[4865]: E0216 23:50:22.878440 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="163dc135-a573-4268-b15b-12de477be5e4" containerName="extract-content" Feb 16 23:50:22 crc kubenswrapper[4865]: I0216 23:50:22.878448 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="163dc135-a573-4268-b15b-12de477be5e4" containerName="extract-content" Feb 16 23:50:22 crc kubenswrapper[4865]: I0216 23:50:22.878666 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="163dc135-a573-4268-b15b-12de477be5e4" containerName="registry-server" Feb 16 23:50:22 crc kubenswrapper[4865]: I0216 23:50:22.879656 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zg27t/must-gather-5fkkq" Feb 16 23:50:22 crc kubenswrapper[4865]: I0216 23:50:22.881315 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-zg27t"/"default-dockercfg-95wpl" Feb 16 23:50:22 crc kubenswrapper[4865]: I0216 23:50:22.882033 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-zg27t"/"openshift-service-ca.crt" Feb 16 23:50:22 crc kubenswrapper[4865]: I0216 23:50:22.883144 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-zg27t"/"kube-root-ca.crt" Feb 16 23:50:22 crc kubenswrapper[4865]: I0216 23:50:22.886508 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-zg27t/must-gather-5fkkq"] Feb 16 23:50:23 crc kubenswrapper[4865]: I0216 23:50:23.049432 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e15920f0-41af-4f98-8b7d-5e40f97c7708-must-gather-output\") pod \"must-gather-5fkkq\" (UID: \"e15920f0-41af-4f98-8b7d-5e40f97c7708\") " pod="openshift-must-gather-zg27t/must-gather-5fkkq" Feb 16 23:50:23 crc kubenswrapper[4865]: I0216 23:50:23.049732 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25slf\" (UniqueName: \"kubernetes.io/projected/e15920f0-41af-4f98-8b7d-5e40f97c7708-kube-api-access-25slf\") pod \"must-gather-5fkkq\" (UID: \"e15920f0-41af-4f98-8b7d-5e40f97c7708\") " pod="openshift-must-gather-zg27t/must-gather-5fkkq" Feb 16 23:50:23 crc kubenswrapper[4865]: I0216 23:50:23.151610 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e15920f0-41af-4f98-8b7d-5e40f97c7708-must-gather-output\") pod \"must-gather-5fkkq\" (UID: \"e15920f0-41af-4f98-8b7d-5e40f97c7708\") " 
pod="openshift-must-gather-zg27t/must-gather-5fkkq" Feb 16 23:50:23 crc kubenswrapper[4865]: I0216 23:50:23.151700 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25slf\" (UniqueName: \"kubernetes.io/projected/e15920f0-41af-4f98-8b7d-5e40f97c7708-kube-api-access-25slf\") pod \"must-gather-5fkkq\" (UID: \"e15920f0-41af-4f98-8b7d-5e40f97c7708\") " pod="openshift-must-gather-zg27t/must-gather-5fkkq" Feb 16 23:50:23 crc kubenswrapper[4865]: I0216 23:50:23.152374 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e15920f0-41af-4f98-8b7d-5e40f97c7708-must-gather-output\") pod \"must-gather-5fkkq\" (UID: \"e15920f0-41af-4f98-8b7d-5e40f97c7708\") " pod="openshift-must-gather-zg27t/must-gather-5fkkq" Feb 16 23:50:23 crc kubenswrapper[4865]: I0216 23:50:23.184891 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25slf\" (UniqueName: \"kubernetes.io/projected/e15920f0-41af-4f98-8b7d-5e40f97c7708-kube-api-access-25slf\") pod \"must-gather-5fkkq\" (UID: \"e15920f0-41af-4f98-8b7d-5e40f97c7708\") " pod="openshift-must-gather-zg27t/must-gather-5fkkq" Feb 16 23:50:23 crc kubenswrapper[4865]: I0216 23:50:23.195518 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zg27t/must-gather-5fkkq" Feb 16 23:50:23 crc kubenswrapper[4865]: I0216 23:50:23.741015 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-zg27t/must-gather-5fkkq"] Feb 16 23:50:24 crc kubenswrapper[4865]: I0216 23:50:24.266855 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zg27t/must-gather-5fkkq" event={"ID":"e15920f0-41af-4f98-8b7d-5e40f97c7708","Type":"ContainerStarted","Data":"f5f50a26ecbd95932c0598352e425b704b298ff36db335d12eb726711c346332"} Feb 16 23:50:24 crc kubenswrapper[4865]: I0216 23:50:24.267215 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zg27t/must-gather-5fkkq" event={"ID":"e15920f0-41af-4f98-8b7d-5e40f97c7708","Type":"ContainerStarted","Data":"3952e8811c99cc0f8184651129faffcc445f1a78cc308302e5b5b49b675372ed"} Feb 16 23:50:25 crc kubenswrapper[4865]: I0216 23:50:25.296565 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zg27t/must-gather-5fkkq" event={"ID":"e15920f0-41af-4f98-8b7d-5e40f97c7708","Type":"ContainerStarted","Data":"a9fc70b0f17d633847a785d711c390cf411aaaefc413d45d6077bb4f892de00c"} Feb 16 23:50:25 crc kubenswrapper[4865]: I0216 23:50:25.335981 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-zg27t/must-gather-5fkkq" podStartSLOduration=3.335954669 podStartE2EDuration="3.335954669s" podCreationTimestamp="2026-02-16 23:50:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 23:50:25.321861231 +0000 UTC m=+3865.645568222" watchObservedRunningTime="2026-02-16 23:50:25.335954669 +0000 UTC m=+3865.659661670" Feb 16 23:50:26 crc kubenswrapper[4865]: E0216 23:50:26.700004 4865 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.53:52880->38.102.83.53:45983: write tcp 
38.102.83.53:52880->38.102.83.53:45983: write: broken pipe Feb 16 23:50:26 crc kubenswrapper[4865]: E0216 23:50:26.717038 4865 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.53:52886->38.102.83.53:45983: read tcp 38.102.83.53:52886->38.102.83.53:45983: read: connection reset by peer Feb 16 23:50:27 crc kubenswrapper[4865]: I0216 23:50:27.627071 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-zg27t/crc-debug-fmgmx"] Feb 16 23:50:27 crc kubenswrapper[4865]: I0216 23:50:27.628716 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zg27t/crc-debug-fmgmx" Feb 16 23:50:27 crc kubenswrapper[4865]: I0216 23:50:27.743057 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6d4xz\" (UniqueName: \"kubernetes.io/projected/288562cf-53ac-4391-9e76-c56fd67afc53-kube-api-access-6d4xz\") pod \"crc-debug-fmgmx\" (UID: \"288562cf-53ac-4391-9e76-c56fd67afc53\") " pod="openshift-must-gather-zg27t/crc-debug-fmgmx" Feb 16 23:50:27 crc kubenswrapper[4865]: I0216 23:50:27.743109 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/288562cf-53ac-4391-9e76-c56fd67afc53-host\") pod \"crc-debug-fmgmx\" (UID: \"288562cf-53ac-4391-9e76-c56fd67afc53\") " pod="openshift-must-gather-zg27t/crc-debug-fmgmx" Feb 16 23:50:27 crc kubenswrapper[4865]: I0216 23:50:27.844862 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6d4xz\" (UniqueName: \"kubernetes.io/projected/288562cf-53ac-4391-9e76-c56fd67afc53-kube-api-access-6d4xz\") pod \"crc-debug-fmgmx\" (UID: \"288562cf-53ac-4391-9e76-c56fd67afc53\") " pod="openshift-must-gather-zg27t/crc-debug-fmgmx" Feb 16 23:50:27 crc kubenswrapper[4865]: I0216 23:50:27.845204 4865 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/288562cf-53ac-4391-9e76-c56fd67afc53-host\") pod \"crc-debug-fmgmx\" (UID: \"288562cf-53ac-4391-9e76-c56fd67afc53\") " pod="openshift-must-gather-zg27t/crc-debug-fmgmx" Feb 16 23:50:27 crc kubenswrapper[4865]: I0216 23:50:27.845413 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/288562cf-53ac-4391-9e76-c56fd67afc53-host\") pod \"crc-debug-fmgmx\" (UID: \"288562cf-53ac-4391-9e76-c56fd67afc53\") " pod="openshift-must-gather-zg27t/crc-debug-fmgmx" Feb 16 23:50:27 crc kubenswrapper[4865]: I0216 23:50:27.870193 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6d4xz\" (UniqueName: \"kubernetes.io/projected/288562cf-53ac-4391-9e76-c56fd67afc53-kube-api-access-6d4xz\") pod \"crc-debug-fmgmx\" (UID: \"288562cf-53ac-4391-9e76-c56fd67afc53\") " pod="openshift-must-gather-zg27t/crc-debug-fmgmx" Feb 16 23:50:27 crc kubenswrapper[4865]: I0216 23:50:27.953554 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zg27t/crc-debug-fmgmx" Feb 16 23:50:27 crc kubenswrapper[4865]: W0216 23:50:27.995621 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod288562cf_53ac_4391_9e76_c56fd67afc53.slice/crio-a30e1ebff42db53be7c8f538609b4125cae2845f22dbd266b043f15514ace0c7 WatchSource:0}: Error finding container a30e1ebff42db53be7c8f538609b4125cae2845f22dbd266b043f15514ace0c7: Status 404 returned error can't find the container with id a30e1ebff42db53be7c8f538609b4125cae2845f22dbd266b043f15514ace0c7 Feb 16 23:50:28 crc kubenswrapper[4865]: I0216 23:50:28.323643 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zg27t/crc-debug-fmgmx" event={"ID":"288562cf-53ac-4391-9e76-c56fd67afc53","Type":"ContainerStarted","Data":"d190bc49555bbb14321caea8299370afa97192c62b085441dc69a8f7f0877708"} Feb 16 23:50:28 crc kubenswrapper[4865]: I0216 23:50:28.323930 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zg27t/crc-debug-fmgmx" event={"ID":"288562cf-53ac-4391-9e76-c56fd67afc53","Type":"ContainerStarted","Data":"a30e1ebff42db53be7c8f538609b4125cae2845f22dbd266b043f15514ace0c7"} Feb 16 23:50:34 crc kubenswrapper[4865]: I0216 23:50:34.414536 4865 scope.go:117] "RemoveContainer" containerID="3383d89c28198e5a6982e852d6fe4d402704ec0637203631cb7c4abb6bef7591" Feb 16 23:50:34 crc kubenswrapper[4865]: E0216 23:50:34.415300 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:50:47 crc kubenswrapper[4865]: I0216 23:50:47.414872 4865 
scope.go:117] "RemoveContainer" containerID="3383d89c28198e5a6982e852d6fe4d402704ec0637203631cb7c4abb6bef7591" Feb 16 23:50:47 crc kubenswrapper[4865]: E0216 23:50:47.415580 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:50:58 crc kubenswrapper[4865]: I0216 23:50:58.414936 4865 scope.go:117] "RemoveContainer" containerID="3383d89c28198e5a6982e852d6fe4d402704ec0637203631cb7c4abb6bef7591" Feb 16 23:50:58 crc kubenswrapper[4865]: E0216 23:50:58.416816 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:51:00 crc kubenswrapper[4865]: I0216 23:51:00.617086 4865 generic.go:334] "Generic (PLEG): container finished" podID="288562cf-53ac-4391-9e76-c56fd67afc53" containerID="d190bc49555bbb14321caea8299370afa97192c62b085441dc69a8f7f0877708" exitCode=0 Feb 16 23:51:00 crc kubenswrapper[4865]: I0216 23:51:00.617180 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zg27t/crc-debug-fmgmx" event={"ID":"288562cf-53ac-4391-9e76-c56fd67afc53","Type":"ContainerDied","Data":"d190bc49555bbb14321caea8299370afa97192c62b085441dc69a8f7f0877708"} Feb 16 23:51:01 crc kubenswrapper[4865]: I0216 23:51:01.744802 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zg27t/crc-debug-fmgmx" Feb 16 23:51:01 crc kubenswrapper[4865]: I0216 23:51:01.821100 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-zg27t/crc-debug-fmgmx"] Feb 16 23:51:01 crc kubenswrapper[4865]: I0216 23:51:01.827630 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-zg27t/crc-debug-fmgmx"] Feb 16 23:51:01 crc kubenswrapper[4865]: I0216 23:51:01.851907 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/288562cf-53ac-4391-9e76-c56fd67afc53-host\") pod \"288562cf-53ac-4391-9e76-c56fd67afc53\" (UID: \"288562cf-53ac-4391-9e76-c56fd67afc53\") " Feb 16 23:51:01 crc kubenswrapper[4865]: I0216 23:51:01.852049 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6d4xz\" (UniqueName: \"kubernetes.io/projected/288562cf-53ac-4391-9e76-c56fd67afc53-kube-api-access-6d4xz\") pod \"288562cf-53ac-4391-9e76-c56fd67afc53\" (UID: \"288562cf-53ac-4391-9e76-c56fd67afc53\") " Feb 16 23:51:01 crc kubenswrapper[4865]: I0216 23:51:01.853135 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/288562cf-53ac-4391-9e76-c56fd67afc53-host" (OuterVolumeSpecName: "host") pod "288562cf-53ac-4391-9e76-c56fd67afc53" (UID: "288562cf-53ac-4391-9e76-c56fd67afc53"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 23:51:01 crc kubenswrapper[4865]: I0216 23:51:01.870926 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/288562cf-53ac-4391-9e76-c56fd67afc53-kube-api-access-6d4xz" (OuterVolumeSpecName: "kube-api-access-6d4xz") pod "288562cf-53ac-4391-9e76-c56fd67afc53" (UID: "288562cf-53ac-4391-9e76-c56fd67afc53"). InnerVolumeSpecName "kube-api-access-6d4xz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:51:01 crc kubenswrapper[4865]: I0216 23:51:01.953756 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6d4xz\" (UniqueName: \"kubernetes.io/projected/288562cf-53ac-4391-9e76-c56fd67afc53-kube-api-access-6d4xz\") on node \"crc\" DevicePath \"\"" Feb 16 23:51:01 crc kubenswrapper[4865]: I0216 23:51:01.953996 4865 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/288562cf-53ac-4391-9e76-c56fd67afc53-host\") on node \"crc\" DevicePath \"\"" Feb 16 23:51:02 crc kubenswrapper[4865]: I0216 23:51:02.424911 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="288562cf-53ac-4391-9e76-c56fd67afc53" path="/var/lib/kubelet/pods/288562cf-53ac-4391-9e76-c56fd67afc53/volumes" Feb 16 23:51:02 crc kubenswrapper[4865]: I0216 23:51:02.635345 4865 scope.go:117] "RemoveContainer" containerID="d190bc49555bbb14321caea8299370afa97192c62b085441dc69a8f7f0877708" Feb 16 23:51:02 crc kubenswrapper[4865]: I0216 23:51:02.635533 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zg27t/crc-debug-fmgmx" Feb 16 23:51:03 crc kubenswrapper[4865]: I0216 23:51:03.196516 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-zg27t/crc-debug-rmgrt"] Feb 16 23:51:03 crc kubenswrapper[4865]: E0216 23:51:03.197256 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="288562cf-53ac-4391-9e76-c56fd67afc53" containerName="container-00" Feb 16 23:51:03 crc kubenswrapper[4865]: I0216 23:51:03.197275 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="288562cf-53ac-4391-9e76-c56fd67afc53" containerName="container-00" Feb 16 23:51:03 crc kubenswrapper[4865]: I0216 23:51:03.197822 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="288562cf-53ac-4391-9e76-c56fd67afc53" containerName="container-00" Feb 16 23:51:03 crc kubenswrapper[4865]: I0216 23:51:03.198721 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zg27t/crc-debug-rmgrt" Feb 16 23:51:03 crc kubenswrapper[4865]: I0216 23:51:03.276335 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c7d75bfc-a4da-4fa1-9ec3-96b57e853a78-host\") pod \"crc-debug-rmgrt\" (UID: \"c7d75bfc-a4da-4fa1-9ec3-96b57e853a78\") " pod="openshift-must-gather-zg27t/crc-debug-rmgrt" Feb 16 23:51:03 crc kubenswrapper[4865]: I0216 23:51:03.276758 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2csmd\" (UniqueName: \"kubernetes.io/projected/c7d75bfc-a4da-4fa1-9ec3-96b57e853a78-kube-api-access-2csmd\") pod \"crc-debug-rmgrt\" (UID: \"c7d75bfc-a4da-4fa1-9ec3-96b57e853a78\") " pod="openshift-must-gather-zg27t/crc-debug-rmgrt" Feb 16 23:51:03 crc kubenswrapper[4865]: I0216 23:51:03.379168 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/c7d75bfc-a4da-4fa1-9ec3-96b57e853a78-host\") pod \"crc-debug-rmgrt\" (UID: \"c7d75bfc-a4da-4fa1-9ec3-96b57e853a78\") " pod="openshift-must-gather-zg27t/crc-debug-rmgrt" Feb 16 23:51:03 crc kubenswrapper[4865]: I0216 23:51:03.379398 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2csmd\" (UniqueName: \"kubernetes.io/projected/c7d75bfc-a4da-4fa1-9ec3-96b57e853a78-kube-api-access-2csmd\") pod \"crc-debug-rmgrt\" (UID: \"c7d75bfc-a4da-4fa1-9ec3-96b57e853a78\") " pod="openshift-must-gather-zg27t/crc-debug-rmgrt" Feb 16 23:51:03 crc kubenswrapper[4865]: I0216 23:51:03.379412 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c7d75bfc-a4da-4fa1-9ec3-96b57e853a78-host\") pod \"crc-debug-rmgrt\" (UID: \"c7d75bfc-a4da-4fa1-9ec3-96b57e853a78\") " pod="openshift-must-gather-zg27t/crc-debug-rmgrt" Feb 16 23:51:03 crc kubenswrapper[4865]: I0216 23:51:03.401635 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2csmd\" (UniqueName: \"kubernetes.io/projected/c7d75bfc-a4da-4fa1-9ec3-96b57e853a78-kube-api-access-2csmd\") pod \"crc-debug-rmgrt\" (UID: \"c7d75bfc-a4da-4fa1-9ec3-96b57e853a78\") " pod="openshift-must-gather-zg27t/crc-debug-rmgrt" Feb 16 23:51:03 crc kubenswrapper[4865]: I0216 23:51:03.517834 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zg27t/crc-debug-rmgrt" Feb 16 23:51:03 crc kubenswrapper[4865]: I0216 23:51:03.646553 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zg27t/crc-debug-rmgrt" event={"ID":"c7d75bfc-a4da-4fa1-9ec3-96b57e853a78","Type":"ContainerStarted","Data":"7553dcf70a2f324bc63e247a9fb0f756914475226195626e588d255c55b4ca10"} Feb 16 23:51:04 crc kubenswrapper[4865]: I0216 23:51:04.659111 4865 generic.go:334] "Generic (PLEG): container finished" podID="c7d75bfc-a4da-4fa1-9ec3-96b57e853a78" containerID="948bdce0fa1e0cdaaa43317a22030442a55e25510ecbd585d62a41e4c6ab362b" exitCode=0 Feb 16 23:51:04 crc kubenswrapper[4865]: I0216 23:51:04.659179 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zg27t/crc-debug-rmgrt" event={"ID":"c7d75bfc-a4da-4fa1-9ec3-96b57e853a78","Type":"ContainerDied","Data":"948bdce0fa1e0cdaaa43317a22030442a55e25510ecbd585d62a41e4c6ab362b"} Feb 16 23:51:05 crc kubenswrapper[4865]: I0216 23:51:05.097866 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-zg27t/crc-debug-rmgrt"] Feb 16 23:51:05 crc kubenswrapper[4865]: I0216 23:51:05.105965 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-zg27t/crc-debug-rmgrt"] Feb 16 23:51:05 crc kubenswrapper[4865]: I0216 23:51:05.751595 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zg27t/crc-debug-rmgrt" Feb 16 23:51:05 crc kubenswrapper[4865]: I0216 23:51:05.824093 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2csmd\" (UniqueName: \"kubernetes.io/projected/c7d75bfc-a4da-4fa1-9ec3-96b57e853a78-kube-api-access-2csmd\") pod \"c7d75bfc-a4da-4fa1-9ec3-96b57e853a78\" (UID: \"c7d75bfc-a4da-4fa1-9ec3-96b57e853a78\") " Feb 16 23:51:05 crc kubenswrapper[4865]: I0216 23:51:05.824167 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c7d75bfc-a4da-4fa1-9ec3-96b57e853a78-host\") pod \"c7d75bfc-a4da-4fa1-9ec3-96b57e853a78\" (UID: \"c7d75bfc-a4da-4fa1-9ec3-96b57e853a78\") " Feb 16 23:51:05 crc kubenswrapper[4865]: I0216 23:51:05.824784 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7d75bfc-a4da-4fa1-9ec3-96b57e853a78-host" (OuterVolumeSpecName: "host") pod "c7d75bfc-a4da-4fa1-9ec3-96b57e853a78" (UID: "c7d75bfc-a4da-4fa1-9ec3-96b57e853a78"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 23:51:05 crc kubenswrapper[4865]: I0216 23:51:05.829827 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7d75bfc-a4da-4fa1-9ec3-96b57e853a78-kube-api-access-2csmd" (OuterVolumeSpecName: "kube-api-access-2csmd") pod "c7d75bfc-a4da-4fa1-9ec3-96b57e853a78" (UID: "c7d75bfc-a4da-4fa1-9ec3-96b57e853a78"). InnerVolumeSpecName "kube-api-access-2csmd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:51:05 crc kubenswrapper[4865]: I0216 23:51:05.926129 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2csmd\" (UniqueName: \"kubernetes.io/projected/c7d75bfc-a4da-4fa1-9ec3-96b57e853a78-kube-api-access-2csmd\") on node \"crc\" DevicePath \"\"" Feb 16 23:51:05 crc kubenswrapper[4865]: I0216 23:51:05.926160 4865 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c7d75bfc-a4da-4fa1-9ec3-96b57e853a78-host\") on node \"crc\" DevicePath \"\"" Feb 16 23:51:06 crc kubenswrapper[4865]: I0216 23:51:06.424041 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7d75bfc-a4da-4fa1-9ec3-96b57e853a78" path="/var/lib/kubelet/pods/c7d75bfc-a4da-4fa1-9ec3-96b57e853a78/volumes" Feb 16 23:51:06 crc kubenswrapper[4865]: I0216 23:51:06.443378 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-zg27t/crc-debug-9hx8n"] Feb 16 23:51:06 crc kubenswrapper[4865]: E0216 23:51:06.443768 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7d75bfc-a4da-4fa1-9ec3-96b57e853a78" containerName="container-00" Feb 16 23:51:06 crc kubenswrapper[4865]: I0216 23:51:06.443786 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7d75bfc-a4da-4fa1-9ec3-96b57e853a78" containerName="container-00" Feb 16 23:51:06 crc kubenswrapper[4865]: I0216 23:51:06.443973 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7d75bfc-a4da-4fa1-9ec3-96b57e853a78" containerName="container-00" Feb 16 23:51:06 crc kubenswrapper[4865]: I0216 23:51:06.446596 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zg27t/crc-debug-9hx8n" Feb 16 23:51:06 crc kubenswrapper[4865]: I0216 23:51:06.537508 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1e70f186-362b-4bd2-8334-84819825116a-host\") pod \"crc-debug-9hx8n\" (UID: \"1e70f186-362b-4bd2-8334-84819825116a\") " pod="openshift-must-gather-zg27t/crc-debug-9hx8n" Feb 16 23:51:06 crc kubenswrapper[4865]: I0216 23:51:06.537576 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7zm2\" (UniqueName: \"kubernetes.io/projected/1e70f186-362b-4bd2-8334-84819825116a-kube-api-access-w7zm2\") pod \"crc-debug-9hx8n\" (UID: \"1e70f186-362b-4bd2-8334-84819825116a\") " pod="openshift-must-gather-zg27t/crc-debug-9hx8n" Feb 16 23:51:06 crc kubenswrapper[4865]: I0216 23:51:06.640548 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1e70f186-362b-4bd2-8334-84819825116a-host\") pod \"crc-debug-9hx8n\" (UID: \"1e70f186-362b-4bd2-8334-84819825116a\") " pod="openshift-must-gather-zg27t/crc-debug-9hx8n" Feb 16 23:51:06 crc kubenswrapper[4865]: I0216 23:51:06.640602 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7zm2\" (UniqueName: \"kubernetes.io/projected/1e70f186-362b-4bd2-8334-84819825116a-kube-api-access-w7zm2\") pod \"crc-debug-9hx8n\" (UID: \"1e70f186-362b-4bd2-8334-84819825116a\") " pod="openshift-must-gather-zg27t/crc-debug-9hx8n" Feb 16 23:51:06 crc kubenswrapper[4865]: I0216 23:51:06.640752 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1e70f186-362b-4bd2-8334-84819825116a-host\") pod \"crc-debug-9hx8n\" (UID: \"1e70f186-362b-4bd2-8334-84819825116a\") " pod="openshift-must-gather-zg27t/crc-debug-9hx8n" Feb 16 23:51:06 crc 
kubenswrapper[4865]: I0216 23:51:06.666451 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7zm2\" (UniqueName: \"kubernetes.io/projected/1e70f186-362b-4bd2-8334-84819825116a-kube-api-access-w7zm2\") pod \"crc-debug-9hx8n\" (UID: \"1e70f186-362b-4bd2-8334-84819825116a\") " pod="openshift-must-gather-zg27t/crc-debug-9hx8n" Feb 16 23:51:06 crc kubenswrapper[4865]: I0216 23:51:06.675689 4865 scope.go:117] "RemoveContainer" containerID="948bdce0fa1e0cdaaa43317a22030442a55e25510ecbd585d62a41e4c6ab362b" Feb 16 23:51:06 crc kubenswrapper[4865]: I0216 23:51:06.675902 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zg27t/crc-debug-rmgrt" Feb 16 23:51:06 crc kubenswrapper[4865]: I0216 23:51:06.763196 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zg27t/crc-debug-9hx8n" Feb 16 23:51:06 crc kubenswrapper[4865]: W0216 23:51:06.788990 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e70f186_362b_4bd2_8334_84819825116a.slice/crio-7ee89ecd0cfdcd8b7678af16f83dc832d62a4f456866402496c844fea25dafed WatchSource:0}: Error finding container 7ee89ecd0cfdcd8b7678af16f83dc832d62a4f456866402496c844fea25dafed: Status 404 returned error can't find the container with id 7ee89ecd0cfdcd8b7678af16f83dc832d62a4f456866402496c844fea25dafed Feb 16 23:51:07 crc kubenswrapper[4865]: I0216 23:51:07.685486 4865 generic.go:334] "Generic (PLEG): container finished" podID="1e70f186-362b-4bd2-8334-84819825116a" containerID="d0da980998e932b2d72ecc8929bf145ae4397b74d88d25aa058cf3e1b7802082" exitCode=0 Feb 16 23:51:07 crc kubenswrapper[4865]: I0216 23:51:07.685567 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zg27t/crc-debug-9hx8n" 
event={"ID":"1e70f186-362b-4bd2-8334-84819825116a","Type":"ContainerDied","Data":"d0da980998e932b2d72ecc8929bf145ae4397b74d88d25aa058cf3e1b7802082"} Feb 16 23:51:07 crc kubenswrapper[4865]: I0216 23:51:07.685842 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zg27t/crc-debug-9hx8n" event={"ID":"1e70f186-362b-4bd2-8334-84819825116a","Type":"ContainerStarted","Data":"7ee89ecd0cfdcd8b7678af16f83dc832d62a4f456866402496c844fea25dafed"} Feb 16 23:51:07 crc kubenswrapper[4865]: I0216 23:51:07.722598 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-zg27t/crc-debug-9hx8n"] Feb 16 23:51:07 crc kubenswrapper[4865]: I0216 23:51:07.729811 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-zg27t/crc-debug-9hx8n"] Feb 16 23:51:08 crc kubenswrapper[4865]: I0216 23:51:08.788364 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zg27t/crc-debug-9hx8n" Feb 16 23:51:08 crc kubenswrapper[4865]: I0216 23:51:08.879876 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1e70f186-362b-4bd2-8334-84819825116a-host\") pod \"1e70f186-362b-4bd2-8334-84819825116a\" (UID: \"1e70f186-362b-4bd2-8334-84819825116a\") " Feb 16 23:51:08 crc kubenswrapper[4865]: I0216 23:51:08.879940 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7zm2\" (UniqueName: \"kubernetes.io/projected/1e70f186-362b-4bd2-8334-84819825116a-kube-api-access-w7zm2\") pod \"1e70f186-362b-4bd2-8334-84819825116a\" (UID: \"1e70f186-362b-4bd2-8334-84819825116a\") " Feb 16 23:51:08 crc kubenswrapper[4865]: I0216 23:51:08.880416 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e70f186-362b-4bd2-8334-84819825116a-host" (OuterVolumeSpecName: "host") pod "1e70f186-362b-4bd2-8334-84819825116a" (UID: 
"1e70f186-362b-4bd2-8334-84819825116a"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 23:51:08 crc kubenswrapper[4865]: I0216 23:51:08.898464 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e70f186-362b-4bd2-8334-84819825116a-kube-api-access-w7zm2" (OuterVolumeSpecName: "kube-api-access-w7zm2") pod "1e70f186-362b-4bd2-8334-84819825116a" (UID: "1e70f186-362b-4bd2-8334-84819825116a"). InnerVolumeSpecName "kube-api-access-w7zm2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:51:08 crc kubenswrapper[4865]: I0216 23:51:08.982175 4865 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1e70f186-362b-4bd2-8334-84819825116a-host\") on node \"crc\" DevicePath \"\"" Feb 16 23:51:08 crc kubenswrapper[4865]: I0216 23:51:08.982219 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7zm2\" (UniqueName: \"kubernetes.io/projected/1e70f186-362b-4bd2-8334-84819825116a-kube-api-access-w7zm2\") on node \"crc\" DevicePath \"\"" Feb 16 23:51:09 crc kubenswrapper[4865]: I0216 23:51:09.704452 4865 scope.go:117] "RemoveContainer" containerID="d0da980998e932b2d72ecc8929bf145ae4397b74d88d25aa058cf3e1b7802082" Feb 16 23:51:09 crc kubenswrapper[4865]: I0216 23:51:09.704517 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zg27t/crc-debug-9hx8n" Feb 16 23:51:10 crc kubenswrapper[4865]: I0216 23:51:10.424620 4865 scope.go:117] "RemoveContainer" containerID="3383d89c28198e5a6982e852d6fe4d402704ec0637203631cb7c4abb6bef7591" Feb 16 23:51:10 crc kubenswrapper[4865]: E0216 23:51:10.425154 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:51:10 crc kubenswrapper[4865]: I0216 23:51:10.425272 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e70f186-362b-4bd2-8334-84819825116a" path="/var/lib/kubelet/pods/1e70f186-362b-4bd2-8334-84819825116a/volumes" Feb 16 23:51:23 crc kubenswrapper[4865]: I0216 23:51:23.414332 4865 scope.go:117] "RemoveContainer" containerID="3383d89c28198e5a6982e852d6fe4d402704ec0637203631cb7c4abb6bef7591" Feb 16 23:51:23 crc kubenswrapper[4865]: E0216 23:51:23.415085 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:51:34 crc kubenswrapper[4865]: I0216 23:51:34.415385 4865 scope.go:117] "RemoveContainer" containerID="3383d89c28198e5a6982e852d6fe4d402704ec0637203631cb7c4abb6bef7591" Feb 16 23:51:34 crc kubenswrapper[4865]: E0216 23:51:34.416531 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:51:41 crc kubenswrapper[4865]: I0216 23:51:41.528080 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-54bd7477c8-zrrzr_8c454ac8-1c92-42d1-a889-6f42e4d73f86/barbican-api/0.log" Feb 16 23:51:41 crc kubenswrapper[4865]: I0216 23:51:41.716345 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-54bd7477c8-zrrzr_8c454ac8-1c92-42d1-a889-6f42e4d73f86/barbican-api-log/0.log" Feb 16 23:51:41 crc kubenswrapper[4865]: I0216 23:51:41.792805 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6b6779c894-4z8tf_14d1a57c-7cda-4753-a6de-fe9a98f4fd02/barbican-keystone-listener/0.log" Feb 16 23:51:41 crc kubenswrapper[4865]: I0216 23:51:41.823317 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6b6779c894-4z8tf_14d1a57c-7cda-4753-a6de-fe9a98f4fd02/barbican-keystone-listener-log/0.log" Feb 16 23:51:41 crc kubenswrapper[4865]: I0216 23:51:41.991595 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-58c998ff9-ghm8t_6633f123-ac1f-4a25-b20d-0c0eda648f92/barbican-worker/0.log" Feb 16 23:51:41 crc kubenswrapper[4865]: I0216 23:51:41.996638 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-58c998ff9-ghm8t_6633f123-ac1f-4a25-b20d-0c0eda648f92/barbican-worker-log/0.log" Feb 16 23:51:42 crc kubenswrapper[4865]: I0216 23:51:42.140977 4865 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-js45z_24da9b19-2d45-4f18-a79e-bf378e4ee44d/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 23:51:42 crc kubenswrapper[4865]: I0216 23:51:42.207952 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8e6fcbf6-3f21-4134-9ace-bbbe418e9599/ceilometer-central-agent/0.log" Feb 16 23:51:42 crc kubenswrapper[4865]: I0216 23:51:42.291387 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8e6fcbf6-3f21-4134-9ace-bbbe418e9599/ceilometer-notification-agent/0.log" Feb 16 23:51:42 crc kubenswrapper[4865]: I0216 23:51:42.338316 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8e6fcbf6-3f21-4134-9ace-bbbe418e9599/proxy-httpd/0.log" Feb 16 23:51:42 crc kubenswrapper[4865]: I0216 23:51:42.398790 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8e6fcbf6-3f21-4134-9ace-bbbe418e9599/sg-core/0.log" Feb 16 23:51:42 crc kubenswrapper[4865]: I0216 23:51:42.528029 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_ce76a8fb-fb3b-4af8-a4aa-d8ae4e31a5c5/cinder-api-log/0.log" Feb 16 23:51:42 crc kubenswrapper[4865]: I0216 23:51:42.556208 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_ce76a8fb-fb3b-4af8-a4aa-d8ae4e31a5c5/cinder-api/0.log" Feb 16 23:51:42 crc kubenswrapper[4865]: I0216 23:51:42.732644 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_04210f96-20a4-48af-b1cb-f7ea73adc9a3/probe/0.log" Feb 16 23:51:42 crc kubenswrapper[4865]: I0216 23:51:42.749163 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_04210f96-20a4-48af-b1cb-f7ea73adc9a3/cinder-scheduler/0.log" Feb 16 23:51:42 crc kubenswrapper[4865]: I0216 23:51:42.877120 4865 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-p5tqh_4e39dd59-456f-42dd-bc53-254730e44297/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 23:51:42 crc kubenswrapper[4865]: I0216 23:51:42.971848 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-d4hv6_058417d9-13ea-48ba-8bf8-2cdf141c94b6/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 23:51:43 crc kubenswrapper[4865]: I0216 23:51:43.100038 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-s47vk_e06861ae-60fd-47ad-8c55-82641a24d552/init/0.log" Feb 16 23:51:43 crc kubenswrapper[4865]: I0216 23:51:43.249049 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-s47vk_e06861ae-60fd-47ad-8c55-82641a24d552/init/0.log" Feb 16 23:51:43 crc kubenswrapper[4865]: I0216 23:51:43.270182 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-rwzxd_45600784-63ad-4273-ab6d-5732fc0988e6/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 23:51:43 crc kubenswrapper[4865]: I0216 23:51:43.350234 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-s47vk_e06861ae-60fd-47ad-8c55-82641a24d552/dnsmasq-dns/0.log" Feb 16 23:51:43 crc kubenswrapper[4865]: I0216 23:51:43.460761 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_443145e3-8ed2-4863-bde1-9b932b22ef00/glance-log/0.log" Feb 16 23:51:43 crc kubenswrapper[4865]: I0216 23:51:43.486900 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_443145e3-8ed2-4863-bde1-9b932b22ef00/glance-httpd/0.log" Feb 16 23:51:43 crc kubenswrapper[4865]: I0216 23:51:43.887524 4865 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-internal-api-0_60070db1-5a47-4b70-b318-46f3745677c5/glance-httpd/0.log" Feb 16 23:51:43 crc kubenswrapper[4865]: I0216 23:51:43.899795 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_60070db1-5a47-4b70-b318-46f3745677c5/glance-log/0.log" Feb 16 23:51:44 crc kubenswrapper[4865]: I0216 23:51:44.064208 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7ff854866d-9gv97_17473ac0-4cf6-4751-ac0e-fd73a9fc2c7a/horizon/0.log" Feb 16 23:51:44 crc kubenswrapper[4865]: I0216 23:51:44.142697 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-kgskk_9bd41f0a-9736-4ede-8d1f-5c39bda1db42/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 23:51:44 crc kubenswrapper[4865]: I0216 23:51:44.408260 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-blwr8_d3a477d8-8710-4da3-b229-8787e3787f46/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 23:51:44 crc kubenswrapper[4865]: I0216 23:51:44.449551 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7ff854866d-9gv97_17473ac0-4cf6-4751-ac0e-fd73a9fc2c7a/horizon-log/0.log" Feb 16 23:51:44 crc kubenswrapper[4865]: I0216 23:51:44.678114 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_381c66d6-4d83-453d-bb97-35888127917f/kube-state-metrics/0.log" Feb 16 23:51:44 crc kubenswrapper[4865]: I0216 23:51:44.806578 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-66c88cfbc7-mhfsh_cd912ee6-bda4-4859-a70d-3f53ca61ba60/keystone-api/0.log" Feb 16 23:51:44 crc kubenswrapper[4865]: I0216 23:51:44.876021 4865 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-vwzrl_de77d0a7-2fdd-48d9-a2ba-827deafc0437/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 23:51:45 crc kubenswrapper[4865]: I0216 23:51:45.173127 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-84f9dbdcc7-p5njv_1828fcf9-f296-46f5-a15d-7280fe715721/neutron-api/0.log" Feb 16 23:51:45 crc kubenswrapper[4865]: I0216 23:51:45.221869 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-84f9dbdcc7-p5njv_1828fcf9-f296-46f5-a15d-7280fe715721/neutron-httpd/0.log" Feb 16 23:51:45 crc kubenswrapper[4865]: I0216 23:51:45.453781 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-252l2_ea192f95-6e32-46e1-ac67-715417874376/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 23:51:45 crc kubenswrapper[4865]: I0216 23:51:45.915352 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_5a4b1542-e38f-4ebf-9ca9-028ced41d506/nova-api-log/0.log" Feb 16 23:51:46 crc kubenswrapper[4865]: I0216 23:51:46.056796 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_cf4d46ef-86f3-450e-9b46-a0ee9085e51d/nova-cell0-conductor-conductor/0.log" Feb 16 23:51:46 crc kubenswrapper[4865]: I0216 23:51:46.341814 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_5a4b1542-e38f-4ebf-9ca9-028ced41d506/nova-api-api/0.log" Feb 16 23:51:46 crc kubenswrapper[4865]: I0216 23:51:46.342998 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_e94e6b7d-55e6-4b25-9663-6cdc0440681f/nova-cell1-conductor-conductor/0.log" Feb 16 23:51:46 crc kubenswrapper[4865]: I0216 23:51:46.436165 4865 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell1-novncproxy-0_ae0a9df0-5b8e-4ea5-a4db-25ae964f56f6/nova-cell1-novncproxy-novncproxy/0.log" Feb 16 23:51:46 crc kubenswrapper[4865]: I0216 23:51:46.603186 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-wd8vd_14a00f0e-5a36-481b-a8ad-78032cfa0616/nova-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 23:51:46 crc kubenswrapper[4865]: I0216 23:51:46.755717 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_811614ef-6229-489e-8da4-e1d4b1a5d5fd/nova-metadata-log/0.log" Feb 16 23:51:47 crc kubenswrapper[4865]: I0216 23:51:47.057017 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_59ff0541-9e7b-4f6e-8dbb-af16f656abeb/mysql-bootstrap/0.log" Feb 16 23:51:47 crc kubenswrapper[4865]: I0216 23:51:47.083928 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_a0810695-85aa-432f-8a1d-f5bf69077393/nova-scheduler-scheduler/0.log" Feb 16 23:51:47 crc kubenswrapper[4865]: I0216 23:51:47.263113 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_59ff0541-9e7b-4f6e-8dbb-af16f656abeb/mysql-bootstrap/0.log" Feb 16 23:51:47 crc kubenswrapper[4865]: I0216 23:51:47.307303 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_59ff0541-9e7b-4f6e-8dbb-af16f656abeb/galera/0.log" Feb 16 23:51:47 crc kubenswrapper[4865]: I0216 23:51:47.666699 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_0ba121bd-0fd3-46b5-b719-f113e7afc99c/mysql-bootstrap/0.log" Feb 16 23:51:47 crc kubenswrapper[4865]: I0216 23:51:47.816347 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_0ba121bd-0fd3-46b5-b719-f113e7afc99c/mysql-bootstrap/0.log" Feb 16 23:51:47 crc kubenswrapper[4865]: I0216 23:51:47.823617 4865 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_0ba121bd-0fd3-46b5-b719-f113e7afc99c/galera/0.log" Feb 16 23:51:48 crc kubenswrapper[4865]: I0216 23:51:48.015329 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_328cf5b9-9c5d-4cfa-ae62-1ab76d210788/openstackclient/0.log" Feb 16 23:51:48 crc kubenswrapper[4865]: I0216 23:51:48.048082 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_811614ef-6229-489e-8da4-e1d4b1a5d5fd/nova-metadata-metadata/0.log" Feb 16 23:51:48 crc kubenswrapper[4865]: I0216 23:51:48.110050 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-cllwp_0934d4bc-f8b7-4fbb-9309-20826e6aa578/openstack-network-exporter/0.log" Feb 16 23:51:48 crc kubenswrapper[4865]: I0216 23:51:48.216497 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-vmd6x_395c2af4-48dc-44d3-bb74-ef2b3e024c62/ovsdb-server-init/0.log" Feb 16 23:51:48 crc kubenswrapper[4865]: I0216 23:51:48.408009 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-vmd6x_395c2af4-48dc-44d3-bb74-ef2b3e024c62/ovsdb-server-init/0.log" Feb 16 23:51:48 crc kubenswrapper[4865]: I0216 23:51:48.436135 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-vmd6x_395c2af4-48dc-44d3-bb74-ef2b3e024c62/ovs-vswitchd/0.log" Feb 16 23:51:48 crc kubenswrapper[4865]: I0216 23:51:48.446452 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-vmd6x_395c2af4-48dc-44d3-bb74-ef2b3e024c62/ovsdb-server/0.log" Feb 16 23:51:48 crc kubenswrapper[4865]: I0216 23:51:48.623628 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-plt5q_abf5edf2-8442-4aca-b35b-051b9f366b9a/ovn-controller/0.log" Feb 16 23:51:48 crc kubenswrapper[4865]: I0216 23:51:48.740075 4865 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-cbbp2_1c70b630-7dee-4749-9903-9d0f2e3b9196/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 23:51:48 crc kubenswrapper[4865]: I0216 23:51:48.827084 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_485d0f59-bf5e-43d4-b35e-e4a40273a666/openstack-network-exporter/0.log" Feb 16 23:51:48 crc kubenswrapper[4865]: I0216 23:51:48.916918 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_485d0f59-bf5e-43d4-b35e-e4a40273a666/ovn-northd/0.log" Feb 16 23:51:49 crc kubenswrapper[4865]: I0216 23:51:49.028759 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_99094c44-3d04-4263-a6b7-efc49f5e0fa2/openstack-network-exporter/0.log" Feb 16 23:51:49 crc kubenswrapper[4865]: I0216 23:51:49.063190 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_99094c44-3d04-4263-a6b7-efc49f5e0fa2/ovsdbserver-nb/0.log" Feb 16 23:51:49 crc kubenswrapper[4865]: I0216 23:51:49.170052 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_ae2d74d5-cebc-4243-a288-d6d901192de7/openstack-network-exporter/0.log" Feb 16 23:51:49 crc kubenswrapper[4865]: I0216 23:51:49.266721 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_ae2d74d5-cebc-4243-a288-d6d901192de7/ovsdbserver-sb/0.log" Feb 16 23:51:49 crc kubenswrapper[4865]: I0216 23:51:49.408004 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5695f8dc4-jj7h5_51d7f054-f0b9-43fc-b704-ac61bd427bb0/placement-api/0.log" Feb 16 23:51:49 crc kubenswrapper[4865]: I0216 23:51:49.415760 4865 scope.go:117] "RemoveContainer" containerID="3383d89c28198e5a6982e852d6fe4d402704ec0637203631cb7c4abb6bef7591" Feb 16 23:51:49 crc kubenswrapper[4865]: E0216 23:51:49.415958 4865 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:51:49 crc kubenswrapper[4865]: I0216 23:51:49.496972 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5695f8dc4-jj7h5_51d7f054-f0b9-43fc-b704-ac61bd427bb0/placement-log/0.log" Feb 16 23:51:49 crc kubenswrapper[4865]: I0216 23:51:49.617198 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_6912835d-d862-4295-9a6c-67deb30cbfba/setup-container/0.log" Feb 16 23:51:49 crc kubenswrapper[4865]: I0216 23:51:49.868751 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_2e2b8953-a55e-40c8-974f-a76a1352fbfb/setup-container/0.log" Feb 16 23:51:49 crc kubenswrapper[4865]: I0216 23:51:49.904372 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_6912835d-d862-4295-9a6c-67deb30cbfba/setup-container/0.log" Feb 16 23:51:49 crc kubenswrapper[4865]: I0216 23:51:49.907417 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_6912835d-d862-4295-9a6c-67deb30cbfba/rabbitmq/0.log" Feb 16 23:51:50 crc kubenswrapper[4865]: I0216 23:51:50.066545 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_2e2b8953-a55e-40c8-974f-a76a1352fbfb/setup-container/0.log" Feb 16 23:51:50 crc kubenswrapper[4865]: I0216 23:51:50.117318 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-s867f_c2f56f0d-1a38-4756-b13f-e961a66b7594/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 23:51:50 crc 
kubenswrapper[4865]: I0216 23:51:50.160012 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_2e2b8953-a55e-40c8-974f-a76a1352fbfb/rabbitmq/0.log" Feb 16 23:51:50 crc kubenswrapper[4865]: I0216 23:51:50.302853 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-6crtq_82582c93-5f30-417e-a5f1-62038c6f8000/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 23:51:50 crc kubenswrapper[4865]: I0216 23:51:50.446100 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-22ttt_1bbe0349-2def-4238-880b-5cd6ed9e0413/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 23:51:50 crc kubenswrapper[4865]: I0216 23:51:50.505630 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-lwkkt_86d1001f-6633-4b05-8a8f-cee820027d08/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 23:51:50 crc kubenswrapper[4865]: I0216 23:51:50.659508 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-wdfb7_15b21ee4-d297-4297-9752-c0642717510e/ssh-known-hosts-edpm-deployment/0.log" Feb 16 23:51:50 crc kubenswrapper[4865]: I0216 23:51:50.899431 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6b5cb7cc4c-8d58d_a7143f0f-06af-4d75-960a-2488e9b131bc/proxy-server/0.log" Feb 16 23:51:50 crc kubenswrapper[4865]: I0216 23:51:50.935160 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-gq4bx_342162e1-dfde-4ad8-b1e6-0a4afc9dbdf3/swift-ring-rebalance/0.log" Feb 16 23:51:50 crc kubenswrapper[4865]: I0216 23:51:50.948737 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6b5cb7cc4c-8d58d_a7143f0f-06af-4d75-960a-2488e9b131bc/proxy-httpd/0.log" Feb 16 23:51:51 crc kubenswrapper[4865]: I0216 
23:51:51.116075 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_34486574-e35d-4674-a0b3-57d122050e66/account-auditor/0.log" Feb 16 23:51:51 crc kubenswrapper[4865]: I0216 23:51:51.155020 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_34486574-e35d-4674-a0b3-57d122050e66/account-replicator/0.log" Feb 16 23:51:51 crc kubenswrapper[4865]: I0216 23:51:51.156769 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_34486574-e35d-4674-a0b3-57d122050e66/account-reaper/0.log" Feb 16 23:51:51 crc kubenswrapper[4865]: I0216 23:51:51.492953 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_34486574-e35d-4674-a0b3-57d122050e66/account-server/0.log" Feb 16 23:51:51 crc kubenswrapper[4865]: I0216 23:51:51.537204 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_34486574-e35d-4674-a0b3-57d122050e66/container-auditor/0.log" Feb 16 23:51:51 crc kubenswrapper[4865]: I0216 23:51:51.571177 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_34486574-e35d-4674-a0b3-57d122050e66/container-replicator/0.log" Feb 16 23:51:51 crc kubenswrapper[4865]: I0216 23:51:51.582789 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_34486574-e35d-4674-a0b3-57d122050e66/container-server/0.log" Feb 16 23:51:51 crc kubenswrapper[4865]: I0216 23:51:51.698807 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_34486574-e35d-4674-a0b3-57d122050e66/container-updater/0.log" Feb 16 23:51:51 crc kubenswrapper[4865]: I0216 23:51:51.747553 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_34486574-e35d-4674-a0b3-57d122050e66/object-auditor/0.log" Feb 16 23:51:51 crc kubenswrapper[4865]: I0216 23:51:51.796132 4865 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_34486574-e35d-4674-a0b3-57d122050e66/object-expirer/0.log" Feb 16 23:51:51 crc kubenswrapper[4865]: I0216 23:51:51.808997 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_34486574-e35d-4674-a0b3-57d122050e66/object-replicator/0.log" Feb 16 23:51:51 crc kubenswrapper[4865]: I0216 23:51:51.905393 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_34486574-e35d-4674-a0b3-57d122050e66/object-server/0.log" Feb 16 23:51:51 crc kubenswrapper[4865]: I0216 23:51:51.949129 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_34486574-e35d-4674-a0b3-57d122050e66/object-updater/0.log" Feb 16 23:51:51 crc kubenswrapper[4865]: I0216 23:51:51.964452 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_34486574-e35d-4674-a0b3-57d122050e66/rsync/0.log" Feb 16 23:51:52 crc kubenswrapper[4865]: I0216 23:51:52.032887 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_34486574-e35d-4674-a0b3-57d122050e66/swift-recon-cron/0.log" Feb 16 23:51:52 crc kubenswrapper[4865]: I0216 23:51:52.197913 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-8dkwj_56a9e58a-8161-4d27-96d4-1459ec03b3ed/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 23:51:52 crc kubenswrapper[4865]: I0216 23:51:52.386456 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_dc785498-c658-47ed-8329-0e8c81c771be/tempest-tests-tempest-tests-runner/0.log" Feb 16 23:51:52 crc kubenswrapper[4865]: I0216 23:51:52.406190 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_b3290302-9cc6-4e19-8492-1179e4163169/test-operator-logs-container/0.log" Feb 16 23:51:52 crc kubenswrapper[4865]: I0216 
23:51:52.596732 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-bplzx_9cb0e39e-0d5d-4758-a44e-06867bdf08da/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 16 23:52:00 crc kubenswrapper[4865]: I0216 23:52:00.445734 4865 scope.go:117] "RemoveContainer" containerID="3383d89c28198e5a6982e852d6fe4d402704ec0637203631cb7c4abb6bef7591" Feb 16 23:52:00 crc kubenswrapper[4865]: E0216 23:52:00.449434 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:52:01 crc kubenswrapper[4865]: I0216 23:52:01.082413 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_5568f4b1-9ca1-4de9-9355-ffc7b0281375/memcached/0.log" Feb 16 23:52:15 crc kubenswrapper[4865]: I0216 23:52:15.414569 4865 scope.go:117] "RemoveContainer" containerID="3383d89c28198e5a6982e852d6fe4d402704ec0637203631cb7c4abb6bef7591" Feb 16 23:52:15 crc kubenswrapper[4865]: E0216 23:52:15.415601 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:52:21 crc kubenswrapper[4865]: I0216 23:52:21.244929 4865 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4ckkmg_7cf141a0-2b74-4eb1-99e2-80774839ccd6/util/0.log" Feb 16 23:52:21 crc kubenswrapper[4865]: I0216 23:52:21.388668 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4ckkmg_7cf141a0-2b74-4eb1-99e2-80774839ccd6/util/0.log" Feb 16 23:52:21 crc kubenswrapper[4865]: I0216 23:52:21.413958 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4ckkmg_7cf141a0-2b74-4eb1-99e2-80774839ccd6/pull/0.log" Feb 16 23:52:21 crc kubenswrapper[4865]: I0216 23:52:21.425934 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4ckkmg_7cf141a0-2b74-4eb1-99e2-80774839ccd6/pull/0.log" Feb 16 23:52:21 crc kubenswrapper[4865]: I0216 23:52:21.580453 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4ckkmg_7cf141a0-2b74-4eb1-99e2-80774839ccd6/pull/0.log" Feb 16 23:52:21 crc kubenswrapper[4865]: I0216 23:52:21.586640 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4ckkmg_7cf141a0-2b74-4eb1-99e2-80774839ccd6/extract/0.log" Feb 16 23:52:21 crc kubenswrapper[4865]: I0216 23:52:21.605622 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4ckkmg_7cf141a0-2b74-4eb1-99e2-80774839ccd6/util/0.log" Feb 16 23:52:22 crc kubenswrapper[4865]: I0216 23:52:22.008534 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-5ntcr_829ee3ed-5827-46ee-8399-f0b82ffa4d1d/manager/0.log" Feb 16 23:52:22 crc 
kubenswrapper[4865]: I0216 23:52:22.355382 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987464f4-kvmr9_61ac3013-99f7-4aef-b85d-8675044accc6/manager/0.log" Feb 16 23:52:22 crc kubenswrapper[4865]: I0216 23:52:22.435014 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-jmmxl_f1b2a884-8e78-47ac-9c45-7861a81e02d4/manager/0.log" Feb 16 23:52:22 crc kubenswrapper[4865]: I0216 23:52:22.653764 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-vc4dc_73e02b9a-66d8-4fb4-bc3d-13610563b6e4/manager/0.log" Feb 16 23:52:23 crc kubenswrapper[4865]: I0216 23:52:23.384879 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-mt6fh_812a9f63-a231-495c-9474-0c60929fabff/manager/0.log" Feb 16 23:52:23 crc kubenswrapper[4865]: I0216 23:52:23.680273 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-wl4zd_dc7842ab-52e5-4223-8b2a-ab09641bf297/manager/0.log" Feb 16 23:52:23 crc kubenswrapper[4865]: I0216 23:52:23.922573 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5d946d989d-msxb8_3a4e76a0-aa8b-4ee8-b7a8-dc43a376c4ec/manager/0.log" Feb 16 23:52:23 crc kubenswrapper[4865]: I0216 23:52:23.997090 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-stf5r_68b414dd-a0c6-488a-b253-1a3f477cb7a8/manager/0.log" Feb 16 23:52:24 crc kubenswrapper[4865]: I0216 23:52:24.132933 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-54f6768c69-phdd6_dded450f-3a37-48b0-84fc-1de3c64c1954/manager/0.log" Feb 16 
23:52:24 crc kubenswrapper[4865]: I0216 23:52:24.280067 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-hw4fs_a1759f72-1644-42e2-9b67-01478800870b/manager/0.log" Feb 16 23:52:24 crc kubenswrapper[4865]: I0216 23:52:24.524231 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-64ddbf8bb-wk47p_f2e7b18d-0e13-4ef4-a4e2-d10b5f55763b/manager/0.log" Feb 16 23:52:24 crc kubenswrapper[4865]: I0216 23:52:24.791070 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-xzpc9_3be11752-93fd-4edc-b100-0bfd29f599e8/manager/0.log" Feb 16 23:52:24 crc kubenswrapper[4865]: I0216 23:52:24.977953 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9cts9lz_a9614d13-aca5-4ffa-9cc1-dd8767e11ac4/manager/0.log" Feb 16 23:52:25 crc kubenswrapper[4865]: I0216 23:52:25.644849 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-7d7c89f976-vxpzk_5c43d211-62d0-403c-90d5-00c0bfcfa692/operator/0.log" Feb 16 23:52:25 crc kubenswrapper[4865]: I0216 23:52:25.861261 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-ndhqn_4adfbea3-c2d3-45a2-8858-8a1f867ebf5b/registry-server/0.log" Feb 16 23:52:26 crc kubenswrapper[4865]: I0216 23:52:26.176753 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-d44cf6b75-946kc_60e0dd0a-0055-45ec-8a4c-f0c23cd214b6/manager/0.log" Feb 16 23:52:26 crc kubenswrapper[4865]: I0216 23:52:26.406485 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-9tnrt_196fc76c-2c5d-45ec-8106-4d0a3382d16e/manager/0.log" 
Feb 16 23:52:26 crc kubenswrapper[4865]: I0216 23:52:26.590995 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-zxt7q_f0d444ee-7bd9-40ed-ab3a-766aa716336c/operator/0.log" Feb 16 23:52:26 crc kubenswrapper[4865]: I0216 23:52:26.825969 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-s9vk2_5d77ae74-7238-4c9f-8ae1-33064d8824c2/manager/0.log" Feb 16 23:52:27 crc kubenswrapper[4865]: I0216 23:52:27.123443 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7f45b4ff68-dz8t2_da795bac-53b5-415b-9297-26e5502fceb8/manager/0.log" Feb 16 23:52:27 crc kubenswrapper[4865]: I0216 23:52:27.292778 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7866795846-29v4v_21f8cf30-0215-4501-af0f-ff1220d4252b/manager/0.log" Feb 16 23:52:27 crc kubenswrapper[4865]: I0216 23:52:27.441855 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69f8888797-m8rmj_2b4f33b1-b5a3-4935-8036-deb97cfedfe7/manager/0.log" Feb 16 23:52:27 crc kubenswrapper[4865]: I0216 23:52:27.470687 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5db88f68c-m62lh_1cfcc69c-1d21-4b1e-894d-d3ae72c39513/manager/0.log" Feb 16 23:52:27 crc kubenswrapper[4865]: I0216 23:52:27.532391 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-85988dbd5c-sb7sh_24704625-9cce-4f47-847c-ab4d95d3adb1/manager/0.log" Feb 16 23:52:29 crc kubenswrapper[4865]: I0216 23:52:29.414267 4865 scope.go:117] "RemoveContainer" containerID="3383d89c28198e5a6982e852d6fe4d402704ec0637203631cb7c4abb6bef7591" Feb 16 23:52:29 crc kubenswrapper[4865]: E0216 
23:52:29.414734 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:52:32 crc kubenswrapper[4865]: I0216 23:52:32.154247 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-7nt98_a5025501-39c8-43ae-8b94-3a555517b1f7/manager/0.log" Feb 16 23:52:41 crc kubenswrapper[4865]: I0216 23:52:41.414822 4865 scope.go:117] "RemoveContainer" containerID="3383d89c28198e5a6982e852d6fe4d402704ec0637203631cb7c4abb6bef7591" Feb 16 23:52:41 crc kubenswrapper[4865]: E0216 23:52:41.415903 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:52:49 crc kubenswrapper[4865]: I0216 23:52:49.417890 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-qcl4t_5cfba6b6-3d1e-49d9-902e-b3493e1ffc97/control-plane-machine-set-operator/0.log" Feb 16 23:52:49 crc kubenswrapper[4865]: I0216 23:52:49.608622 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-676jr_c4592f72-2b39-47bf-beed-e53bf3865b22/kube-rbac-proxy/0.log" Feb 16 23:52:49 crc kubenswrapper[4865]: I0216 23:52:49.641730 4865 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-676jr_c4592f72-2b39-47bf-beed-e53bf3865b22/machine-api-operator/0.log" Feb 16 23:52:53 crc kubenswrapper[4865]: I0216 23:52:53.414479 4865 scope.go:117] "RemoveContainer" containerID="3383d89c28198e5a6982e852d6fe4d402704ec0637203631cb7c4abb6bef7591" Feb 16 23:52:55 crc kubenswrapper[4865]: I0216 23:52:55.615969 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" event={"ID":"af5ee041-5763-4a28-9d12-7ba21bbb9dbc","Type":"ContainerStarted","Data":"58aefa703623799d41feb8865996c3e1d396a388211338e1daea13013f302070"} Feb 16 23:53:04 crc kubenswrapper[4865]: I0216 23:53:04.442624 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-8wctr_bde79990-dee1-4694-bf0c-f569702b84c6/cert-manager-controller/0.log" Feb 16 23:53:05 crc kubenswrapper[4865]: I0216 23:53:05.202666 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-nwktn_799f3815-d78f-449e-b798-63000e62d953/cert-manager-cainjector/0.log" Feb 16 23:53:05 crc kubenswrapper[4865]: I0216 23:53:05.206397 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-pzb4b_bbc85b0c-aae5-4657-8c81-fed6b49e5d5d/cert-manager-webhook/0.log" Feb 16 23:53:20 crc kubenswrapper[4865]: I0216 23:53:20.137849 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-5q9z6_bee98b00-b363-4ff6-986b-33b5086b8453/nmstate-console-plugin/0.log" Feb 16 23:53:20 crc kubenswrapper[4865]: I0216 23:53:20.279857 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-pmm2s_212784bf-c832-42e4-92c0-b1c81994982f/nmstate-handler/0.log" Feb 16 23:53:20 crc kubenswrapper[4865]: I0216 23:53:20.303331 4865 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-4jnsb_1b2d9b3b-4c11-4bae-9930-68b45a15ba52/kube-rbac-proxy/0.log" Feb 16 23:53:20 crc kubenswrapper[4865]: I0216 23:53:20.551230 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-4jnsb_1b2d9b3b-4c11-4bae-9930-68b45a15ba52/nmstate-metrics/0.log" Feb 16 23:53:20 crc kubenswrapper[4865]: I0216 23:53:20.656312 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-hj5j9_5c652e54-0a32-41f0-844b-4f00cdb36ec3/nmstate-operator/0.log" Feb 16 23:53:20 crc kubenswrapper[4865]: I0216 23:53:20.773146 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-7fpp6_1ab27e8f-8d04-461e-8726-1ca46394c9b6/nmstate-webhook/0.log" Feb 16 23:53:52 crc kubenswrapper[4865]: I0216 23:53:52.853881 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-6xrxq_3d514685-83a5-4f3b-a89e-4490181e0109/kube-rbac-proxy/0.log" Feb 16 23:53:52 crc kubenswrapper[4865]: I0216 23:53:52.921044 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-6xrxq_3d514685-83a5-4f3b-a89e-4490181e0109/controller/0.log" Feb 16 23:53:53 crc kubenswrapper[4865]: I0216 23:53:53.072135 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dbc9s_e1365192-a9fe-4c70-8118-7e76620b9c8c/cp-frr-files/0.log" Feb 16 23:53:53 crc kubenswrapper[4865]: I0216 23:53:53.251167 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dbc9s_e1365192-a9fe-4c70-8118-7e76620b9c8c/cp-metrics/0.log" Feb 16 23:53:53 crc kubenswrapper[4865]: I0216 23:53:53.265355 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dbc9s_e1365192-a9fe-4c70-8118-7e76620b9c8c/cp-reloader/0.log" Feb 16 23:53:53 crc kubenswrapper[4865]: I0216 
23:53:53.291717 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dbc9s_e1365192-a9fe-4c70-8118-7e76620b9c8c/cp-frr-files/0.log" Feb 16 23:53:53 crc kubenswrapper[4865]: I0216 23:53:53.304176 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dbc9s_e1365192-a9fe-4c70-8118-7e76620b9c8c/cp-reloader/0.log" Feb 16 23:53:53 crc kubenswrapper[4865]: I0216 23:53:53.508471 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dbc9s_e1365192-a9fe-4c70-8118-7e76620b9c8c/cp-reloader/0.log" Feb 16 23:53:53 crc kubenswrapper[4865]: I0216 23:53:53.543259 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dbc9s_e1365192-a9fe-4c70-8118-7e76620b9c8c/cp-frr-files/0.log" Feb 16 23:53:53 crc kubenswrapper[4865]: I0216 23:53:53.543590 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dbc9s_e1365192-a9fe-4c70-8118-7e76620b9c8c/cp-metrics/0.log" Feb 16 23:53:53 crc kubenswrapper[4865]: I0216 23:53:53.569781 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dbc9s_e1365192-a9fe-4c70-8118-7e76620b9c8c/cp-metrics/0.log" Feb 16 23:53:53 crc kubenswrapper[4865]: I0216 23:53:53.702948 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dbc9s_e1365192-a9fe-4c70-8118-7e76620b9c8c/cp-metrics/0.log" Feb 16 23:53:53 crc kubenswrapper[4865]: I0216 23:53:53.711823 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dbc9s_e1365192-a9fe-4c70-8118-7e76620b9c8c/cp-reloader/0.log" Feb 16 23:53:53 crc kubenswrapper[4865]: I0216 23:53:53.748793 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dbc9s_e1365192-a9fe-4c70-8118-7e76620b9c8c/cp-frr-files/0.log" Feb 16 23:53:53 crc kubenswrapper[4865]: I0216 23:53:53.766957 4865 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-dbc9s_e1365192-a9fe-4c70-8118-7e76620b9c8c/controller/0.log" Feb 16 23:53:53 crc kubenswrapper[4865]: I0216 23:53:53.886903 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dbc9s_e1365192-a9fe-4c70-8118-7e76620b9c8c/frr-metrics/0.log" Feb 16 23:53:53 crc kubenswrapper[4865]: I0216 23:53:53.946168 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dbc9s_e1365192-a9fe-4c70-8118-7e76620b9c8c/kube-rbac-proxy/0.log" Feb 16 23:53:53 crc kubenswrapper[4865]: I0216 23:53:53.991017 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dbc9s_e1365192-a9fe-4c70-8118-7e76620b9c8c/kube-rbac-proxy-frr/0.log" Feb 16 23:53:54 crc kubenswrapper[4865]: I0216 23:53:54.145694 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dbc9s_e1365192-a9fe-4c70-8118-7e76620b9c8c/reloader/0.log" Feb 16 23:53:54 crc kubenswrapper[4865]: I0216 23:53:54.178961 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-q8f6g_dde900ad-54aa-4b98-ac05-bbae1b0ce210/frr-k8s-webhook-server/0.log" Feb 16 23:53:54 crc kubenswrapper[4865]: I0216 23:53:54.463704 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7b7848d955-px2kr_10795d8f-8c08-4f6d-bc5d-4446befaa125/manager/0.log" Feb 16 23:53:54 crc kubenswrapper[4865]: I0216 23:53:54.602525 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-65947454b9-fm4f8_ca554199-8669-41a4-aac9-abe2657e896f/webhook-server/0.log" Feb 16 23:53:54 crc kubenswrapper[4865]: I0216 23:53:54.730597 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-k4twk_7cec452b-64c9-41d6-ae80-458c9c316981/kube-rbac-proxy/0.log" Feb 16 23:53:55 crc kubenswrapper[4865]: I0216 23:53:55.267877 4865 
log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dbc9s_e1365192-a9fe-4c70-8118-7e76620b9c8c/frr/0.log" Feb 16 23:53:55 crc kubenswrapper[4865]: I0216 23:53:55.274376 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-k4twk_7cec452b-64c9-41d6-ae80-458c9c316981/speaker/0.log" Feb 16 23:54:11 crc kubenswrapper[4865]: I0216 23:54:11.165936 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2136zxlv_f619952d-3fa5-48e1-a477-f4cbfb893bc1/util/0.log" Feb 16 23:54:11 crc kubenswrapper[4865]: I0216 23:54:11.606376 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2136zxlv_f619952d-3fa5-48e1-a477-f4cbfb893bc1/util/0.log" Feb 16 23:54:11 crc kubenswrapper[4865]: I0216 23:54:11.674167 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2136zxlv_f619952d-3fa5-48e1-a477-f4cbfb893bc1/pull/0.log" Feb 16 23:54:11 crc kubenswrapper[4865]: I0216 23:54:11.715790 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2136zxlv_f619952d-3fa5-48e1-a477-f4cbfb893bc1/pull/0.log" Feb 16 23:54:11 crc kubenswrapper[4865]: I0216 23:54:11.881555 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2136zxlv_f619952d-3fa5-48e1-a477-f4cbfb893bc1/util/0.log" Feb 16 23:54:11 crc kubenswrapper[4865]: I0216 23:54:11.888117 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2136zxlv_f619952d-3fa5-48e1-a477-f4cbfb893bc1/pull/0.log" Feb 16 23:54:11 crc kubenswrapper[4865]: I0216 23:54:11.893872 4865 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2136zxlv_f619952d-3fa5-48e1-a477-f4cbfb893bc1/extract/0.log" Feb 16 23:54:12 crc kubenswrapper[4865]: I0216 23:54:12.036011 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7pqqn_41e68cd1-b151-4f99-b70f-43aced8e8b6d/extract-utilities/0.log" Feb 16 23:54:12 crc kubenswrapper[4865]: I0216 23:54:12.199309 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7pqqn_41e68cd1-b151-4f99-b70f-43aced8e8b6d/extract-content/0.log" Feb 16 23:54:12 crc kubenswrapper[4865]: I0216 23:54:12.214943 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7pqqn_41e68cd1-b151-4f99-b70f-43aced8e8b6d/extract-content/0.log" Feb 16 23:54:12 crc kubenswrapper[4865]: I0216 23:54:12.216645 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7pqqn_41e68cd1-b151-4f99-b70f-43aced8e8b6d/extract-utilities/0.log" Feb 16 23:54:12 crc kubenswrapper[4865]: I0216 23:54:12.425159 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7pqqn_41e68cd1-b151-4f99-b70f-43aced8e8b6d/extract-content/0.log" Feb 16 23:54:12 crc kubenswrapper[4865]: I0216 23:54:12.429604 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7pqqn_41e68cd1-b151-4f99-b70f-43aced8e8b6d/extract-utilities/0.log" Feb 16 23:54:12 crc kubenswrapper[4865]: I0216 23:54:12.711117 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pvbx9_d5d5e784-ef08-461d-87c1-f7c1fbe0dcce/extract-utilities/0.log" Feb 16 23:54:12 crc kubenswrapper[4865]: I0216 23:54:12.875432 4865 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-pvbx9_d5d5e784-ef08-461d-87c1-f7c1fbe0dcce/extract-utilities/0.log" Feb 16 23:54:12 crc kubenswrapper[4865]: I0216 23:54:12.887296 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7pqqn_41e68cd1-b151-4f99-b70f-43aced8e8b6d/registry-server/0.log" Feb 16 23:54:12 crc kubenswrapper[4865]: I0216 23:54:12.903827 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pvbx9_d5d5e784-ef08-461d-87c1-f7c1fbe0dcce/extract-content/0.log" Feb 16 23:54:12 crc kubenswrapper[4865]: I0216 23:54:12.954232 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pvbx9_d5d5e784-ef08-461d-87c1-f7c1fbe0dcce/extract-content/0.log" Feb 16 23:54:13 crc kubenswrapper[4865]: I0216 23:54:13.149167 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pvbx9_d5d5e784-ef08-461d-87c1-f7c1fbe0dcce/extract-content/0.log" Feb 16 23:54:13 crc kubenswrapper[4865]: I0216 23:54:13.170005 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pvbx9_d5d5e784-ef08-461d-87c1-f7c1fbe0dcce/extract-utilities/0.log" Feb 16 23:54:13 crc kubenswrapper[4865]: I0216 23:54:13.445776 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecag792d_ac3cdf44-e500-4a8d-ba2d-d43a02f67bad/util/0.log" Feb 16 23:54:13 crc kubenswrapper[4865]: I0216 23:54:13.631803 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecag792d_ac3cdf44-e500-4a8d-ba2d-d43a02f67bad/util/0.log" Feb 16 23:54:13 crc kubenswrapper[4865]: I0216 23:54:13.693841 4865 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecag792d_ac3cdf44-e500-4a8d-ba2d-d43a02f67bad/pull/0.log" Feb 16 23:54:13 crc kubenswrapper[4865]: I0216 23:54:13.741072 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecag792d_ac3cdf44-e500-4a8d-ba2d-d43a02f67bad/pull/0.log" Feb 16 23:54:13 crc kubenswrapper[4865]: I0216 23:54:13.874353 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pvbx9_d5d5e784-ef08-461d-87c1-f7c1fbe0dcce/registry-server/0.log" Feb 16 23:54:13 crc kubenswrapper[4865]: I0216 23:54:13.932432 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecag792d_ac3cdf44-e500-4a8d-ba2d-d43a02f67bad/pull/0.log" Feb 16 23:54:13 crc kubenswrapper[4865]: I0216 23:54:13.936321 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecag792d_ac3cdf44-e500-4a8d-ba2d-d43a02f67bad/extract/0.log" Feb 16 23:54:13 crc kubenswrapper[4865]: I0216 23:54:13.989665 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecag792d_ac3cdf44-e500-4a8d-ba2d-d43a02f67bad/util/0.log" Feb 16 23:54:14 crc kubenswrapper[4865]: I0216 23:54:14.136368 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-pggsl_91cc827b-b0d7-49d3-8c52-99670081f857/marketplace-operator/0.log" Feb 16 23:54:14 crc kubenswrapper[4865]: I0216 23:54:14.175461 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-44d2j_f1e60e96-48c4-4e5e-9561-1bc4ca0aa959/extract-utilities/0.log" Feb 16 23:54:14 crc kubenswrapper[4865]: I0216 23:54:14.391712 4865 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-44d2j_f1e60e96-48c4-4e5e-9561-1bc4ca0aa959/extract-content/0.log" Feb 16 23:54:14 crc kubenswrapper[4865]: I0216 23:54:14.419154 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-44d2j_f1e60e96-48c4-4e5e-9561-1bc4ca0aa959/extract-content/0.log" Feb 16 23:54:14 crc kubenswrapper[4865]: I0216 23:54:14.422504 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-44d2j_f1e60e96-48c4-4e5e-9561-1bc4ca0aa959/extract-utilities/0.log" Feb 16 23:54:14 crc kubenswrapper[4865]: I0216 23:54:14.606694 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-44d2j_f1e60e96-48c4-4e5e-9561-1bc4ca0aa959/extract-utilities/0.log" Feb 16 23:54:14 crc kubenswrapper[4865]: I0216 23:54:14.670251 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-44d2j_f1e60e96-48c4-4e5e-9561-1bc4ca0aa959/extract-content/0.log" Feb 16 23:54:14 crc kubenswrapper[4865]: I0216 23:54:14.723853 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-44d2j_f1e60e96-48c4-4e5e-9561-1bc4ca0aa959/registry-server/0.log" Feb 16 23:54:14 crc kubenswrapper[4865]: I0216 23:54:14.725382 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-q6rgp_7ec6baab-71df-4145-92a7-98fa5a885810/extract-utilities/0.log" Feb 16 23:54:15 crc kubenswrapper[4865]: I0216 23:54:15.180421 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-q6rgp_7ec6baab-71df-4145-92a7-98fa5a885810/extract-content/0.log" Feb 16 23:54:15 crc kubenswrapper[4865]: I0216 23:54:15.218689 4865 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-q6rgp_7ec6baab-71df-4145-92a7-98fa5a885810/extract-content/0.log" Feb 16 23:54:15 crc kubenswrapper[4865]: I0216 23:54:15.222837 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-q6rgp_7ec6baab-71df-4145-92a7-98fa5a885810/extract-utilities/0.log" Feb 16 23:54:15 crc kubenswrapper[4865]: I0216 23:54:15.386961 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-q6rgp_7ec6baab-71df-4145-92a7-98fa5a885810/extract-content/0.log" Feb 16 23:54:15 crc kubenswrapper[4865]: I0216 23:54:15.393334 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-q6rgp_7ec6baab-71df-4145-92a7-98fa5a885810/extract-utilities/0.log" Feb 16 23:54:15 crc kubenswrapper[4865]: I0216 23:54:15.857584 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-q6rgp_7ec6baab-71df-4145-92a7-98fa5a885810/registry-server/0.log" Feb 16 23:54:18 crc kubenswrapper[4865]: I0216 23:54:18.458039 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rwvkn"] Feb 16 23:54:18 crc kubenswrapper[4865]: E0216 23:54:18.459142 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e70f186-362b-4bd2-8334-84819825116a" containerName="container-00" Feb 16 23:54:18 crc kubenswrapper[4865]: I0216 23:54:18.459160 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e70f186-362b-4bd2-8334-84819825116a" containerName="container-00" Feb 16 23:54:18 crc kubenswrapper[4865]: I0216 23:54:18.459391 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e70f186-362b-4bd2-8334-84819825116a" containerName="container-00" Feb 16 23:54:18 crc kubenswrapper[4865]: I0216 23:54:18.461456 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rwvkn" Feb 16 23:54:18 crc kubenswrapper[4865]: I0216 23:54:18.475758 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rwvkn"] Feb 16 23:54:18 crc kubenswrapper[4865]: I0216 23:54:18.624083 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdaa5ab9-bb35-44d4-bb09-2ef7097979b3-catalog-content\") pod \"certified-operators-rwvkn\" (UID: \"fdaa5ab9-bb35-44d4-bb09-2ef7097979b3\") " pod="openshift-marketplace/certified-operators-rwvkn" Feb 16 23:54:18 crc kubenswrapper[4865]: I0216 23:54:18.624158 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdaa5ab9-bb35-44d4-bb09-2ef7097979b3-utilities\") pod \"certified-operators-rwvkn\" (UID: \"fdaa5ab9-bb35-44d4-bb09-2ef7097979b3\") " pod="openshift-marketplace/certified-operators-rwvkn" Feb 16 23:54:18 crc kubenswrapper[4865]: I0216 23:54:18.624189 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hghbd\" (UniqueName: \"kubernetes.io/projected/fdaa5ab9-bb35-44d4-bb09-2ef7097979b3-kube-api-access-hghbd\") pod \"certified-operators-rwvkn\" (UID: \"fdaa5ab9-bb35-44d4-bb09-2ef7097979b3\") " pod="openshift-marketplace/certified-operators-rwvkn" Feb 16 23:54:18 crc kubenswrapper[4865]: I0216 23:54:18.662986 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4jzcf"] Feb 16 23:54:18 crc kubenswrapper[4865]: I0216 23:54:18.665627 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4jzcf" Feb 16 23:54:18 crc kubenswrapper[4865]: I0216 23:54:18.672491 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4jzcf"] Feb 16 23:54:18 crc kubenswrapper[4865]: I0216 23:54:18.725616 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdaa5ab9-bb35-44d4-bb09-2ef7097979b3-catalog-content\") pod \"certified-operators-rwvkn\" (UID: \"fdaa5ab9-bb35-44d4-bb09-2ef7097979b3\") " pod="openshift-marketplace/certified-operators-rwvkn" Feb 16 23:54:18 crc kubenswrapper[4865]: I0216 23:54:18.725688 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdaa5ab9-bb35-44d4-bb09-2ef7097979b3-utilities\") pod \"certified-operators-rwvkn\" (UID: \"fdaa5ab9-bb35-44d4-bb09-2ef7097979b3\") " pod="openshift-marketplace/certified-operators-rwvkn" Feb 16 23:54:18 crc kubenswrapper[4865]: I0216 23:54:18.725719 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hghbd\" (UniqueName: \"kubernetes.io/projected/fdaa5ab9-bb35-44d4-bb09-2ef7097979b3-kube-api-access-hghbd\") pod \"certified-operators-rwvkn\" (UID: \"fdaa5ab9-bb35-44d4-bb09-2ef7097979b3\") " pod="openshift-marketplace/certified-operators-rwvkn" Feb 16 23:54:18 crc kubenswrapper[4865]: I0216 23:54:18.726105 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdaa5ab9-bb35-44d4-bb09-2ef7097979b3-catalog-content\") pod \"certified-operators-rwvkn\" (UID: \"fdaa5ab9-bb35-44d4-bb09-2ef7097979b3\") " pod="openshift-marketplace/certified-operators-rwvkn" Feb 16 23:54:18 crc kubenswrapper[4865]: I0216 23:54:18.726206 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/fdaa5ab9-bb35-44d4-bb09-2ef7097979b3-utilities\") pod \"certified-operators-rwvkn\" (UID: \"fdaa5ab9-bb35-44d4-bb09-2ef7097979b3\") " pod="openshift-marketplace/certified-operators-rwvkn" Feb 16 23:54:18 crc kubenswrapper[4865]: I0216 23:54:18.752502 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hghbd\" (UniqueName: \"kubernetes.io/projected/fdaa5ab9-bb35-44d4-bb09-2ef7097979b3-kube-api-access-hghbd\") pod \"certified-operators-rwvkn\" (UID: \"fdaa5ab9-bb35-44d4-bb09-2ef7097979b3\") " pod="openshift-marketplace/certified-operators-rwvkn" Feb 16 23:54:18 crc kubenswrapper[4865]: I0216 23:54:18.793224 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rwvkn" Feb 16 23:54:18 crc kubenswrapper[4865]: I0216 23:54:18.826933 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ck6l4\" (UniqueName: \"kubernetes.io/projected/e58cb838-f39b-492c-8b5a-0ed50c874e08-kube-api-access-ck6l4\") pod \"redhat-marketplace-4jzcf\" (UID: \"e58cb838-f39b-492c-8b5a-0ed50c874e08\") " pod="openshift-marketplace/redhat-marketplace-4jzcf" Feb 16 23:54:18 crc kubenswrapper[4865]: I0216 23:54:18.827157 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e58cb838-f39b-492c-8b5a-0ed50c874e08-catalog-content\") pod \"redhat-marketplace-4jzcf\" (UID: \"e58cb838-f39b-492c-8b5a-0ed50c874e08\") " pod="openshift-marketplace/redhat-marketplace-4jzcf" Feb 16 23:54:18 crc kubenswrapper[4865]: I0216 23:54:18.827344 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e58cb838-f39b-492c-8b5a-0ed50c874e08-utilities\") pod \"redhat-marketplace-4jzcf\" (UID: \"e58cb838-f39b-492c-8b5a-0ed50c874e08\") " 
pod="openshift-marketplace/redhat-marketplace-4jzcf" Feb 16 23:54:18 crc kubenswrapper[4865]: I0216 23:54:18.929307 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e58cb838-f39b-492c-8b5a-0ed50c874e08-utilities\") pod \"redhat-marketplace-4jzcf\" (UID: \"e58cb838-f39b-492c-8b5a-0ed50c874e08\") " pod="openshift-marketplace/redhat-marketplace-4jzcf" Feb 16 23:54:18 crc kubenswrapper[4865]: I0216 23:54:18.929401 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ck6l4\" (UniqueName: \"kubernetes.io/projected/e58cb838-f39b-492c-8b5a-0ed50c874e08-kube-api-access-ck6l4\") pod \"redhat-marketplace-4jzcf\" (UID: \"e58cb838-f39b-492c-8b5a-0ed50c874e08\") " pod="openshift-marketplace/redhat-marketplace-4jzcf" Feb 16 23:54:18 crc kubenswrapper[4865]: I0216 23:54:18.929482 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e58cb838-f39b-492c-8b5a-0ed50c874e08-catalog-content\") pod \"redhat-marketplace-4jzcf\" (UID: \"e58cb838-f39b-492c-8b5a-0ed50c874e08\") " pod="openshift-marketplace/redhat-marketplace-4jzcf" Feb 16 23:54:18 crc kubenswrapper[4865]: I0216 23:54:18.929974 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e58cb838-f39b-492c-8b5a-0ed50c874e08-catalog-content\") pod \"redhat-marketplace-4jzcf\" (UID: \"e58cb838-f39b-492c-8b5a-0ed50c874e08\") " pod="openshift-marketplace/redhat-marketplace-4jzcf" Feb 16 23:54:18 crc kubenswrapper[4865]: I0216 23:54:18.930530 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e58cb838-f39b-492c-8b5a-0ed50c874e08-utilities\") pod \"redhat-marketplace-4jzcf\" (UID: \"e58cb838-f39b-492c-8b5a-0ed50c874e08\") " pod="openshift-marketplace/redhat-marketplace-4jzcf" 
Feb 16 23:54:18 crc kubenswrapper[4865]: I0216 23:54:18.951045 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ck6l4\" (UniqueName: \"kubernetes.io/projected/e58cb838-f39b-492c-8b5a-0ed50c874e08-kube-api-access-ck6l4\") pod \"redhat-marketplace-4jzcf\" (UID: \"e58cb838-f39b-492c-8b5a-0ed50c874e08\") " pod="openshift-marketplace/redhat-marketplace-4jzcf" Feb 16 23:54:18 crc kubenswrapper[4865]: I0216 23:54:18.984244 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4jzcf" Feb 16 23:54:19 crc kubenswrapper[4865]: I0216 23:54:19.292249 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rwvkn"] Feb 16 23:54:19 crc kubenswrapper[4865]: I0216 23:54:19.359531 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4jzcf"] Feb 16 23:54:20 crc kubenswrapper[4865]: I0216 23:54:20.436448 4865 generic.go:334] "Generic (PLEG): container finished" podID="e58cb838-f39b-492c-8b5a-0ed50c874e08" containerID="d24b71e777dd91cbbd1d55891c522b52e576046635a9e3e77fbb80e25086d0d2" exitCode=0 Feb 16 23:54:20 crc kubenswrapper[4865]: I0216 23:54:20.440115 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4jzcf" event={"ID":"e58cb838-f39b-492c-8b5a-0ed50c874e08","Type":"ContainerDied","Data":"d24b71e777dd91cbbd1d55891c522b52e576046635a9e3e77fbb80e25086d0d2"} Feb 16 23:54:20 crc kubenswrapper[4865]: I0216 23:54:20.440177 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4jzcf" event={"ID":"e58cb838-f39b-492c-8b5a-0ed50c874e08","Type":"ContainerStarted","Data":"6c05fa25ca319d5498d041b08628209c27514c7b078678965bd2dd852ba0c353"} Feb 16 23:54:20 crc kubenswrapper[4865]: I0216 23:54:20.440861 4865 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider 
Feb 16 23:54:20 crc kubenswrapper[4865]: I0216 23:54:20.447897 4865 generic.go:334] "Generic (PLEG): container finished" podID="fdaa5ab9-bb35-44d4-bb09-2ef7097979b3" containerID="adc740b07dde2b9fe8a95e0641c3fcc6d6590f12d31dfece39d88d3534de0c5b" exitCode=0 Feb 16 23:54:20 crc kubenswrapper[4865]: I0216 23:54:20.447998 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rwvkn" event={"ID":"fdaa5ab9-bb35-44d4-bb09-2ef7097979b3","Type":"ContainerDied","Data":"adc740b07dde2b9fe8a95e0641c3fcc6d6590f12d31dfece39d88d3534de0c5b"} Feb 16 23:54:20 crc kubenswrapper[4865]: I0216 23:54:20.448075 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rwvkn" event={"ID":"fdaa5ab9-bb35-44d4-bb09-2ef7097979b3","Type":"ContainerStarted","Data":"45f51892fc4b90fb963f5c7af035a8766f2347d9a70892d4fec9e1454884e50a"} Feb 16 23:54:21 crc kubenswrapper[4865]: I0216 23:54:21.460566 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rwvkn" event={"ID":"fdaa5ab9-bb35-44d4-bb09-2ef7097979b3","Type":"ContainerStarted","Data":"7b23ab397698461143ea5d41bf849cf986c316f0eac588c7dc7882ed7a14b352"} Feb 16 23:54:21 crc kubenswrapper[4865]: I0216 23:54:21.463805 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4jzcf" event={"ID":"e58cb838-f39b-492c-8b5a-0ed50c874e08","Type":"ContainerStarted","Data":"e709513ea41d4e9f922a0308964b9647e89b47dd94498f5f8f540d14bd32835f"} Feb 16 23:54:22 crc kubenswrapper[4865]: I0216 23:54:22.483791 4865 generic.go:334] "Generic (PLEG): container finished" podID="e58cb838-f39b-492c-8b5a-0ed50c874e08" containerID="e709513ea41d4e9f922a0308964b9647e89b47dd94498f5f8f540d14bd32835f" exitCode=0 Feb 16 23:54:22 crc kubenswrapper[4865]: I0216 23:54:22.483922 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4jzcf" 
event={"ID":"e58cb838-f39b-492c-8b5a-0ed50c874e08","Type":"ContainerDied","Data":"e709513ea41d4e9f922a0308964b9647e89b47dd94498f5f8f540d14bd32835f"} Feb 16 23:54:23 crc kubenswrapper[4865]: I0216 23:54:23.494989 4865 generic.go:334] "Generic (PLEG): container finished" podID="fdaa5ab9-bb35-44d4-bb09-2ef7097979b3" containerID="7b23ab397698461143ea5d41bf849cf986c316f0eac588c7dc7882ed7a14b352" exitCode=0 Feb 16 23:54:23 crc kubenswrapper[4865]: I0216 23:54:23.495193 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rwvkn" event={"ID":"fdaa5ab9-bb35-44d4-bb09-2ef7097979b3","Type":"ContainerDied","Data":"7b23ab397698461143ea5d41bf849cf986c316f0eac588c7dc7882ed7a14b352"} Feb 16 23:54:23 crc kubenswrapper[4865]: I0216 23:54:23.498439 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4jzcf" event={"ID":"e58cb838-f39b-492c-8b5a-0ed50c874e08","Type":"ContainerStarted","Data":"65a1605d511d894a60b5618712039b92e0aca54a70fcdb3af64b1366ba97de36"} Feb 16 23:54:23 crc kubenswrapper[4865]: I0216 23:54:23.550095 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4jzcf" podStartSLOduration=3.121613603 podStartE2EDuration="5.550070823s" podCreationTimestamp="2026-02-16 23:54:18 +0000 UTC" firstStartedPulling="2026-02-16 23:54:20.440077326 +0000 UTC m=+4100.763784327" lastFinishedPulling="2026-02-16 23:54:22.868534586 +0000 UTC m=+4103.192241547" observedRunningTime="2026-02-16 23:54:23.544458985 +0000 UTC m=+4103.868165956" watchObservedRunningTime="2026-02-16 23:54:23.550070823 +0000 UTC m=+4103.873777794" Feb 16 23:54:24 crc kubenswrapper[4865]: I0216 23:54:24.511062 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rwvkn" 
event={"ID":"fdaa5ab9-bb35-44d4-bb09-2ef7097979b3","Type":"ContainerStarted","Data":"e653272a8452a6c94c6e06457d0fbd9bb35afdd087f73c8d449d5ce4e85d1711"} Feb 16 23:54:24 crc kubenswrapper[4865]: I0216 23:54:24.560908 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rwvkn" podStartSLOduration=3.133665625 podStartE2EDuration="6.560884254s" podCreationTimestamp="2026-02-16 23:54:18 +0000 UTC" firstStartedPulling="2026-02-16 23:54:20.456093569 +0000 UTC m=+4100.779800540" lastFinishedPulling="2026-02-16 23:54:23.883312198 +0000 UTC m=+4104.207019169" observedRunningTime="2026-02-16 23:54:24.554663308 +0000 UTC m=+4104.878370269" watchObservedRunningTime="2026-02-16 23:54:24.560884254 +0000 UTC m=+4104.884591225" Feb 16 23:54:28 crc kubenswrapper[4865]: I0216 23:54:28.793576 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rwvkn" Feb 16 23:54:28 crc kubenswrapper[4865]: I0216 23:54:28.794151 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rwvkn" Feb 16 23:54:28 crc kubenswrapper[4865]: I0216 23:54:28.857184 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rwvkn" Feb 16 23:54:28 crc kubenswrapper[4865]: I0216 23:54:28.986150 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4jzcf" Feb 16 23:54:28 crc kubenswrapper[4865]: I0216 23:54:28.986196 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4jzcf" Feb 16 23:54:29 crc kubenswrapper[4865]: I0216 23:54:29.047630 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4jzcf" Feb 16 23:54:29 crc kubenswrapper[4865]: I0216 23:54:29.622828 4865 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4jzcf" Feb 16 23:54:30 crc kubenswrapper[4865]: I0216 23:54:30.119521 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rwvkn" Feb 16 23:54:30 crc kubenswrapper[4865]: I0216 23:54:30.253171 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4jzcf"] Feb 16 23:54:31 crc kubenswrapper[4865]: I0216 23:54:31.573910 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4jzcf" podUID="e58cb838-f39b-492c-8b5a-0ed50c874e08" containerName="registry-server" containerID="cri-o://65a1605d511d894a60b5618712039b92e0aca54a70fcdb3af64b1366ba97de36" gracePeriod=2 Feb 16 23:54:32 crc kubenswrapper[4865]: I0216 23:54:32.467061 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rwvkn"] Feb 16 23:54:32 crc kubenswrapper[4865]: I0216 23:54:32.467727 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rwvkn" podUID="fdaa5ab9-bb35-44d4-bb09-2ef7097979b3" containerName="registry-server" containerID="cri-o://e653272a8452a6c94c6e06457d0fbd9bb35afdd087f73c8d449d5ce4e85d1711" gracePeriod=2 Feb 16 23:54:32 crc kubenswrapper[4865]: I0216 23:54:32.590089 4865 generic.go:334] "Generic (PLEG): container finished" podID="e58cb838-f39b-492c-8b5a-0ed50c874e08" containerID="65a1605d511d894a60b5618712039b92e0aca54a70fcdb3af64b1366ba97de36" exitCode=0 Feb 16 23:54:32 crc kubenswrapper[4865]: I0216 23:54:32.591667 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4jzcf" event={"ID":"e58cb838-f39b-492c-8b5a-0ed50c874e08","Type":"ContainerDied","Data":"65a1605d511d894a60b5618712039b92e0aca54a70fcdb3af64b1366ba97de36"} Feb 16 23:54:32 crc 
kubenswrapper[4865]: I0216 23:54:32.595264 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4jzcf" event={"ID":"e58cb838-f39b-492c-8b5a-0ed50c874e08","Type":"ContainerDied","Data":"6c05fa25ca319d5498d041b08628209c27514c7b078678965bd2dd852ba0c353"} Feb 16 23:54:32 crc kubenswrapper[4865]: I0216 23:54:32.595404 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c05fa25ca319d5498d041b08628209c27514c7b078678965bd2dd852ba0c353" Feb 16 23:54:32 crc kubenswrapper[4865]: I0216 23:54:32.804460 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4jzcf" Feb 16 23:54:32 crc kubenswrapper[4865]: I0216 23:54:32.905750 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ck6l4\" (UniqueName: \"kubernetes.io/projected/e58cb838-f39b-492c-8b5a-0ed50c874e08-kube-api-access-ck6l4\") pod \"e58cb838-f39b-492c-8b5a-0ed50c874e08\" (UID: \"e58cb838-f39b-492c-8b5a-0ed50c874e08\") " Feb 16 23:54:32 crc kubenswrapper[4865]: I0216 23:54:32.905893 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e58cb838-f39b-492c-8b5a-0ed50c874e08-utilities\") pod \"e58cb838-f39b-492c-8b5a-0ed50c874e08\" (UID: \"e58cb838-f39b-492c-8b5a-0ed50c874e08\") " Feb 16 23:54:32 crc kubenswrapper[4865]: I0216 23:54:32.905977 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e58cb838-f39b-492c-8b5a-0ed50c874e08-catalog-content\") pod \"e58cb838-f39b-492c-8b5a-0ed50c874e08\" (UID: \"e58cb838-f39b-492c-8b5a-0ed50c874e08\") " Feb 16 23:54:32 crc kubenswrapper[4865]: I0216 23:54:32.907393 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e58cb838-f39b-492c-8b5a-0ed50c874e08-utilities" 
(OuterVolumeSpecName: "utilities") pod "e58cb838-f39b-492c-8b5a-0ed50c874e08" (UID: "e58cb838-f39b-492c-8b5a-0ed50c874e08"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:54:32 crc kubenswrapper[4865]: I0216 23:54:32.914144 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e58cb838-f39b-492c-8b5a-0ed50c874e08-kube-api-access-ck6l4" (OuterVolumeSpecName: "kube-api-access-ck6l4") pod "e58cb838-f39b-492c-8b5a-0ed50c874e08" (UID: "e58cb838-f39b-492c-8b5a-0ed50c874e08"). InnerVolumeSpecName "kube-api-access-ck6l4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:54:32 crc kubenswrapper[4865]: I0216 23:54:32.930108 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e58cb838-f39b-492c-8b5a-0ed50c874e08-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e58cb838-f39b-492c-8b5a-0ed50c874e08" (UID: "e58cb838-f39b-492c-8b5a-0ed50c874e08"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:54:32 crc kubenswrapper[4865]: I0216 23:54:32.979748 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rwvkn" Feb 16 23:54:33 crc kubenswrapper[4865]: I0216 23:54:33.007776 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hghbd\" (UniqueName: \"kubernetes.io/projected/fdaa5ab9-bb35-44d4-bb09-2ef7097979b3-kube-api-access-hghbd\") pod \"fdaa5ab9-bb35-44d4-bb09-2ef7097979b3\" (UID: \"fdaa5ab9-bb35-44d4-bb09-2ef7097979b3\") " Feb 16 23:54:33 crc kubenswrapper[4865]: I0216 23:54:33.007817 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdaa5ab9-bb35-44d4-bb09-2ef7097979b3-catalog-content\") pod \"fdaa5ab9-bb35-44d4-bb09-2ef7097979b3\" (UID: \"fdaa5ab9-bb35-44d4-bb09-2ef7097979b3\") " Feb 16 23:54:33 crc kubenswrapper[4865]: I0216 23:54:33.011031 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdaa5ab9-bb35-44d4-bb09-2ef7097979b3-utilities\") pod \"fdaa5ab9-bb35-44d4-bb09-2ef7097979b3\" (UID: \"fdaa5ab9-bb35-44d4-bb09-2ef7097979b3\") " Feb 16 23:54:33 crc kubenswrapper[4865]: I0216 23:54:33.011824 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e58cb838-f39b-492c-8b5a-0ed50c874e08-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 23:54:33 crc kubenswrapper[4865]: I0216 23:54:33.011846 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ck6l4\" (UniqueName: \"kubernetes.io/projected/e58cb838-f39b-492c-8b5a-0ed50c874e08-kube-api-access-ck6l4\") on node \"crc\" DevicePath \"\"" Feb 16 23:54:33 crc kubenswrapper[4865]: I0216 23:54:33.011856 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e58cb838-f39b-492c-8b5a-0ed50c874e08-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 23:54:33 crc kubenswrapper[4865]: 
I0216 23:54:33.012899 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fdaa5ab9-bb35-44d4-bb09-2ef7097979b3-utilities" (OuterVolumeSpecName: "utilities") pod "fdaa5ab9-bb35-44d4-bb09-2ef7097979b3" (UID: "fdaa5ab9-bb35-44d4-bb09-2ef7097979b3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:54:33 crc kubenswrapper[4865]: I0216 23:54:33.015483 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdaa5ab9-bb35-44d4-bb09-2ef7097979b3-kube-api-access-hghbd" (OuterVolumeSpecName: "kube-api-access-hghbd") pod "fdaa5ab9-bb35-44d4-bb09-2ef7097979b3" (UID: "fdaa5ab9-bb35-44d4-bb09-2ef7097979b3"). InnerVolumeSpecName "kube-api-access-hghbd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:54:33 crc kubenswrapper[4865]: I0216 23:54:33.067502 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fdaa5ab9-bb35-44d4-bb09-2ef7097979b3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fdaa5ab9-bb35-44d4-bb09-2ef7097979b3" (UID: "fdaa5ab9-bb35-44d4-bb09-2ef7097979b3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:54:33 crc kubenswrapper[4865]: I0216 23:54:33.113168 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdaa5ab9-bb35-44d4-bb09-2ef7097979b3-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 23:54:33 crc kubenswrapper[4865]: I0216 23:54:33.113448 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hghbd\" (UniqueName: \"kubernetes.io/projected/fdaa5ab9-bb35-44d4-bb09-2ef7097979b3-kube-api-access-hghbd\") on node \"crc\" DevicePath \"\"" Feb 16 23:54:33 crc kubenswrapper[4865]: I0216 23:54:33.113458 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdaa5ab9-bb35-44d4-bb09-2ef7097979b3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 23:54:33 crc kubenswrapper[4865]: I0216 23:54:33.608195 4865 generic.go:334] "Generic (PLEG): container finished" podID="fdaa5ab9-bb35-44d4-bb09-2ef7097979b3" containerID="e653272a8452a6c94c6e06457d0fbd9bb35afdd087f73c8d449d5ce4e85d1711" exitCode=0 Feb 16 23:54:33 crc kubenswrapper[4865]: I0216 23:54:33.608364 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4jzcf" Feb 16 23:54:33 crc kubenswrapper[4865]: I0216 23:54:33.608365 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rwvkn" Feb 16 23:54:33 crc kubenswrapper[4865]: I0216 23:54:33.608428 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rwvkn" event={"ID":"fdaa5ab9-bb35-44d4-bb09-2ef7097979b3","Type":"ContainerDied","Data":"e653272a8452a6c94c6e06457d0fbd9bb35afdd087f73c8d449d5ce4e85d1711"} Feb 16 23:54:33 crc kubenswrapper[4865]: I0216 23:54:33.608468 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rwvkn" event={"ID":"fdaa5ab9-bb35-44d4-bb09-2ef7097979b3","Type":"ContainerDied","Data":"45f51892fc4b90fb963f5c7af035a8766f2347d9a70892d4fec9e1454884e50a"} Feb 16 23:54:33 crc kubenswrapper[4865]: I0216 23:54:33.608495 4865 scope.go:117] "RemoveContainer" containerID="e653272a8452a6c94c6e06457d0fbd9bb35afdd087f73c8d449d5ce4e85d1711" Feb 16 23:54:33 crc kubenswrapper[4865]: I0216 23:54:33.630895 4865 scope.go:117] "RemoveContainer" containerID="7b23ab397698461143ea5d41bf849cf986c316f0eac588c7dc7882ed7a14b352" Feb 16 23:54:33 crc kubenswrapper[4865]: I0216 23:54:33.669333 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4jzcf"] Feb 16 23:54:33 crc kubenswrapper[4865]: I0216 23:54:33.682271 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4jzcf"] Feb 16 23:54:33 crc kubenswrapper[4865]: I0216 23:54:33.685585 4865 scope.go:117] "RemoveContainer" containerID="adc740b07dde2b9fe8a95e0641c3fcc6d6590f12d31dfece39d88d3534de0c5b" Feb 16 23:54:33 crc kubenswrapper[4865]: I0216 23:54:33.689746 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rwvkn"] Feb 16 23:54:33 crc kubenswrapper[4865]: I0216 23:54:33.698872 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rwvkn"] Feb 16 23:54:33 crc kubenswrapper[4865]: I0216 
23:54:33.728341 4865 scope.go:117] "RemoveContainer" containerID="e653272a8452a6c94c6e06457d0fbd9bb35afdd087f73c8d449d5ce4e85d1711" Feb 16 23:54:33 crc kubenswrapper[4865]: E0216 23:54:33.728828 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e653272a8452a6c94c6e06457d0fbd9bb35afdd087f73c8d449d5ce4e85d1711\": container with ID starting with e653272a8452a6c94c6e06457d0fbd9bb35afdd087f73c8d449d5ce4e85d1711 not found: ID does not exist" containerID="e653272a8452a6c94c6e06457d0fbd9bb35afdd087f73c8d449d5ce4e85d1711" Feb 16 23:54:33 crc kubenswrapper[4865]: I0216 23:54:33.728890 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e653272a8452a6c94c6e06457d0fbd9bb35afdd087f73c8d449d5ce4e85d1711"} err="failed to get container status \"e653272a8452a6c94c6e06457d0fbd9bb35afdd087f73c8d449d5ce4e85d1711\": rpc error: code = NotFound desc = could not find container \"e653272a8452a6c94c6e06457d0fbd9bb35afdd087f73c8d449d5ce4e85d1711\": container with ID starting with e653272a8452a6c94c6e06457d0fbd9bb35afdd087f73c8d449d5ce4e85d1711 not found: ID does not exist" Feb 16 23:54:33 crc kubenswrapper[4865]: I0216 23:54:33.728918 4865 scope.go:117] "RemoveContainer" containerID="7b23ab397698461143ea5d41bf849cf986c316f0eac588c7dc7882ed7a14b352" Feb 16 23:54:33 crc kubenswrapper[4865]: E0216 23:54:33.729240 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b23ab397698461143ea5d41bf849cf986c316f0eac588c7dc7882ed7a14b352\": container with ID starting with 7b23ab397698461143ea5d41bf849cf986c316f0eac588c7dc7882ed7a14b352 not found: ID does not exist" containerID="7b23ab397698461143ea5d41bf849cf986c316f0eac588c7dc7882ed7a14b352" Feb 16 23:54:33 crc kubenswrapper[4865]: I0216 23:54:33.729261 4865 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"7b23ab397698461143ea5d41bf849cf986c316f0eac588c7dc7882ed7a14b352"} err="failed to get container status \"7b23ab397698461143ea5d41bf849cf986c316f0eac588c7dc7882ed7a14b352\": rpc error: code = NotFound desc = could not find container \"7b23ab397698461143ea5d41bf849cf986c316f0eac588c7dc7882ed7a14b352\": container with ID starting with 7b23ab397698461143ea5d41bf849cf986c316f0eac588c7dc7882ed7a14b352 not found: ID does not exist" Feb 16 23:54:33 crc kubenswrapper[4865]: I0216 23:54:33.729307 4865 scope.go:117] "RemoveContainer" containerID="adc740b07dde2b9fe8a95e0641c3fcc6d6590f12d31dfece39d88d3534de0c5b" Feb 16 23:54:33 crc kubenswrapper[4865]: E0216 23:54:33.729817 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"adc740b07dde2b9fe8a95e0641c3fcc6d6590f12d31dfece39d88d3534de0c5b\": container with ID starting with adc740b07dde2b9fe8a95e0641c3fcc6d6590f12d31dfece39d88d3534de0c5b not found: ID does not exist" containerID="adc740b07dde2b9fe8a95e0641c3fcc6d6590f12d31dfece39d88d3534de0c5b" Feb 16 23:54:33 crc kubenswrapper[4865]: I0216 23:54:33.729838 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adc740b07dde2b9fe8a95e0641c3fcc6d6590f12d31dfece39d88d3534de0c5b"} err="failed to get container status \"adc740b07dde2b9fe8a95e0641c3fcc6d6590f12d31dfece39d88d3534de0c5b\": rpc error: code = NotFound desc = could not find container \"adc740b07dde2b9fe8a95e0641c3fcc6d6590f12d31dfece39d88d3534de0c5b\": container with ID starting with adc740b07dde2b9fe8a95e0641c3fcc6d6590f12d31dfece39d88d3534de0c5b not found: ID does not exist" Feb 16 23:54:34 crc kubenswrapper[4865]: I0216 23:54:34.429227 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e58cb838-f39b-492c-8b5a-0ed50c874e08" path="/var/lib/kubelet/pods/e58cb838-f39b-492c-8b5a-0ed50c874e08/volumes" Feb 16 23:54:34 crc kubenswrapper[4865]: I0216 
23:54:34.430065 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdaa5ab9-bb35-44d4-bb09-2ef7097979b3" path="/var/lib/kubelet/pods/fdaa5ab9-bb35-44d4-bb09-2ef7097979b3/volumes" Feb 16 23:55:15 crc kubenswrapper[4865]: I0216 23:55:15.664839 4865 patch_prober.go:28] interesting pod/machine-config-daemon-7sl6f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 23:55:15 crc kubenswrapper[4865]: I0216 23:55:15.665355 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 23:55:45 crc kubenswrapper[4865]: I0216 23:55:45.664891 4865 patch_prober.go:28] interesting pod/machine-config-daemon-7sl6f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 23:55:45 crc kubenswrapper[4865]: I0216 23:55:45.665672 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 23:56:03 crc kubenswrapper[4865]: I0216 23:56:03.596681 4865 generic.go:334] "Generic (PLEG): container finished" podID="e15920f0-41af-4f98-8b7d-5e40f97c7708" containerID="f5f50a26ecbd95932c0598352e425b704b298ff36db335d12eb726711c346332" exitCode=0 Feb 16 23:56:03 crc kubenswrapper[4865]: I0216 
23:56:03.596816 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zg27t/must-gather-5fkkq" event={"ID":"e15920f0-41af-4f98-8b7d-5e40f97c7708","Type":"ContainerDied","Data":"f5f50a26ecbd95932c0598352e425b704b298ff36db335d12eb726711c346332"} Feb 16 23:56:03 crc kubenswrapper[4865]: I0216 23:56:03.597863 4865 scope.go:117] "RemoveContainer" containerID="f5f50a26ecbd95932c0598352e425b704b298ff36db335d12eb726711c346332" Feb 16 23:56:04 crc kubenswrapper[4865]: I0216 23:56:04.460441 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-zg27t_must-gather-5fkkq_e15920f0-41af-4f98-8b7d-5e40f97c7708/gather/0.log" Feb 16 23:56:14 crc kubenswrapper[4865]: I0216 23:56:14.910128 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-zg27t/must-gather-5fkkq"] Feb 16 23:56:14 crc kubenswrapper[4865]: I0216 23:56:14.911123 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-zg27t/must-gather-5fkkq" podUID="e15920f0-41af-4f98-8b7d-5e40f97c7708" containerName="copy" containerID="cri-o://a9fc70b0f17d633847a785d711c390cf411aaaefc413d45d6077bb4f892de00c" gracePeriod=2 Feb 16 23:56:14 crc kubenswrapper[4865]: I0216 23:56:14.944025 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-zg27t/must-gather-5fkkq"] Feb 16 23:56:15 crc kubenswrapper[4865]: I0216 23:56:15.342624 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-zg27t_must-gather-5fkkq_e15920f0-41af-4f98-8b7d-5e40f97c7708/copy/0.log" Feb 16 23:56:15 crc kubenswrapper[4865]: I0216 23:56:15.343629 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zg27t/must-gather-5fkkq" Feb 16 23:56:15 crc kubenswrapper[4865]: I0216 23:56:15.499571 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25slf\" (UniqueName: \"kubernetes.io/projected/e15920f0-41af-4f98-8b7d-5e40f97c7708-kube-api-access-25slf\") pod \"e15920f0-41af-4f98-8b7d-5e40f97c7708\" (UID: \"e15920f0-41af-4f98-8b7d-5e40f97c7708\") " Feb 16 23:56:15 crc kubenswrapper[4865]: I0216 23:56:15.499865 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e15920f0-41af-4f98-8b7d-5e40f97c7708-must-gather-output\") pod \"e15920f0-41af-4f98-8b7d-5e40f97c7708\" (UID: \"e15920f0-41af-4f98-8b7d-5e40f97c7708\") " Feb 16 23:56:15 crc kubenswrapper[4865]: I0216 23:56:15.505244 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e15920f0-41af-4f98-8b7d-5e40f97c7708-kube-api-access-25slf" (OuterVolumeSpecName: "kube-api-access-25slf") pod "e15920f0-41af-4f98-8b7d-5e40f97c7708" (UID: "e15920f0-41af-4f98-8b7d-5e40f97c7708"). InnerVolumeSpecName "kube-api-access-25slf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:56:15 crc kubenswrapper[4865]: I0216 23:56:15.602164 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25slf\" (UniqueName: \"kubernetes.io/projected/e15920f0-41af-4f98-8b7d-5e40f97c7708-kube-api-access-25slf\") on node \"crc\" DevicePath \"\"" Feb 16 23:56:15 crc kubenswrapper[4865]: I0216 23:56:15.645849 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e15920f0-41af-4f98-8b7d-5e40f97c7708-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "e15920f0-41af-4f98-8b7d-5e40f97c7708" (UID: "e15920f0-41af-4f98-8b7d-5e40f97c7708"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:56:15 crc kubenswrapper[4865]: I0216 23:56:15.664278 4865 patch_prober.go:28] interesting pod/machine-config-daemon-7sl6f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 23:56:15 crc kubenswrapper[4865]: I0216 23:56:15.664564 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 23:56:15 crc kubenswrapper[4865]: I0216 23:56:15.664733 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" Feb 16 23:56:15 crc kubenswrapper[4865]: I0216 23:56:15.665916 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"58aefa703623799d41feb8865996c3e1d396a388211338e1daea13013f302070"} pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 23:56:15 crc kubenswrapper[4865]: I0216 23:56:15.666099 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" containerName="machine-config-daemon" containerID="cri-o://58aefa703623799d41feb8865996c3e1d396a388211338e1daea13013f302070" gracePeriod=600 Feb 16 23:56:15 crc kubenswrapper[4865]: I0216 23:56:15.704501 4865 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: 
\"kubernetes.io/empty-dir/e15920f0-41af-4f98-8b7d-5e40f97c7708-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 16 23:56:15 crc kubenswrapper[4865]: I0216 23:56:15.738964 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-zg27t_must-gather-5fkkq_e15920f0-41af-4f98-8b7d-5e40f97c7708/copy/0.log" Feb 16 23:56:15 crc kubenswrapper[4865]: I0216 23:56:15.739619 4865 generic.go:334] "Generic (PLEG): container finished" podID="e15920f0-41af-4f98-8b7d-5e40f97c7708" containerID="a9fc70b0f17d633847a785d711c390cf411aaaefc413d45d6077bb4f892de00c" exitCode=143 Feb 16 23:56:15 crc kubenswrapper[4865]: I0216 23:56:15.739681 4865 scope.go:117] "RemoveContainer" containerID="a9fc70b0f17d633847a785d711c390cf411aaaefc413d45d6077bb4f892de00c" Feb 16 23:56:15 crc kubenswrapper[4865]: I0216 23:56:15.739826 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zg27t/must-gather-5fkkq" Feb 16 23:56:16 crc kubenswrapper[4865]: I0216 23:56:16.208036 4865 scope.go:117] "RemoveContainer" containerID="f5f50a26ecbd95932c0598352e425b704b298ff36db335d12eb726711c346332" Feb 16 23:56:16 crc kubenswrapper[4865]: I0216 23:56:16.344339 4865 scope.go:117] "RemoveContainer" containerID="a9fc70b0f17d633847a785d711c390cf411aaaefc413d45d6077bb4f892de00c" Feb 16 23:56:16 crc kubenswrapper[4865]: E0216 23:56:16.344724 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9fc70b0f17d633847a785d711c390cf411aaaefc413d45d6077bb4f892de00c\": container with ID starting with a9fc70b0f17d633847a785d711c390cf411aaaefc413d45d6077bb4f892de00c not found: ID does not exist" containerID="a9fc70b0f17d633847a785d711c390cf411aaaefc413d45d6077bb4f892de00c" Feb 16 23:56:16 crc kubenswrapper[4865]: I0216 23:56:16.344755 4865 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a9fc70b0f17d633847a785d711c390cf411aaaefc413d45d6077bb4f892de00c"} err="failed to get container status \"a9fc70b0f17d633847a785d711c390cf411aaaefc413d45d6077bb4f892de00c\": rpc error: code = NotFound desc = could not find container \"a9fc70b0f17d633847a785d711c390cf411aaaefc413d45d6077bb4f892de00c\": container with ID starting with a9fc70b0f17d633847a785d711c390cf411aaaefc413d45d6077bb4f892de00c not found: ID does not exist" Feb 16 23:56:16 crc kubenswrapper[4865]: I0216 23:56:16.344773 4865 scope.go:117] "RemoveContainer" containerID="f5f50a26ecbd95932c0598352e425b704b298ff36db335d12eb726711c346332" Feb 16 23:56:16 crc kubenswrapper[4865]: E0216 23:56:16.345073 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5f50a26ecbd95932c0598352e425b704b298ff36db335d12eb726711c346332\": container with ID starting with f5f50a26ecbd95932c0598352e425b704b298ff36db335d12eb726711c346332 not found: ID does not exist" containerID="f5f50a26ecbd95932c0598352e425b704b298ff36db335d12eb726711c346332" Feb 16 23:56:16 crc kubenswrapper[4865]: I0216 23:56:16.345097 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5f50a26ecbd95932c0598352e425b704b298ff36db335d12eb726711c346332"} err="failed to get container status \"f5f50a26ecbd95932c0598352e425b704b298ff36db335d12eb726711c346332\": rpc error: code = NotFound desc = could not find container \"f5f50a26ecbd95932c0598352e425b704b298ff36db335d12eb726711c346332\": container with ID starting with f5f50a26ecbd95932c0598352e425b704b298ff36db335d12eb726711c346332 not found: ID does not exist" Feb 16 23:56:16 crc kubenswrapper[4865]: I0216 23:56:16.427153 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e15920f0-41af-4f98-8b7d-5e40f97c7708" path="/var/lib/kubelet/pods/e15920f0-41af-4f98-8b7d-5e40f97c7708/volumes" Feb 16 23:56:16 crc kubenswrapper[4865]: I0216 
23:56:16.751888 4865 generic.go:334] "Generic (PLEG): container finished" podID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" containerID="58aefa703623799d41feb8865996c3e1d396a388211338e1daea13013f302070" exitCode=0 Feb 16 23:56:16 crc kubenswrapper[4865]: I0216 23:56:16.751988 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" event={"ID":"af5ee041-5763-4a28-9d12-7ba21bbb9dbc","Type":"ContainerDied","Data":"58aefa703623799d41feb8865996c3e1d396a388211338e1daea13013f302070"} Feb 16 23:56:16 crc kubenswrapper[4865]: I0216 23:56:16.752061 4865 scope.go:117] "RemoveContainer" containerID="3383d89c28198e5a6982e852d6fe4d402704ec0637203631cb7c4abb6bef7591" Feb 16 23:56:17 crc kubenswrapper[4865]: I0216 23:56:17.767525 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" event={"ID":"af5ee041-5763-4a28-9d12-7ba21bbb9dbc","Type":"ContainerStarted","Data":"efc18401507fd6fd01ab78a2b4837c63ed53141a5ed6afaafb1a6e94c1ae6147"} Feb 16 23:58:45 crc kubenswrapper[4865]: I0216 23:58:45.664666 4865 patch_prober.go:28] interesting pod/machine-config-daemon-7sl6f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 23:58:45 crc kubenswrapper[4865]: I0216 23:58:45.665307 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 23:59:06 crc kubenswrapper[4865]: I0216 23:59:06.136195 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-p5v57"] Feb 16 23:59:06 crc 
kubenswrapper[4865]: E0216 23:59:06.137690 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e15920f0-41af-4f98-8b7d-5e40f97c7708" containerName="copy" Feb 16 23:59:06 crc kubenswrapper[4865]: I0216 23:59:06.137719 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="e15920f0-41af-4f98-8b7d-5e40f97c7708" containerName="copy" Feb 16 23:59:06 crc kubenswrapper[4865]: E0216 23:59:06.137753 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdaa5ab9-bb35-44d4-bb09-2ef7097979b3" containerName="extract-utilities" Feb 16 23:59:06 crc kubenswrapper[4865]: I0216 23:59:06.137769 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdaa5ab9-bb35-44d4-bb09-2ef7097979b3" containerName="extract-utilities" Feb 16 23:59:06 crc kubenswrapper[4865]: E0216 23:59:06.137974 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e15920f0-41af-4f98-8b7d-5e40f97c7708" containerName="gather" Feb 16 23:59:06 crc kubenswrapper[4865]: I0216 23:59:06.137991 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="e15920f0-41af-4f98-8b7d-5e40f97c7708" containerName="gather" Feb 16 23:59:06 crc kubenswrapper[4865]: E0216 23:59:06.138037 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdaa5ab9-bb35-44d4-bb09-2ef7097979b3" containerName="registry-server" Feb 16 23:59:06 crc kubenswrapper[4865]: I0216 23:59:06.138053 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdaa5ab9-bb35-44d4-bb09-2ef7097979b3" containerName="registry-server" Feb 16 23:59:06 crc kubenswrapper[4865]: E0216 23:59:06.138093 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e58cb838-f39b-492c-8b5a-0ed50c874e08" containerName="extract-utilities" Feb 16 23:59:06 crc kubenswrapper[4865]: I0216 23:59:06.138109 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="e58cb838-f39b-492c-8b5a-0ed50c874e08" containerName="extract-utilities" Feb 16 23:59:06 crc kubenswrapper[4865]: E0216 23:59:06.138150 
4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e58cb838-f39b-492c-8b5a-0ed50c874e08" containerName="registry-server" Feb 16 23:59:06 crc kubenswrapper[4865]: I0216 23:59:06.138166 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="e58cb838-f39b-492c-8b5a-0ed50c874e08" containerName="registry-server" Feb 16 23:59:06 crc kubenswrapper[4865]: E0216 23:59:06.138198 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e58cb838-f39b-492c-8b5a-0ed50c874e08" containerName="extract-content" Feb 16 23:59:06 crc kubenswrapper[4865]: I0216 23:59:06.138214 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="e58cb838-f39b-492c-8b5a-0ed50c874e08" containerName="extract-content" Feb 16 23:59:06 crc kubenswrapper[4865]: E0216 23:59:06.138254 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdaa5ab9-bb35-44d4-bb09-2ef7097979b3" containerName="extract-content" Feb 16 23:59:06 crc kubenswrapper[4865]: I0216 23:59:06.138271 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdaa5ab9-bb35-44d4-bb09-2ef7097979b3" containerName="extract-content" Feb 16 23:59:06 crc kubenswrapper[4865]: I0216 23:59:06.138692 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdaa5ab9-bb35-44d4-bb09-2ef7097979b3" containerName="registry-server" Feb 16 23:59:06 crc kubenswrapper[4865]: I0216 23:59:06.138730 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="e15920f0-41af-4f98-8b7d-5e40f97c7708" containerName="gather" Feb 16 23:59:06 crc kubenswrapper[4865]: I0216 23:59:06.138768 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="e15920f0-41af-4f98-8b7d-5e40f97c7708" containerName="copy" Feb 16 23:59:06 crc kubenswrapper[4865]: I0216 23:59:06.138802 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="e58cb838-f39b-492c-8b5a-0ed50c874e08" containerName="registry-server" Feb 16 23:59:06 crc kubenswrapper[4865]: I0216 23:59:06.142438 4865 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p5v57" Feb 16 23:59:06 crc kubenswrapper[4865]: I0216 23:59:06.150695 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p5v57"] Feb 16 23:59:06 crc kubenswrapper[4865]: I0216 23:59:06.202813 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a6f2f33-aa44-4474-b323-7a123fb483f8-utilities\") pod \"community-operators-p5v57\" (UID: \"2a6f2f33-aa44-4474-b323-7a123fb483f8\") " pod="openshift-marketplace/community-operators-p5v57" Feb 16 23:59:06 crc kubenswrapper[4865]: I0216 23:59:06.203094 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmx29\" (UniqueName: \"kubernetes.io/projected/2a6f2f33-aa44-4474-b323-7a123fb483f8-kube-api-access-tmx29\") pod \"community-operators-p5v57\" (UID: \"2a6f2f33-aa44-4474-b323-7a123fb483f8\") " pod="openshift-marketplace/community-operators-p5v57" Feb 16 23:59:06 crc kubenswrapper[4865]: I0216 23:59:06.203167 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a6f2f33-aa44-4474-b323-7a123fb483f8-catalog-content\") pod \"community-operators-p5v57\" (UID: \"2a6f2f33-aa44-4474-b323-7a123fb483f8\") " pod="openshift-marketplace/community-operators-p5v57" Feb 16 23:59:06 crc kubenswrapper[4865]: I0216 23:59:06.305713 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmx29\" (UniqueName: \"kubernetes.io/projected/2a6f2f33-aa44-4474-b323-7a123fb483f8-kube-api-access-tmx29\") pod \"community-operators-p5v57\" (UID: \"2a6f2f33-aa44-4474-b323-7a123fb483f8\") " pod="openshift-marketplace/community-operators-p5v57" Feb 16 23:59:06 crc kubenswrapper[4865]: I0216 
23:59:06.306211 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a6f2f33-aa44-4474-b323-7a123fb483f8-catalog-content\") pod \"community-operators-p5v57\" (UID: \"2a6f2f33-aa44-4474-b323-7a123fb483f8\") " pod="openshift-marketplace/community-operators-p5v57" Feb 16 23:59:06 crc kubenswrapper[4865]: I0216 23:59:06.306673 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a6f2f33-aa44-4474-b323-7a123fb483f8-utilities\") pod \"community-operators-p5v57\" (UID: \"2a6f2f33-aa44-4474-b323-7a123fb483f8\") " pod="openshift-marketplace/community-operators-p5v57" Feb 16 23:59:06 crc kubenswrapper[4865]: I0216 23:59:06.306841 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a6f2f33-aa44-4474-b323-7a123fb483f8-catalog-content\") pod \"community-operators-p5v57\" (UID: \"2a6f2f33-aa44-4474-b323-7a123fb483f8\") " pod="openshift-marketplace/community-operators-p5v57" Feb 16 23:59:06 crc kubenswrapper[4865]: I0216 23:59:06.307351 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a6f2f33-aa44-4474-b323-7a123fb483f8-utilities\") pod \"community-operators-p5v57\" (UID: \"2a6f2f33-aa44-4474-b323-7a123fb483f8\") " pod="openshift-marketplace/community-operators-p5v57" Feb 16 23:59:06 crc kubenswrapper[4865]: I0216 23:59:06.342684 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmx29\" (UniqueName: \"kubernetes.io/projected/2a6f2f33-aa44-4474-b323-7a123fb483f8-kube-api-access-tmx29\") pod \"community-operators-p5v57\" (UID: \"2a6f2f33-aa44-4474-b323-7a123fb483f8\") " pod="openshift-marketplace/community-operators-p5v57" Feb 16 23:59:06 crc kubenswrapper[4865]: I0216 23:59:06.476977 4865 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p5v57" Feb 16 23:59:07 crc kubenswrapper[4865]: W0216 23:59:07.024201 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a6f2f33_aa44_4474_b323_7a123fb483f8.slice/crio-b06140a481172d411c82f54c5ddc9c246f1ac92a4bd905f3953c58f65fef631b WatchSource:0}: Error finding container b06140a481172d411c82f54c5ddc9c246f1ac92a4bd905f3953c58f65fef631b: Status 404 returned error can't find the container with id b06140a481172d411c82f54c5ddc9c246f1ac92a4bd905f3953c58f65fef631b Feb 16 23:59:07 crc kubenswrapper[4865]: I0216 23:59:07.030586 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p5v57"] Feb 16 23:59:07 crc kubenswrapper[4865]: I0216 23:59:07.612936 4865 generic.go:334] "Generic (PLEG): container finished" podID="2a6f2f33-aa44-4474-b323-7a123fb483f8" containerID="8318c2d618242ac49585c5b522d328cdc51f9714818ddfc1a977b8ebb7d54a29" exitCode=0 Feb 16 23:59:07 crc kubenswrapper[4865]: I0216 23:59:07.615153 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p5v57" event={"ID":"2a6f2f33-aa44-4474-b323-7a123fb483f8","Type":"ContainerDied","Data":"8318c2d618242ac49585c5b522d328cdc51f9714818ddfc1a977b8ebb7d54a29"} Feb 16 23:59:07 crc kubenswrapper[4865]: I0216 23:59:07.615680 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p5v57" event={"ID":"2a6f2f33-aa44-4474-b323-7a123fb483f8","Type":"ContainerStarted","Data":"b06140a481172d411c82f54c5ddc9c246f1ac92a4bd905f3953c58f65fef631b"} Feb 16 23:59:08 crc kubenswrapper[4865]: I0216 23:59:08.627766 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p5v57" 
event={"ID":"2a6f2f33-aa44-4474-b323-7a123fb483f8","Type":"ContainerStarted","Data":"21ba917ca30d0093b683fb80baafc3a918300658369dc1893cfe28ebd7a2e837"} Feb 16 23:59:09 crc kubenswrapper[4865]: I0216 23:59:09.639056 4865 generic.go:334] "Generic (PLEG): container finished" podID="2a6f2f33-aa44-4474-b323-7a123fb483f8" containerID="21ba917ca30d0093b683fb80baafc3a918300658369dc1893cfe28ebd7a2e837" exitCode=0 Feb 16 23:59:09 crc kubenswrapper[4865]: I0216 23:59:09.639474 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p5v57" event={"ID":"2a6f2f33-aa44-4474-b323-7a123fb483f8","Type":"ContainerDied","Data":"21ba917ca30d0093b683fb80baafc3a918300658369dc1893cfe28ebd7a2e837"} Feb 16 23:59:11 crc kubenswrapper[4865]: I0216 23:59:11.666239 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p5v57" event={"ID":"2a6f2f33-aa44-4474-b323-7a123fb483f8","Type":"ContainerStarted","Data":"c8efedc277319df19b43af1bae2b0892001b71b813ce2c0c9494ee47a7192028"} Feb 16 23:59:11 crc kubenswrapper[4865]: I0216 23:59:11.693271 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-p5v57" podStartSLOduration=3.269368733 podStartE2EDuration="5.693249324s" podCreationTimestamp="2026-02-16 23:59:06 +0000 UTC" firstStartedPulling="2026-02-16 23:59:07.620800263 +0000 UTC m=+4387.944507264" lastFinishedPulling="2026-02-16 23:59:10.044680874 +0000 UTC m=+4390.368387855" observedRunningTime="2026-02-16 23:59:11.684053274 +0000 UTC m=+4392.007760235" watchObservedRunningTime="2026-02-16 23:59:11.693249324 +0000 UTC m=+4392.016956295" Feb 16 23:59:15 crc kubenswrapper[4865]: I0216 23:59:15.664312 4865 patch_prober.go:28] interesting pod/machine-config-daemon-7sl6f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" start-of-body= Feb 16 23:59:15 crc kubenswrapper[4865]: I0216 23:59:15.664941 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 23:59:16 crc kubenswrapper[4865]: I0216 23:59:16.495517 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-p5v57" Feb 16 23:59:16 crc kubenswrapper[4865]: I0216 23:59:16.495612 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-p5v57" Feb 16 23:59:16 crc kubenswrapper[4865]: I0216 23:59:16.546799 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-p5v57" Feb 16 23:59:16 crc kubenswrapper[4865]: I0216 23:59:16.762343 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-p5v57" Feb 16 23:59:16 crc kubenswrapper[4865]: I0216 23:59:16.819473 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p5v57"] Feb 16 23:59:18 crc kubenswrapper[4865]: I0216 23:59:18.736250 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-p5v57" podUID="2a6f2f33-aa44-4474-b323-7a123fb483f8" containerName="registry-server" containerID="cri-o://c8efedc277319df19b43af1bae2b0892001b71b813ce2c0c9494ee47a7192028" gracePeriod=2 Feb 16 23:59:19 crc kubenswrapper[4865]: I0216 23:59:19.218685 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-p5v57" Feb 16 23:59:19 crc kubenswrapper[4865]: I0216 23:59:19.394842 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmx29\" (UniqueName: \"kubernetes.io/projected/2a6f2f33-aa44-4474-b323-7a123fb483f8-kube-api-access-tmx29\") pod \"2a6f2f33-aa44-4474-b323-7a123fb483f8\" (UID: \"2a6f2f33-aa44-4474-b323-7a123fb483f8\") " Feb 16 23:59:19 crc kubenswrapper[4865]: I0216 23:59:19.394962 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a6f2f33-aa44-4474-b323-7a123fb483f8-catalog-content\") pod \"2a6f2f33-aa44-4474-b323-7a123fb483f8\" (UID: \"2a6f2f33-aa44-4474-b323-7a123fb483f8\") " Feb 16 23:59:19 crc kubenswrapper[4865]: I0216 23:59:19.395069 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a6f2f33-aa44-4474-b323-7a123fb483f8-utilities\") pod \"2a6f2f33-aa44-4474-b323-7a123fb483f8\" (UID: \"2a6f2f33-aa44-4474-b323-7a123fb483f8\") " Feb 16 23:59:19 crc kubenswrapper[4865]: I0216 23:59:19.396644 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a6f2f33-aa44-4474-b323-7a123fb483f8-utilities" (OuterVolumeSpecName: "utilities") pod "2a6f2f33-aa44-4474-b323-7a123fb483f8" (UID: "2a6f2f33-aa44-4474-b323-7a123fb483f8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:59:19 crc kubenswrapper[4865]: I0216 23:59:19.429584 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a6f2f33-aa44-4474-b323-7a123fb483f8-kube-api-access-tmx29" (OuterVolumeSpecName: "kube-api-access-tmx29") pod "2a6f2f33-aa44-4474-b323-7a123fb483f8" (UID: "2a6f2f33-aa44-4474-b323-7a123fb483f8"). InnerVolumeSpecName "kube-api-access-tmx29". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 23:59:19 crc kubenswrapper[4865]: I0216 23:59:19.462391 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a6f2f33-aa44-4474-b323-7a123fb483f8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2a6f2f33-aa44-4474-b323-7a123fb483f8" (UID: "2a6f2f33-aa44-4474-b323-7a123fb483f8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 23:59:19 crc kubenswrapper[4865]: I0216 23:59:19.497275 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a6f2f33-aa44-4474-b323-7a123fb483f8-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 23:59:19 crc kubenswrapper[4865]: I0216 23:59:19.497329 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a6f2f33-aa44-4474-b323-7a123fb483f8-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 23:59:19 crc kubenswrapper[4865]: I0216 23:59:19.497341 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmx29\" (UniqueName: \"kubernetes.io/projected/2a6f2f33-aa44-4474-b323-7a123fb483f8-kube-api-access-tmx29\") on node \"crc\" DevicePath \"\"" Feb 16 23:59:19 crc kubenswrapper[4865]: I0216 23:59:19.751384 4865 generic.go:334] "Generic (PLEG): container finished" podID="2a6f2f33-aa44-4474-b323-7a123fb483f8" containerID="c8efedc277319df19b43af1bae2b0892001b71b813ce2c0c9494ee47a7192028" exitCode=0 Feb 16 23:59:19 crc kubenswrapper[4865]: I0216 23:59:19.751445 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p5v57" event={"ID":"2a6f2f33-aa44-4474-b323-7a123fb483f8","Type":"ContainerDied","Data":"c8efedc277319df19b43af1bae2b0892001b71b813ce2c0c9494ee47a7192028"} Feb 16 23:59:19 crc kubenswrapper[4865]: I0216 23:59:19.751523 4865 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-p5v57" event={"ID":"2a6f2f33-aa44-4474-b323-7a123fb483f8","Type":"ContainerDied","Data":"b06140a481172d411c82f54c5ddc9c246f1ac92a4bd905f3953c58f65fef631b"} Feb 16 23:59:19 crc kubenswrapper[4865]: I0216 23:59:19.751554 4865 scope.go:117] "RemoveContainer" containerID="c8efedc277319df19b43af1bae2b0892001b71b813ce2c0c9494ee47a7192028" Feb 16 23:59:19 crc kubenswrapper[4865]: I0216 23:59:19.751461 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p5v57" Feb 16 23:59:19 crc kubenswrapper[4865]: I0216 23:59:19.793339 4865 scope.go:117] "RemoveContainer" containerID="21ba917ca30d0093b683fb80baafc3a918300658369dc1893cfe28ebd7a2e837" Feb 16 23:59:19 crc kubenswrapper[4865]: I0216 23:59:19.821955 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p5v57"] Feb 16 23:59:19 crc kubenswrapper[4865]: I0216 23:59:19.845407 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-p5v57"] Feb 16 23:59:19 crc kubenswrapper[4865]: I0216 23:59:19.846111 4865 scope.go:117] "RemoveContainer" containerID="8318c2d618242ac49585c5b522d328cdc51f9714818ddfc1a977b8ebb7d54a29" Feb 16 23:59:19 crc kubenswrapper[4865]: I0216 23:59:19.903659 4865 scope.go:117] "RemoveContainer" containerID="c8efedc277319df19b43af1bae2b0892001b71b813ce2c0c9494ee47a7192028" Feb 16 23:59:19 crc kubenswrapper[4865]: E0216 23:59:19.904236 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8efedc277319df19b43af1bae2b0892001b71b813ce2c0c9494ee47a7192028\": container with ID starting with c8efedc277319df19b43af1bae2b0892001b71b813ce2c0c9494ee47a7192028 not found: ID does not exist" containerID="c8efedc277319df19b43af1bae2b0892001b71b813ce2c0c9494ee47a7192028" Feb 16 23:59:19 crc kubenswrapper[4865]: I0216 
23:59:19.904312 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8efedc277319df19b43af1bae2b0892001b71b813ce2c0c9494ee47a7192028"} err="failed to get container status \"c8efedc277319df19b43af1bae2b0892001b71b813ce2c0c9494ee47a7192028\": rpc error: code = NotFound desc = could not find container \"c8efedc277319df19b43af1bae2b0892001b71b813ce2c0c9494ee47a7192028\": container with ID starting with c8efedc277319df19b43af1bae2b0892001b71b813ce2c0c9494ee47a7192028 not found: ID does not exist" Feb 16 23:59:19 crc kubenswrapper[4865]: I0216 23:59:19.904363 4865 scope.go:117] "RemoveContainer" containerID="21ba917ca30d0093b683fb80baafc3a918300658369dc1893cfe28ebd7a2e837" Feb 16 23:59:19 crc kubenswrapper[4865]: E0216 23:59:19.906011 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21ba917ca30d0093b683fb80baafc3a918300658369dc1893cfe28ebd7a2e837\": container with ID starting with 21ba917ca30d0093b683fb80baafc3a918300658369dc1893cfe28ebd7a2e837 not found: ID does not exist" containerID="21ba917ca30d0093b683fb80baafc3a918300658369dc1893cfe28ebd7a2e837" Feb 16 23:59:19 crc kubenswrapper[4865]: I0216 23:59:19.906045 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21ba917ca30d0093b683fb80baafc3a918300658369dc1893cfe28ebd7a2e837"} err="failed to get container status \"21ba917ca30d0093b683fb80baafc3a918300658369dc1893cfe28ebd7a2e837\": rpc error: code = NotFound desc = could not find container \"21ba917ca30d0093b683fb80baafc3a918300658369dc1893cfe28ebd7a2e837\": container with ID starting with 21ba917ca30d0093b683fb80baafc3a918300658369dc1893cfe28ebd7a2e837 not found: ID does not exist" Feb 16 23:59:19 crc kubenswrapper[4865]: I0216 23:59:19.906069 4865 scope.go:117] "RemoveContainer" containerID="8318c2d618242ac49585c5b522d328cdc51f9714818ddfc1a977b8ebb7d54a29" Feb 16 23:59:19 crc 
kubenswrapper[4865]: E0216 23:59:19.906430 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8318c2d618242ac49585c5b522d328cdc51f9714818ddfc1a977b8ebb7d54a29\": container with ID starting with 8318c2d618242ac49585c5b522d328cdc51f9714818ddfc1a977b8ebb7d54a29 not found: ID does not exist" containerID="8318c2d618242ac49585c5b522d328cdc51f9714818ddfc1a977b8ebb7d54a29" Feb 16 23:59:19 crc kubenswrapper[4865]: I0216 23:59:19.906457 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8318c2d618242ac49585c5b522d328cdc51f9714818ddfc1a977b8ebb7d54a29"} err="failed to get container status \"8318c2d618242ac49585c5b522d328cdc51f9714818ddfc1a977b8ebb7d54a29\": rpc error: code = NotFound desc = could not find container \"8318c2d618242ac49585c5b522d328cdc51f9714818ddfc1a977b8ebb7d54a29\": container with ID starting with 8318c2d618242ac49585c5b522d328cdc51f9714818ddfc1a977b8ebb7d54a29 not found: ID does not exist" Feb 16 23:59:20 crc kubenswrapper[4865]: I0216 23:59:20.445419 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a6f2f33-aa44-4474-b323-7a123fb483f8" path="/var/lib/kubelet/pods/2a6f2f33-aa44-4474-b323-7a123fb483f8/volumes" Feb 16 23:59:45 crc kubenswrapper[4865]: I0216 23:59:45.664251 4865 patch_prober.go:28] interesting pod/machine-config-daemon-7sl6f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 23:59:45 crc kubenswrapper[4865]: I0216 23:59:45.664855 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Feb 16 23:59:45 crc kubenswrapper[4865]: I0216 23:59:45.664907 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" Feb 16 23:59:45 crc kubenswrapper[4865]: I0216 23:59:45.666266 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"efc18401507fd6fd01ab78a2b4837c63ed53141a5ed6afaafb1a6e94c1ae6147"} pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 23:59:45 crc kubenswrapper[4865]: I0216 23:59:45.666390 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" containerName="machine-config-daemon" containerID="cri-o://efc18401507fd6fd01ab78a2b4837c63ed53141a5ed6afaafb1a6e94c1ae6147" gracePeriod=600 Feb 16 23:59:45 crc kubenswrapper[4865]: E0216 23:59:45.792595 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:59:46 crc kubenswrapper[4865]: I0216 23:59:46.074913 4865 generic.go:334] "Generic (PLEG): container finished" podID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" containerID="efc18401507fd6fd01ab78a2b4837c63ed53141a5ed6afaafb1a6e94c1ae6147" exitCode=0 Feb 16 23:59:46 crc kubenswrapper[4865]: I0216 23:59:46.074953 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" event={"ID":"af5ee041-5763-4a28-9d12-7ba21bbb9dbc","Type":"ContainerDied","Data":"efc18401507fd6fd01ab78a2b4837c63ed53141a5ed6afaafb1a6e94c1ae6147"} Feb 16 23:59:46 crc kubenswrapper[4865]: I0216 23:59:46.074984 4865 scope.go:117] "RemoveContainer" containerID="58aefa703623799d41feb8865996c3e1d396a388211338e1daea13013f302070" Feb 16 23:59:46 crc kubenswrapper[4865]: I0216 23:59:46.075528 4865 scope.go:117] "RemoveContainer" containerID="efc18401507fd6fd01ab78a2b4837c63ed53141a5ed6afaafb1a6e94c1ae6147" Feb 16 23:59:46 crc kubenswrapper[4865]: E0216 23:59:46.075799 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 16 23:59:46 crc kubenswrapper[4865]: I0216 23:59:46.956563 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kgc2g"] Feb 16 23:59:46 crc kubenswrapper[4865]: E0216 23:59:46.957745 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a6f2f33-aa44-4474-b323-7a123fb483f8" containerName="extract-content" Feb 16 23:59:46 crc kubenswrapper[4865]: I0216 23:59:46.957779 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a6f2f33-aa44-4474-b323-7a123fb483f8" containerName="extract-content" Feb 16 23:59:46 crc kubenswrapper[4865]: E0216 23:59:46.957827 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a6f2f33-aa44-4474-b323-7a123fb483f8" containerName="extract-utilities" Feb 16 23:59:46 crc kubenswrapper[4865]: I0216 23:59:46.957845 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a6f2f33-aa44-4474-b323-7a123fb483f8" 
containerName="extract-utilities" Feb 16 23:59:46 crc kubenswrapper[4865]: E0216 23:59:46.957929 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a6f2f33-aa44-4474-b323-7a123fb483f8" containerName="registry-server" Feb 16 23:59:46 crc kubenswrapper[4865]: I0216 23:59:46.957952 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a6f2f33-aa44-4474-b323-7a123fb483f8" containerName="registry-server" Feb 16 23:59:46 crc kubenswrapper[4865]: I0216 23:59:46.958621 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a6f2f33-aa44-4474-b323-7a123fb483f8" containerName="registry-server" Feb 16 23:59:46 crc kubenswrapper[4865]: I0216 23:59:46.962214 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kgc2g" Feb 16 23:59:46 crc kubenswrapper[4865]: I0216 23:59:46.996075 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kgc2g"] Feb 16 23:59:47 crc kubenswrapper[4865]: I0216 23:59:47.105091 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhckl\" (UniqueName: \"kubernetes.io/projected/930cf9b8-972c-4675-9840-ada5ccf6db38-kube-api-access-rhckl\") pod \"redhat-operators-kgc2g\" (UID: \"930cf9b8-972c-4675-9840-ada5ccf6db38\") " pod="openshift-marketplace/redhat-operators-kgc2g" Feb 16 23:59:47 crc kubenswrapper[4865]: I0216 23:59:47.105400 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/930cf9b8-972c-4675-9840-ada5ccf6db38-utilities\") pod \"redhat-operators-kgc2g\" (UID: \"930cf9b8-972c-4675-9840-ada5ccf6db38\") " pod="openshift-marketplace/redhat-operators-kgc2g" Feb 16 23:59:47 crc kubenswrapper[4865]: I0216 23:59:47.105533 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/930cf9b8-972c-4675-9840-ada5ccf6db38-catalog-content\") pod \"redhat-operators-kgc2g\" (UID: \"930cf9b8-972c-4675-9840-ada5ccf6db38\") " pod="openshift-marketplace/redhat-operators-kgc2g" Feb 16 23:59:47 crc kubenswrapper[4865]: I0216 23:59:47.207455 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/930cf9b8-972c-4675-9840-ada5ccf6db38-utilities\") pod \"redhat-operators-kgc2g\" (UID: \"930cf9b8-972c-4675-9840-ada5ccf6db38\") " pod="openshift-marketplace/redhat-operators-kgc2g" Feb 16 23:59:47 crc kubenswrapper[4865]: I0216 23:59:47.207615 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/930cf9b8-972c-4675-9840-ada5ccf6db38-catalog-content\") pod \"redhat-operators-kgc2g\" (UID: \"930cf9b8-972c-4675-9840-ada5ccf6db38\") " pod="openshift-marketplace/redhat-operators-kgc2g" Feb 16 23:59:47 crc kubenswrapper[4865]: I0216 23:59:47.207834 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhckl\" (UniqueName: \"kubernetes.io/projected/930cf9b8-972c-4675-9840-ada5ccf6db38-kube-api-access-rhckl\") pod \"redhat-operators-kgc2g\" (UID: \"930cf9b8-972c-4675-9840-ada5ccf6db38\") " pod="openshift-marketplace/redhat-operators-kgc2g" Feb 16 23:59:47 crc kubenswrapper[4865]: I0216 23:59:47.208019 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/930cf9b8-972c-4675-9840-ada5ccf6db38-utilities\") pod \"redhat-operators-kgc2g\" (UID: \"930cf9b8-972c-4675-9840-ada5ccf6db38\") " pod="openshift-marketplace/redhat-operators-kgc2g" Feb 16 23:59:47 crc kubenswrapper[4865]: I0216 23:59:47.208198 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/930cf9b8-972c-4675-9840-ada5ccf6db38-catalog-content\") pod \"redhat-operators-kgc2g\" (UID: \"930cf9b8-972c-4675-9840-ada5ccf6db38\") " pod="openshift-marketplace/redhat-operators-kgc2g" Feb 16 23:59:47 crc kubenswrapper[4865]: I0216 23:59:47.228521 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhckl\" (UniqueName: \"kubernetes.io/projected/930cf9b8-972c-4675-9840-ada5ccf6db38-kube-api-access-rhckl\") pod \"redhat-operators-kgc2g\" (UID: \"930cf9b8-972c-4675-9840-ada5ccf6db38\") " pod="openshift-marketplace/redhat-operators-kgc2g" Feb 16 23:59:47 crc kubenswrapper[4865]: I0216 23:59:47.301299 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kgc2g" Feb 16 23:59:47 crc kubenswrapper[4865]: I0216 23:59:47.808470 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kgc2g"] Feb 16 23:59:48 crc kubenswrapper[4865]: I0216 23:59:48.094362 4865 generic.go:334] "Generic (PLEG): container finished" podID="930cf9b8-972c-4675-9840-ada5ccf6db38" containerID="3f9d1c49a30e9e145992892d3c1bd6f388da1aedc96fac5f4d5307c7bb2aa302" exitCode=0 Feb 16 23:59:48 crc kubenswrapper[4865]: I0216 23:59:48.094412 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kgc2g" event={"ID":"930cf9b8-972c-4675-9840-ada5ccf6db38","Type":"ContainerDied","Data":"3f9d1c49a30e9e145992892d3c1bd6f388da1aedc96fac5f4d5307c7bb2aa302"} Feb 16 23:59:48 crc kubenswrapper[4865]: I0216 23:59:48.094467 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kgc2g" event={"ID":"930cf9b8-972c-4675-9840-ada5ccf6db38","Type":"ContainerStarted","Data":"b94572a768f67438bf85e26b99161fa082ca11e354e83752facb3bfb68a732e9"} Feb 16 23:59:48 crc kubenswrapper[4865]: I0216 23:59:48.096888 4865 provider.go:102] Refreshing cache for provider: 
*credentialprovider.defaultDockerConfigProvider Feb 16 23:59:50 crc kubenswrapper[4865]: I0216 23:59:50.118870 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kgc2g" event={"ID":"930cf9b8-972c-4675-9840-ada5ccf6db38","Type":"ContainerStarted","Data":"0c5908bd37b7fdd0bfd03fbdc0ff91c9a9a1611876ba69f26ab8e88df72d0554"} Feb 16 23:59:51 crc kubenswrapper[4865]: I0216 23:59:51.129987 4865 generic.go:334] "Generic (PLEG): container finished" podID="930cf9b8-972c-4675-9840-ada5ccf6db38" containerID="0c5908bd37b7fdd0bfd03fbdc0ff91c9a9a1611876ba69f26ab8e88df72d0554" exitCode=0 Feb 16 23:59:51 crc kubenswrapper[4865]: I0216 23:59:51.130080 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kgc2g" event={"ID":"930cf9b8-972c-4675-9840-ada5ccf6db38","Type":"ContainerDied","Data":"0c5908bd37b7fdd0bfd03fbdc0ff91c9a9a1611876ba69f26ab8e88df72d0554"} Feb 16 23:59:52 crc kubenswrapper[4865]: I0216 23:59:52.158629 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kgc2g" event={"ID":"930cf9b8-972c-4675-9840-ada5ccf6db38","Type":"ContainerStarted","Data":"b450ad72f7ac56e90e0011f208ac7d7919bfc3e9f8e567c5d4c64c767961ff7f"} Feb 16 23:59:52 crc kubenswrapper[4865]: I0216 23:59:52.180804 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kgc2g" podStartSLOduration=2.768325496 podStartE2EDuration="6.180789758s" podCreationTimestamp="2026-02-16 23:59:46 +0000 UTC" firstStartedPulling="2026-02-16 23:59:48.096581656 +0000 UTC m=+4428.420288617" lastFinishedPulling="2026-02-16 23:59:51.509045878 +0000 UTC m=+4431.832752879" observedRunningTime="2026-02-16 23:59:52.17731874 +0000 UTC m=+4432.501025711" watchObservedRunningTime="2026-02-16 23:59:52.180789758 +0000 UTC m=+4432.504496719" Feb 16 23:59:57 crc kubenswrapper[4865]: I0216 23:59:57.301927 4865 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kgc2g" Feb 16 23:59:57 crc kubenswrapper[4865]: I0216 23:59:57.304141 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kgc2g" Feb 16 23:59:58 crc kubenswrapper[4865]: I0216 23:59:58.391518 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kgc2g" podUID="930cf9b8-972c-4675-9840-ada5ccf6db38" containerName="registry-server" probeResult="failure" output=< Feb 16 23:59:58 crc kubenswrapper[4865]: timeout: failed to connect service ":50051" within 1s Feb 16 23:59:58 crc kubenswrapper[4865]: > Feb 17 00:00:00 crc kubenswrapper[4865]: I0217 00:00:00.226704 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-purge-29521440-2hd2z"] Feb 17 00:00:00 crc kubenswrapper[4865]: I0217 00:00:00.229013 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-purge-29521440-2hd2z" Feb 17 00:00:00 crc kubenswrapper[4865]: I0217 00:00:00.235990 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-pruner-29521440-2cwnh"] Feb 17 00:00:00 crc kubenswrapper[4865]: I0217 00:00:00.237547 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-pruner-29521440-2cwnh" Feb 17 00:00:00 crc kubenswrapper[4865]: I0217 00:00:00.240004 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 17 00:00:00 crc kubenswrapper[4865]: I0217 00:00:00.240619 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"pruner-dockercfg-p7bcw" Feb 17 00:00:00 crc kubenswrapper[4865]: I0217 00:00:00.240939 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"serviceca" Feb 17 00:00:00 crc kubenswrapper[4865]: I0217 00:00:00.247530 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-purge-29521440-g4qcg"] Feb 17 00:00:00 crc kubenswrapper[4865]: I0217 00:00:00.248757 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-purge-29521440-g4qcg" Feb 17 00:00:00 crc kubenswrapper[4865]: I0217 00:00:00.258645 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 17 00:00:00 crc kubenswrapper[4865]: I0217 00:00:00.270048 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521440-6lkmj"] Feb 17 00:00:00 crc kubenswrapper[4865]: I0217 00:00:00.271375 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521440-6lkmj" Feb 17 00:00:00 crc kubenswrapper[4865]: I0217 00:00:00.273177 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 00:00:00 crc kubenswrapper[4865]: I0217 00:00:00.277887 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 17 00:00:00 crc kubenswrapper[4865]: I0217 00:00:00.283453 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29521440-2cwnh"] Feb 17 00:00:00 crc kubenswrapper[4865]: I0217 00:00:00.291032 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521440-6lkmj"] Feb 17 00:00:00 crc kubenswrapper[4865]: I0217 00:00:00.318553 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-purge-29521440-2hd2z"] Feb 17 00:00:00 crc kubenswrapper[4865]: I0217 00:00:00.344827 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-purge-29521440-g4qcg"] Feb 17 00:00:00 crc kubenswrapper[4865]: I0217 00:00:00.410245 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvvm2\" (UniqueName: \"kubernetes.io/projected/2b7320ab-151f-488e-acee-e56bc6037a80-kube-api-access-nvvm2\") pod \"collect-profiles-29521440-6lkmj\" (UID: \"2b7320ab-151f-488e-acee-e56bc6037a80\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521440-6lkmj" Feb 17 00:00:00 crc kubenswrapper[4865]: I0217 00:00:00.410618 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95d05d65-a7a7-479b-8254-53a3d62adba5-scripts\") pod \"nova-cell0-db-purge-29521440-g4qcg\" (UID: 
\"95d05d65-a7a7-479b-8254-53a3d62adba5\") " pod="openstack/nova-cell0-db-purge-29521440-g4qcg" Feb 17 00:00:00 crc kubenswrapper[4865]: I0217 00:00:00.410662 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2b7320ab-151f-488e-acee-e56bc6037a80-config-volume\") pod \"collect-profiles-29521440-6lkmj\" (UID: \"2b7320ab-151f-488e-acee-e56bc6037a80\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521440-6lkmj" Feb 17 00:00:00 crc kubenswrapper[4865]: I0217 00:00:00.410696 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95d05d65-a7a7-479b-8254-53a3d62adba5-combined-ca-bundle\") pod \"nova-cell0-db-purge-29521440-g4qcg\" (UID: \"95d05d65-a7a7-479b-8254-53a3d62adba5\") " pod="openstack/nova-cell0-db-purge-29521440-g4qcg" Feb 17 00:00:00 crc kubenswrapper[4865]: I0217 00:00:00.410726 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2b7320ab-151f-488e-acee-e56bc6037a80-secret-volume\") pod \"collect-profiles-29521440-6lkmj\" (UID: \"2b7320ab-151f-488e-acee-e56bc6037a80\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521440-6lkmj" Feb 17 00:00:00 crc kubenswrapper[4865]: I0217 00:00:00.410746 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a51fb87-5594-4dac-906c-60a02f79d3dc-combined-ca-bundle\") pod \"nova-cell1-db-purge-29521440-2hd2z\" (UID: \"5a51fb87-5594-4dac-906c-60a02f79d3dc\") " pod="openstack/nova-cell1-db-purge-29521440-2hd2z" Feb 17 00:00:00 crc kubenswrapper[4865]: I0217 00:00:00.410779 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-4gzk5\" (UniqueName: \"kubernetes.io/projected/5a51fb87-5594-4dac-906c-60a02f79d3dc-kube-api-access-4gzk5\") pod \"nova-cell1-db-purge-29521440-2hd2z\" (UID: \"5a51fb87-5594-4dac-906c-60a02f79d3dc\") " pod="openstack/nova-cell1-db-purge-29521440-2hd2z" Feb 17 00:00:00 crc kubenswrapper[4865]: I0217 00:00:00.410823 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bq92n\" (UniqueName: \"kubernetes.io/projected/95d05d65-a7a7-479b-8254-53a3d62adba5-kube-api-access-bq92n\") pod \"nova-cell0-db-purge-29521440-g4qcg\" (UID: \"95d05d65-a7a7-479b-8254-53a3d62adba5\") " pod="openstack/nova-cell0-db-purge-29521440-g4qcg" Feb 17 00:00:00 crc kubenswrapper[4865]: I0217 00:00:00.410867 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f8ae622b-ecf8-4911-9243-5de1c252e269-serviceca\") pod \"image-pruner-29521440-2cwnh\" (UID: \"f8ae622b-ecf8-4911-9243-5de1c252e269\") " pod="openshift-image-registry/image-pruner-29521440-2cwnh" Feb 17 00:00:00 crc kubenswrapper[4865]: I0217 00:00:00.410930 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsmw4\" (UniqueName: \"kubernetes.io/projected/f8ae622b-ecf8-4911-9243-5de1c252e269-kube-api-access-nsmw4\") pod \"image-pruner-29521440-2cwnh\" (UID: \"f8ae622b-ecf8-4911-9243-5de1c252e269\") " pod="openshift-image-registry/image-pruner-29521440-2cwnh" Feb 17 00:00:00 crc kubenswrapper[4865]: I0217 00:00:00.410982 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95d05d65-a7a7-479b-8254-53a3d62adba5-config-data\") pod \"nova-cell0-db-purge-29521440-g4qcg\" (UID: \"95d05d65-a7a7-479b-8254-53a3d62adba5\") " pod="openstack/nova-cell0-db-purge-29521440-g4qcg" Feb 17 00:00:00 crc 
kubenswrapper[4865]: I0217 00:00:00.411039 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a51fb87-5594-4dac-906c-60a02f79d3dc-config-data\") pod \"nova-cell1-db-purge-29521440-2hd2z\" (UID: \"5a51fb87-5594-4dac-906c-60a02f79d3dc\") " pod="openstack/nova-cell1-db-purge-29521440-2hd2z" Feb 17 00:00:00 crc kubenswrapper[4865]: I0217 00:00:00.411198 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a51fb87-5594-4dac-906c-60a02f79d3dc-scripts\") pod \"nova-cell1-db-purge-29521440-2hd2z\" (UID: \"5a51fb87-5594-4dac-906c-60a02f79d3dc\") " pod="openstack/nova-cell1-db-purge-29521440-2hd2z" Feb 17 00:00:00 crc kubenswrapper[4865]: I0217 00:00:00.512720 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a51fb87-5594-4dac-906c-60a02f79d3dc-scripts\") pod \"nova-cell1-db-purge-29521440-2hd2z\" (UID: \"5a51fb87-5594-4dac-906c-60a02f79d3dc\") " pod="openstack/nova-cell1-db-purge-29521440-2hd2z" Feb 17 00:00:00 crc kubenswrapper[4865]: I0217 00:00:00.512822 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvvm2\" (UniqueName: \"kubernetes.io/projected/2b7320ab-151f-488e-acee-e56bc6037a80-kube-api-access-nvvm2\") pod \"collect-profiles-29521440-6lkmj\" (UID: \"2b7320ab-151f-488e-acee-e56bc6037a80\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521440-6lkmj" Feb 17 00:00:00 crc kubenswrapper[4865]: I0217 00:00:00.512870 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95d05d65-a7a7-479b-8254-53a3d62adba5-scripts\") pod \"nova-cell0-db-purge-29521440-g4qcg\" (UID: \"95d05d65-a7a7-479b-8254-53a3d62adba5\") " pod="openstack/nova-cell0-db-purge-29521440-g4qcg" 
Feb 17 00:00:00 crc kubenswrapper[4865]: I0217 00:00:00.512904 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2b7320ab-151f-488e-acee-e56bc6037a80-config-volume\") pod \"collect-profiles-29521440-6lkmj\" (UID: \"2b7320ab-151f-488e-acee-e56bc6037a80\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521440-6lkmj" Feb 17 00:00:00 crc kubenswrapper[4865]: I0217 00:00:00.512932 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95d05d65-a7a7-479b-8254-53a3d62adba5-combined-ca-bundle\") pod \"nova-cell0-db-purge-29521440-g4qcg\" (UID: \"95d05d65-a7a7-479b-8254-53a3d62adba5\") " pod="openstack/nova-cell0-db-purge-29521440-g4qcg" Feb 17 00:00:00 crc kubenswrapper[4865]: I0217 00:00:00.512966 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2b7320ab-151f-488e-acee-e56bc6037a80-secret-volume\") pod \"collect-profiles-29521440-6lkmj\" (UID: \"2b7320ab-151f-488e-acee-e56bc6037a80\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521440-6lkmj" Feb 17 00:00:00 crc kubenswrapper[4865]: I0217 00:00:00.512992 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a51fb87-5594-4dac-906c-60a02f79d3dc-combined-ca-bundle\") pod \"nova-cell1-db-purge-29521440-2hd2z\" (UID: \"5a51fb87-5594-4dac-906c-60a02f79d3dc\") " pod="openstack/nova-cell1-db-purge-29521440-2hd2z" Feb 17 00:00:00 crc kubenswrapper[4865]: I0217 00:00:00.513014 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gzk5\" (UniqueName: \"kubernetes.io/projected/5a51fb87-5594-4dac-906c-60a02f79d3dc-kube-api-access-4gzk5\") pod \"nova-cell1-db-purge-29521440-2hd2z\" (UID: 
\"5a51fb87-5594-4dac-906c-60a02f79d3dc\") " pod="openstack/nova-cell1-db-purge-29521440-2hd2z" Feb 17 00:00:00 crc kubenswrapper[4865]: I0217 00:00:00.513064 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bq92n\" (UniqueName: \"kubernetes.io/projected/95d05d65-a7a7-479b-8254-53a3d62adba5-kube-api-access-bq92n\") pod \"nova-cell0-db-purge-29521440-g4qcg\" (UID: \"95d05d65-a7a7-479b-8254-53a3d62adba5\") " pod="openstack/nova-cell0-db-purge-29521440-g4qcg" Feb 17 00:00:00 crc kubenswrapper[4865]: I0217 00:00:00.513104 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f8ae622b-ecf8-4911-9243-5de1c252e269-serviceca\") pod \"image-pruner-29521440-2cwnh\" (UID: \"f8ae622b-ecf8-4911-9243-5de1c252e269\") " pod="openshift-image-registry/image-pruner-29521440-2cwnh" Feb 17 00:00:00 crc kubenswrapper[4865]: I0217 00:00:00.513140 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsmw4\" (UniqueName: \"kubernetes.io/projected/f8ae622b-ecf8-4911-9243-5de1c252e269-kube-api-access-nsmw4\") pod \"image-pruner-29521440-2cwnh\" (UID: \"f8ae622b-ecf8-4911-9243-5de1c252e269\") " pod="openshift-image-registry/image-pruner-29521440-2cwnh" Feb 17 00:00:00 crc kubenswrapper[4865]: I0217 00:00:00.513184 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95d05d65-a7a7-479b-8254-53a3d62adba5-config-data\") pod \"nova-cell0-db-purge-29521440-g4qcg\" (UID: \"95d05d65-a7a7-479b-8254-53a3d62adba5\") " pod="openstack/nova-cell0-db-purge-29521440-g4qcg" Feb 17 00:00:00 crc kubenswrapper[4865]: I0217 00:00:00.513210 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a51fb87-5594-4dac-906c-60a02f79d3dc-config-data\") pod 
\"nova-cell1-db-purge-29521440-2hd2z\" (UID: \"5a51fb87-5594-4dac-906c-60a02f79d3dc\") " pod="openstack/nova-cell1-db-purge-29521440-2hd2z" Feb 17 00:00:00 crc kubenswrapper[4865]: I0217 00:00:00.516592 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 17 00:00:00 crc kubenswrapper[4865]: I0217 00:00:00.516672 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 00:00:00 crc kubenswrapper[4865]: I0217 00:00:00.516894 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 17 00:00:00 crc kubenswrapper[4865]: I0217 00:00:00.518315 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"serviceca" Feb 17 00:00:00 crc kubenswrapper[4865]: I0217 00:00:00.520750 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a51fb87-5594-4dac-906c-60a02f79d3dc-config-data\") pod \"nova-cell1-db-purge-29521440-2hd2z\" (UID: \"5a51fb87-5594-4dac-906c-60a02f79d3dc\") " pod="openstack/nova-cell1-db-purge-29521440-2hd2z" Feb 17 00:00:00 crc kubenswrapper[4865]: I0217 00:00:00.520889 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95d05d65-a7a7-479b-8254-53a3d62adba5-config-data\") pod \"nova-cell0-db-purge-29521440-g4qcg\" (UID: \"95d05d65-a7a7-479b-8254-53a3d62adba5\") " pod="openstack/nova-cell0-db-purge-29521440-g4qcg" Feb 17 00:00:00 crc kubenswrapper[4865]: I0217 00:00:00.524221 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2b7320ab-151f-488e-acee-e56bc6037a80-secret-volume\") pod \"collect-profiles-29521440-6lkmj\" (UID: \"2b7320ab-151f-488e-acee-e56bc6037a80\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29521440-6lkmj" Feb 17 00:00:00 crc kubenswrapper[4865]: I0217 00:00:00.524407 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2b7320ab-151f-488e-acee-e56bc6037a80-config-volume\") pod \"collect-profiles-29521440-6lkmj\" (UID: \"2b7320ab-151f-488e-acee-e56bc6037a80\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521440-6lkmj" Feb 17 00:00:00 crc kubenswrapper[4865]: I0217 00:00:00.524626 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f8ae622b-ecf8-4911-9243-5de1c252e269-serviceca\") pod \"image-pruner-29521440-2cwnh\" (UID: \"f8ae622b-ecf8-4911-9243-5de1c252e269\") " pod="openshift-image-registry/image-pruner-29521440-2cwnh" Feb 17 00:00:00 crc kubenswrapper[4865]: I0217 00:00:00.527699 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a51fb87-5594-4dac-906c-60a02f79d3dc-combined-ca-bundle\") pod \"nova-cell1-db-purge-29521440-2hd2z\" (UID: \"5a51fb87-5594-4dac-906c-60a02f79d3dc\") " pod="openstack/nova-cell1-db-purge-29521440-2hd2z" Feb 17 00:00:00 crc kubenswrapper[4865]: I0217 00:00:00.531746 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a51fb87-5594-4dac-906c-60a02f79d3dc-scripts\") pod \"nova-cell1-db-purge-29521440-2hd2z\" (UID: \"5a51fb87-5594-4dac-906c-60a02f79d3dc\") " pod="openstack/nova-cell1-db-purge-29521440-2hd2z" Feb 17 00:00:00 crc kubenswrapper[4865]: I0217 00:00:00.532665 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95d05d65-a7a7-479b-8254-53a3d62adba5-scripts\") pod \"nova-cell0-db-purge-29521440-g4qcg\" (UID: \"95d05d65-a7a7-479b-8254-53a3d62adba5\") " 
pod="openstack/nova-cell0-db-purge-29521440-g4qcg" Feb 17 00:00:00 crc kubenswrapper[4865]: I0217 00:00:00.535929 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95d05d65-a7a7-479b-8254-53a3d62adba5-combined-ca-bundle\") pod \"nova-cell0-db-purge-29521440-g4qcg\" (UID: \"95d05d65-a7a7-479b-8254-53a3d62adba5\") " pod="openstack/nova-cell0-db-purge-29521440-g4qcg" Feb 17 00:00:00 crc kubenswrapper[4865]: I0217 00:00:00.539572 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gzk5\" (UniqueName: \"kubernetes.io/projected/5a51fb87-5594-4dac-906c-60a02f79d3dc-kube-api-access-4gzk5\") pod \"nova-cell1-db-purge-29521440-2hd2z\" (UID: \"5a51fb87-5594-4dac-906c-60a02f79d3dc\") " pod="openstack/nova-cell1-db-purge-29521440-2hd2z" Feb 17 00:00:00 crc kubenswrapper[4865]: I0217 00:00:00.540610 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsmw4\" (UniqueName: \"kubernetes.io/projected/f8ae622b-ecf8-4911-9243-5de1c252e269-kube-api-access-nsmw4\") pod \"image-pruner-29521440-2cwnh\" (UID: \"f8ae622b-ecf8-4911-9243-5de1c252e269\") " pod="openshift-image-registry/image-pruner-29521440-2cwnh" Feb 17 00:00:00 crc kubenswrapper[4865]: I0217 00:00:00.547298 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvvm2\" (UniqueName: \"kubernetes.io/projected/2b7320ab-151f-488e-acee-e56bc6037a80-kube-api-access-nvvm2\") pod \"collect-profiles-29521440-6lkmj\" (UID: \"2b7320ab-151f-488e-acee-e56bc6037a80\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521440-6lkmj" Feb 17 00:00:00 crc kubenswrapper[4865]: I0217 00:00:00.555630 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bq92n\" (UniqueName: \"kubernetes.io/projected/95d05d65-a7a7-479b-8254-53a3d62adba5-kube-api-access-bq92n\") pod \"nova-cell0-db-purge-29521440-g4qcg\" 
(UID: \"95d05d65-a7a7-479b-8254-53a3d62adba5\") " pod="openstack/nova-cell0-db-purge-29521440-g4qcg" Feb 17 00:00:00 crc kubenswrapper[4865]: I0217 00:00:00.580502 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-purge-29521440-2hd2z" Feb 17 00:00:00 crc kubenswrapper[4865]: I0217 00:00:00.616564 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"pruner-dockercfg-p7bcw" Feb 17 00:00:00 crc kubenswrapper[4865]: I0217 00:00:00.622855 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-purge-29521440-g4qcg" Feb 17 00:00:00 crc kubenswrapper[4865]: I0217 00:00:00.625589 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29521440-2cwnh" Feb 17 00:00:00 crc kubenswrapper[4865]: I0217 00:00:00.637488 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 17 00:00:00 crc kubenswrapper[4865]: I0217 00:00:00.646462 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521440-6lkmj" Feb 17 00:00:01 crc kubenswrapper[4865]: I0217 00:00:01.049751 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-purge-29521440-2hd2z"] Feb 17 00:00:01 crc kubenswrapper[4865]: I0217 00:00:01.132717 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-purge-29521440-g4qcg"] Feb 17 00:00:01 crc kubenswrapper[4865]: I0217 00:00:01.261228 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-purge-29521440-2hd2z" event={"ID":"5a51fb87-5594-4dac-906c-60a02f79d3dc","Type":"ContainerStarted","Data":"9dc2b8fe84f9089cc8e658274dd9227920fdb02e8e725c1ac5c097212047e922"} Feb 17 00:00:01 crc kubenswrapper[4865]: I0217 00:00:01.275737 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29521440-2cwnh"] Feb 17 00:00:01 crc kubenswrapper[4865]: I0217 00:00:01.283268 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521440-6lkmj"] Feb 17 00:00:01 crc kubenswrapper[4865]: I0217 00:00:01.414782 4865 scope.go:117] "RemoveContainer" containerID="efc18401507fd6fd01ab78a2b4837c63ed53141a5ed6afaafb1a6e94c1ae6147" Feb 17 00:00:01 crc kubenswrapper[4865]: E0217 00:00:01.415225 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc" Feb 17 00:00:02 crc kubenswrapper[4865]: I0217 00:00:02.272580 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-purge-29521440-g4qcg" 
event={"ID":"95d05d65-a7a7-479b-8254-53a3d62adba5","Type":"ContainerStarted","Data":"304be969fd0e7f61fb53e826e05a96460fc1858984e3c2996b7c9568a8ac8457"} Feb 17 00:00:02 crc kubenswrapper[4865]: I0217 00:00:02.274354 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521440-6lkmj" event={"ID":"2b7320ab-151f-488e-acee-e56bc6037a80","Type":"ContainerStarted","Data":"ced383bcc28a1a80a215e227c949449e96892b4e2466caa8ad04d284940d49f2"} Feb 17 00:00:02 crc kubenswrapper[4865]: I0217 00:00:02.275843 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29521440-2cwnh" event={"ID":"f8ae622b-ecf8-4911-9243-5de1c252e269","Type":"ContainerStarted","Data":"7602f4951bacaaf2ef309433db6b46a84b9b553fcef21f52ede874fdd19f0fc8"} Feb 17 00:00:03 crc kubenswrapper[4865]: I0217 00:00:03.291949 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-purge-29521440-2hd2z" event={"ID":"5a51fb87-5594-4dac-906c-60a02f79d3dc","Type":"ContainerStarted","Data":"86cfc6c9a3e245d4274fe3994e0063650a9460618534f4cbe5a8b0157aac8224"} Feb 17 00:00:03 crc kubenswrapper[4865]: I0217 00:00:03.299606 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-purge-29521440-g4qcg" event={"ID":"95d05d65-a7a7-479b-8254-53a3d62adba5","Type":"ContainerStarted","Data":"6a7e7b1ed8b8bb642a345f61704089be355272aa50c6833278ed03374cb6dca3"} Feb 17 00:00:03 crc kubenswrapper[4865]: I0217 00:00:03.303377 4865 generic.go:334] "Generic (PLEG): container finished" podID="2b7320ab-151f-488e-acee-e56bc6037a80" containerID="ecb3f17e83a230db65d008e7d55a5935d04cde2e08b15240a9bb85101b550dcf" exitCode=0 Feb 17 00:00:03 crc kubenswrapper[4865]: I0217 00:00:03.303723 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521440-6lkmj" 
event={"ID":"2b7320ab-151f-488e-acee-e56bc6037a80","Type":"ContainerDied","Data":"ecb3f17e83a230db65d008e7d55a5935d04cde2e08b15240a9bb85101b550dcf"} Feb 17 00:00:03 crc kubenswrapper[4865]: I0217 00:00:03.305489 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29521440-2cwnh" event={"ID":"f8ae622b-ecf8-4911-9243-5de1c252e269","Type":"ContainerStarted","Data":"f84633775a6d4504d5b7b98c4e9aa0f66e4cb9fad51c3eec485f133568374d59"} Feb 17 00:00:03 crc kubenswrapper[4865]: I0217 00:00:03.328695 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-purge-29521440-2hd2z" podStartSLOduration=3.328672387 podStartE2EDuration="3.328672387s" podCreationTimestamp="2026-02-17 00:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:00:03.321340339 +0000 UTC m=+4443.645047390" watchObservedRunningTime="2026-02-17 00:00:03.328672387 +0000 UTC m=+4443.652379348" Feb 17 00:00:03 crc kubenswrapper[4865]: I0217 00:00:03.350350 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-pruner-29521440-2cwnh" podStartSLOduration=3.350329699 podStartE2EDuration="3.350329699s" podCreationTimestamp="2026-02-17 00:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:00:03.338980728 +0000 UTC m=+4443.662687729" watchObservedRunningTime="2026-02-17 00:00:03.350329699 +0000 UTC m=+4443.674036660" Feb 17 00:00:03 crc kubenswrapper[4865]: I0217 00:00:03.410568 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-purge-29521440-g4qcg" podStartSLOduration=3.410538052 podStartE2EDuration="3.410538052s" podCreationTimestamp="2026-02-17 00:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:00:03.381223203 +0000 UTC m=+4443.704930174" watchObservedRunningTime="2026-02-17 00:00:03.410538052 +0000 UTC m=+4443.734245033" Feb 17 00:00:04 crc kubenswrapper[4865]: I0217 00:00:04.315953 4865 generic.go:334] "Generic (PLEG): container finished" podID="f8ae622b-ecf8-4911-9243-5de1c252e269" containerID="f84633775a6d4504d5b7b98c4e9aa0f66e4cb9fad51c3eec485f133568374d59" exitCode=0 Feb 17 00:00:04 crc kubenswrapper[4865]: I0217 00:00:04.317850 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29521440-2cwnh" event={"ID":"f8ae622b-ecf8-4911-9243-5de1c252e269","Type":"ContainerDied","Data":"f84633775a6d4504d5b7b98c4e9aa0f66e4cb9fad51c3eec485f133568374d59"} Feb 17 00:00:04 crc kubenswrapper[4865]: I0217 00:00:04.698855 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521440-6lkmj" Feb 17 00:00:04 crc kubenswrapper[4865]: I0217 00:00:04.797852 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvvm2\" (UniqueName: \"kubernetes.io/projected/2b7320ab-151f-488e-acee-e56bc6037a80-kube-api-access-nvvm2\") pod \"2b7320ab-151f-488e-acee-e56bc6037a80\" (UID: \"2b7320ab-151f-488e-acee-e56bc6037a80\") " Feb 17 00:00:04 crc kubenswrapper[4865]: I0217 00:00:04.798234 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2b7320ab-151f-488e-acee-e56bc6037a80-secret-volume\") pod \"2b7320ab-151f-488e-acee-e56bc6037a80\" (UID: \"2b7320ab-151f-488e-acee-e56bc6037a80\") " Feb 17 00:00:04 crc kubenswrapper[4865]: I0217 00:00:04.798444 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2b7320ab-151f-488e-acee-e56bc6037a80-config-volume\") pod 
\"2b7320ab-151f-488e-acee-e56bc6037a80\" (UID: \"2b7320ab-151f-488e-acee-e56bc6037a80\") " Feb 17 00:00:04 crc kubenswrapper[4865]: I0217 00:00:04.801070 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b7320ab-151f-488e-acee-e56bc6037a80-config-volume" (OuterVolumeSpecName: "config-volume") pod "2b7320ab-151f-488e-acee-e56bc6037a80" (UID: "2b7320ab-151f-488e-acee-e56bc6037a80"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:00:04 crc kubenswrapper[4865]: I0217 00:00:04.810654 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b7320ab-151f-488e-acee-e56bc6037a80-kube-api-access-nvvm2" (OuterVolumeSpecName: "kube-api-access-nvvm2") pod "2b7320ab-151f-488e-acee-e56bc6037a80" (UID: "2b7320ab-151f-488e-acee-e56bc6037a80"). InnerVolumeSpecName "kube-api-access-nvvm2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:00:04 crc kubenswrapper[4865]: I0217 00:00:04.810804 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b7320ab-151f-488e-acee-e56bc6037a80-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2b7320ab-151f-488e-acee-e56bc6037a80" (UID: "2b7320ab-151f-488e-acee-e56bc6037a80"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:00:04 crc kubenswrapper[4865]: I0217 00:00:04.901105 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvvm2\" (UniqueName: \"kubernetes.io/projected/2b7320ab-151f-488e-acee-e56bc6037a80-kube-api-access-nvvm2\") on node \"crc\" DevicePath \"\"" Feb 17 00:00:04 crc kubenswrapper[4865]: I0217 00:00:04.901365 4865 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2b7320ab-151f-488e-acee-e56bc6037a80-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 17 00:00:04 crc kubenswrapper[4865]: I0217 00:00:04.901462 4865 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2b7320ab-151f-488e-acee-e56bc6037a80-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 00:00:05 crc kubenswrapper[4865]: I0217 00:00:05.330229 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521440-6lkmj" event={"ID":"2b7320ab-151f-488e-acee-e56bc6037a80","Type":"ContainerDied","Data":"ced383bcc28a1a80a215e227c949449e96892b4e2466caa8ad04d284940d49f2"} Feb 17 00:00:05 crc kubenswrapper[4865]: I0217 00:00:05.330272 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ced383bcc28a1a80a215e227c949449e96892b4e2466caa8ad04d284940d49f2" Feb 17 00:00:05 crc kubenswrapper[4865]: I0217 00:00:05.330326 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521440-6lkmj" Feb 17 00:00:05 crc kubenswrapper[4865]: I0217 00:00:05.674858 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-pruner-29521440-2cwnh" Feb 17 00:00:05 crc kubenswrapper[4865]: I0217 00:00:05.774476 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521395-8wjrz"] Feb 17 00:00:05 crc kubenswrapper[4865]: I0217 00:00:05.783138 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521395-8wjrz"] Feb 17 00:00:05 crc kubenswrapper[4865]: I0217 00:00:05.818264 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f8ae622b-ecf8-4911-9243-5de1c252e269-serviceca\") pod \"f8ae622b-ecf8-4911-9243-5de1c252e269\" (UID: \"f8ae622b-ecf8-4911-9243-5de1c252e269\") " Feb 17 00:00:05 crc kubenswrapper[4865]: I0217 00:00:05.818388 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nsmw4\" (UniqueName: \"kubernetes.io/projected/f8ae622b-ecf8-4911-9243-5de1c252e269-kube-api-access-nsmw4\") pod \"f8ae622b-ecf8-4911-9243-5de1c252e269\" (UID: \"f8ae622b-ecf8-4911-9243-5de1c252e269\") " Feb 17 00:00:05 crc kubenswrapper[4865]: I0217 00:00:05.819439 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8ae622b-ecf8-4911-9243-5de1c252e269-serviceca" (OuterVolumeSpecName: "serviceca") pod "f8ae622b-ecf8-4911-9243-5de1c252e269" (UID: "f8ae622b-ecf8-4911-9243-5de1c252e269"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:00:05 crc kubenswrapper[4865]: I0217 00:00:05.840802 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8ae622b-ecf8-4911-9243-5de1c252e269-kube-api-access-nsmw4" (OuterVolumeSpecName: "kube-api-access-nsmw4") pod "f8ae622b-ecf8-4911-9243-5de1c252e269" (UID: "f8ae622b-ecf8-4911-9243-5de1c252e269"). 
InnerVolumeSpecName "kube-api-access-nsmw4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:00:05 crc kubenswrapper[4865]: I0217 00:00:05.921134 4865 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f8ae622b-ecf8-4911-9243-5de1c252e269-serviceca\") on node \"crc\" DevicePath \"\"" Feb 17 00:00:05 crc kubenswrapper[4865]: I0217 00:00:05.921370 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nsmw4\" (UniqueName: \"kubernetes.io/projected/f8ae622b-ecf8-4911-9243-5de1c252e269-kube-api-access-nsmw4\") on node \"crc\" DevicePath \"\"" Feb 17 00:00:06 crc kubenswrapper[4865]: I0217 00:00:06.341752 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29521440-2cwnh" event={"ID":"f8ae622b-ecf8-4911-9243-5de1c252e269","Type":"ContainerDied","Data":"7602f4951bacaaf2ef309433db6b46a84b9b553fcef21f52ede874fdd19f0fc8"} Feb 17 00:00:06 crc kubenswrapper[4865]: I0217 00:00:06.341809 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7602f4951bacaaf2ef309433db6b46a84b9b553fcef21f52ede874fdd19f0fc8" Feb 17 00:00:06 crc kubenswrapper[4865]: I0217 00:00:06.343272 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-pruner-29521440-2cwnh" Feb 17 00:00:06 crc kubenswrapper[4865]: I0217 00:00:06.428562 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89334696-02d5-4418-9636-bd35a8581ab8" path="/var/lib/kubelet/pods/89334696-02d5-4418-9636-bd35a8581ab8/volumes" Feb 17 00:00:07 crc kubenswrapper[4865]: I0217 00:00:07.123288 4865 scope.go:117] "RemoveContainer" containerID="bf286e2c33332f2ab4309df9b20e22022f7ccd518170bd2a0014a36b6c1e4271" Feb 17 00:00:07 crc kubenswrapper[4865]: I0217 00:00:07.367352 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kgc2g" Feb 17 00:00:07 crc kubenswrapper[4865]: I0217 00:00:07.450615 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kgc2g" Feb 17 00:00:07 crc kubenswrapper[4865]: I0217 00:00:07.613073 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kgc2g"] Feb 17 00:00:08 crc kubenswrapper[4865]: I0217 00:00:08.370624 4865 generic.go:334] "Generic (PLEG): container finished" podID="95d05d65-a7a7-479b-8254-53a3d62adba5" containerID="6a7e7b1ed8b8bb642a345f61704089be355272aa50c6833278ed03374cb6dca3" exitCode=0 Feb 17 00:00:08 crc kubenswrapper[4865]: I0217 00:00:08.370784 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-purge-29521440-g4qcg" event={"ID":"95d05d65-a7a7-479b-8254-53a3d62adba5","Type":"ContainerDied","Data":"6a7e7b1ed8b8bb642a345f61704089be355272aa50c6833278ed03374cb6dca3"} Feb 17 00:00:08 crc kubenswrapper[4865]: I0217 00:00:08.377679 4865 generic.go:334] "Generic (PLEG): container finished" podID="5a51fb87-5594-4dac-906c-60a02f79d3dc" containerID="86cfc6c9a3e245d4274fe3994e0063650a9460618534f4cbe5a8b0157aac8224" exitCode=0 Feb 17 00:00:08 crc kubenswrapper[4865]: I0217 00:00:08.378428 4865 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/nova-cell1-db-purge-29521440-2hd2z" event={"ID":"5a51fb87-5594-4dac-906c-60a02f79d3dc","Type":"ContainerDied","Data":"86cfc6c9a3e245d4274fe3994e0063650a9460618534f4cbe5a8b0157aac8224"} Feb 17 00:00:09 crc kubenswrapper[4865]: I0217 00:00:09.390601 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kgc2g" podUID="930cf9b8-972c-4675-9840-ada5ccf6db38" containerName="registry-server" containerID="cri-o://b450ad72f7ac56e90e0011f208ac7d7919bfc3e9f8e567c5d4c64c767961ff7f" gracePeriod=2 Feb 17 00:00:09 crc kubenswrapper[4865]: I0217 00:00:09.834176 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-purge-29521440-2hd2z" Feb 17 00:00:09 crc kubenswrapper[4865]: I0217 00:00:09.935624 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-purge-29521440-g4qcg" Feb 17 00:00:09 crc kubenswrapper[4865]: I0217 00:00:09.942242 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kgc2g" Feb 17 00:00:10 crc kubenswrapper[4865]: I0217 00:00:10.002577 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a51fb87-5594-4dac-906c-60a02f79d3dc-combined-ca-bundle\") pod \"5a51fb87-5594-4dac-906c-60a02f79d3dc\" (UID: \"5a51fb87-5594-4dac-906c-60a02f79d3dc\") " Feb 17 00:00:10 crc kubenswrapper[4865]: I0217 00:00:10.002858 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a51fb87-5594-4dac-906c-60a02f79d3dc-scripts\") pod \"5a51fb87-5594-4dac-906c-60a02f79d3dc\" (UID: \"5a51fb87-5594-4dac-906c-60a02f79d3dc\") " Feb 17 00:00:10 crc kubenswrapper[4865]: I0217 00:00:10.002997 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a51fb87-5594-4dac-906c-60a02f79d3dc-config-data\") pod \"5a51fb87-5594-4dac-906c-60a02f79d3dc\" (UID: \"5a51fb87-5594-4dac-906c-60a02f79d3dc\") " Feb 17 00:00:10 crc kubenswrapper[4865]: I0217 00:00:10.003122 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gzk5\" (UniqueName: \"kubernetes.io/projected/5a51fb87-5594-4dac-906c-60a02f79d3dc-kube-api-access-4gzk5\") pod \"5a51fb87-5594-4dac-906c-60a02f79d3dc\" (UID: \"5a51fb87-5594-4dac-906c-60a02f79d3dc\") " Feb 17 00:00:10 crc kubenswrapper[4865]: I0217 00:00:10.011738 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a51fb87-5594-4dac-906c-60a02f79d3dc-kube-api-access-4gzk5" (OuterVolumeSpecName: "kube-api-access-4gzk5") pod "5a51fb87-5594-4dac-906c-60a02f79d3dc" (UID: "5a51fb87-5594-4dac-906c-60a02f79d3dc"). InnerVolumeSpecName "kube-api-access-4gzk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:00:10 crc kubenswrapper[4865]: I0217 00:00:10.012611 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a51fb87-5594-4dac-906c-60a02f79d3dc-scripts" (OuterVolumeSpecName: "scripts") pod "5a51fb87-5594-4dac-906c-60a02f79d3dc" (UID: "5a51fb87-5594-4dac-906c-60a02f79d3dc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:00:10 crc kubenswrapper[4865]: I0217 00:00:10.032965 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a51fb87-5594-4dac-906c-60a02f79d3dc-config-data" (OuterVolumeSpecName: "config-data") pod "5a51fb87-5594-4dac-906c-60a02f79d3dc" (UID: "5a51fb87-5594-4dac-906c-60a02f79d3dc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:00:10 crc kubenswrapper[4865]: I0217 00:00:10.045093 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a51fb87-5594-4dac-906c-60a02f79d3dc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5a51fb87-5594-4dac-906c-60a02f79d3dc" (UID: "5a51fb87-5594-4dac-906c-60a02f79d3dc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:00:10 crc kubenswrapper[4865]: I0217 00:00:10.104548 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/930cf9b8-972c-4675-9840-ada5ccf6db38-catalog-content\") pod \"930cf9b8-972c-4675-9840-ada5ccf6db38\" (UID: \"930cf9b8-972c-4675-9840-ada5ccf6db38\") " Feb 17 00:00:10 crc kubenswrapper[4865]: I0217 00:00:10.104614 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bq92n\" (UniqueName: \"kubernetes.io/projected/95d05d65-a7a7-479b-8254-53a3d62adba5-kube-api-access-bq92n\") pod \"95d05d65-a7a7-479b-8254-53a3d62adba5\" (UID: \"95d05d65-a7a7-479b-8254-53a3d62adba5\") " Feb 17 00:00:10 crc kubenswrapper[4865]: I0217 00:00:10.104646 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95d05d65-a7a7-479b-8254-53a3d62adba5-config-data\") pod \"95d05d65-a7a7-479b-8254-53a3d62adba5\" (UID: \"95d05d65-a7a7-479b-8254-53a3d62adba5\") " Feb 17 00:00:10 crc kubenswrapper[4865]: I0217 00:00:10.104686 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/930cf9b8-972c-4675-9840-ada5ccf6db38-utilities\") pod \"930cf9b8-972c-4675-9840-ada5ccf6db38\" (UID: \"930cf9b8-972c-4675-9840-ada5ccf6db38\") " Feb 17 00:00:10 crc kubenswrapper[4865]: I0217 00:00:10.104754 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95d05d65-a7a7-479b-8254-53a3d62adba5-scripts\") pod \"95d05d65-a7a7-479b-8254-53a3d62adba5\" (UID: \"95d05d65-a7a7-479b-8254-53a3d62adba5\") " Feb 17 00:00:10 crc kubenswrapper[4865]: I0217 00:00:10.104775 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rhckl\" (UniqueName: 
\"kubernetes.io/projected/930cf9b8-972c-4675-9840-ada5ccf6db38-kube-api-access-rhckl\") pod \"930cf9b8-972c-4675-9840-ada5ccf6db38\" (UID: \"930cf9b8-972c-4675-9840-ada5ccf6db38\") " Feb 17 00:00:10 crc kubenswrapper[4865]: I0217 00:00:10.104844 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95d05d65-a7a7-479b-8254-53a3d62adba5-combined-ca-bundle\") pod \"95d05d65-a7a7-479b-8254-53a3d62adba5\" (UID: \"95d05d65-a7a7-479b-8254-53a3d62adba5\") " Feb 17 00:00:10 crc kubenswrapper[4865]: I0217 00:00:10.105503 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a51fb87-5594-4dac-906c-60a02f79d3dc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:00:10 crc kubenswrapper[4865]: I0217 00:00:10.105530 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a51fb87-5594-4dac-906c-60a02f79d3dc-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 00:00:10 crc kubenswrapper[4865]: I0217 00:00:10.105544 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a51fb87-5594-4dac-906c-60a02f79d3dc-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 00:00:10 crc kubenswrapper[4865]: I0217 00:00:10.105556 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gzk5\" (UniqueName: \"kubernetes.io/projected/5a51fb87-5594-4dac-906c-60a02f79d3dc-kube-api-access-4gzk5\") on node \"crc\" DevicePath \"\"" Feb 17 00:00:10 crc kubenswrapper[4865]: I0217 00:00:10.107625 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/930cf9b8-972c-4675-9840-ada5ccf6db38-utilities" (OuterVolumeSpecName: "utilities") pod "930cf9b8-972c-4675-9840-ada5ccf6db38" (UID: "930cf9b8-972c-4675-9840-ada5ccf6db38"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:00:10 crc kubenswrapper[4865]: I0217 00:00:10.109925 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95d05d65-a7a7-479b-8254-53a3d62adba5-kube-api-access-bq92n" (OuterVolumeSpecName: "kube-api-access-bq92n") pod "95d05d65-a7a7-479b-8254-53a3d62adba5" (UID: "95d05d65-a7a7-479b-8254-53a3d62adba5"). InnerVolumeSpecName "kube-api-access-bq92n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:00:10 crc kubenswrapper[4865]: I0217 00:00:10.110117 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/930cf9b8-972c-4675-9840-ada5ccf6db38-kube-api-access-rhckl" (OuterVolumeSpecName: "kube-api-access-rhckl") pod "930cf9b8-972c-4675-9840-ada5ccf6db38" (UID: "930cf9b8-972c-4675-9840-ada5ccf6db38"). InnerVolumeSpecName "kube-api-access-rhckl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:00:10 crc kubenswrapper[4865]: I0217 00:00:10.110421 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95d05d65-a7a7-479b-8254-53a3d62adba5-scripts" (OuterVolumeSpecName: "scripts") pod "95d05d65-a7a7-479b-8254-53a3d62adba5" (UID: "95d05d65-a7a7-479b-8254-53a3d62adba5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:00:10 crc kubenswrapper[4865]: I0217 00:00:10.130309 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95d05d65-a7a7-479b-8254-53a3d62adba5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "95d05d65-a7a7-479b-8254-53a3d62adba5" (UID: "95d05d65-a7a7-479b-8254-53a3d62adba5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:00:10 crc kubenswrapper[4865]: I0217 00:00:10.150883 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95d05d65-a7a7-479b-8254-53a3d62adba5-config-data" (OuterVolumeSpecName: "config-data") pod "95d05d65-a7a7-479b-8254-53a3d62adba5" (UID: "95d05d65-a7a7-479b-8254-53a3d62adba5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:00:10 crc kubenswrapper[4865]: I0217 00:00:10.207926 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bq92n\" (UniqueName: \"kubernetes.io/projected/95d05d65-a7a7-479b-8254-53a3d62adba5-kube-api-access-bq92n\") on node \"crc\" DevicePath \"\"" Feb 17 00:00:10 crc kubenswrapper[4865]: I0217 00:00:10.207984 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95d05d65-a7a7-479b-8254-53a3d62adba5-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 00:00:10 crc kubenswrapper[4865]: I0217 00:00:10.208009 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/930cf9b8-972c-4675-9840-ada5ccf6db38-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 00:00:10 crc kubenswrapper[4865]: I0217 00:00:10.208030 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95d05d65-a7a7-479b-8254-53a3d62adba5-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 00:00:10 crc kubenswrapper[4865]: I0217 00:00:10.208048 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rhckl\" (UniqueName: \"kubernetes.io/projected/930cf9b8-972c-4675-9840-ada5ccf6db38-kube-api-access-rhckl\") on node \"crc\" DevicePath \"\"" Feb 17 00:00:10 crc kubenswrapper[4865]: I0217 00:00:10.208065 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/95d05d65-a7a7-479b-8254-53a3d62adba5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:00:10 crc kubenswrapper[4865]: I0217 00:00:10.243246 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/930cf9b8-972c-4675-9840-ada5ccf6db38-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "930cf9b8-972c-4675-9840-ada5ccf6db38" (UID: "930cf9b8-972c-4675-9840-ada5ccf6db38"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:00:10 crc kubenswrapper[4865]: I0217 00:00:10.310424 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/930cf9b8-972c-4675-9840-ada5ccf6db38-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 00:00:10 crc kubenswrapper[4865]: I0217 00:00:10.400951 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-purge-29521440-g4qcg" Feb 17 00:00:10 crc kubenswrapper[4865]: I0217 00:00:10.401000 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-purge-29521440-g4qcg" event={"ID":"95d05d65-a7a7-479b-8254-53a3d62adba5","Type":"ContainerDied","Data":"304be969fd0e7f61fb53e826e05a96460fc1858984e3c2996b7c9568a8ac8457"} Feb 17 00:00:10 crc kubenswrapper[4865]: I0217 00:00:10.401046 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="304be969fd0e7f61fb53e826e05a96460fc1858984e3c2996b7c9568a8ac8457" Feb 17 00:00:10 crc kubenswrapper[4865]: I0217 00:00:10.404202 4865 generic.go:334] "Generic (PLEG): container finished" podID="930cf9b8-972c-4675-9840-ada5ccf6db38" containerID="b450ad72f7ac56e90e0011f208ac7d7919bfc3e9f8e567c5d4c64c767961ff7f" exitCode=0 Feb 17 00:00:10 crc kubenswrapper[4865]: I0217 00:00:10.404403 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kgc2g" Feb 17 00:00:10 crc kubenswrapper[4865]: I0217 00:00:10.404403 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kgc2g" event={"ID":"930cf9b8-972c-4675-9840-ada5ccf6db38","Type":"ContainerDied","Data":"b450ad72f7ac56e90e0011f208ac7d7919bfc3e9f8e567c5d4c64c767961ff7f"} Feb 17 00:00:10 crc kubenswrapper[4865]: I0217 00:00:10.404467 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kgc2g" event={"ID":"930cf9b8-972c-4675-9840-ada5ccf6db38","Type":"ContainerDied","Data":"b94572a768f67438bf85e26b99161fa082ca11e354e83752facb3bfb68a732e9"} Feb 17 00:00:10 crc kubenswrapper[4865]: I0217 00:00:10.404500 4865 scope.go:117] "RemoveContainer" containerID="b450ad72f7ac56e90e0011f208ac7d7919bfc3e9f8e567c5d4c64c767961ff7f" Feb 17 00:00:10 crc kubenswrapper[4865]: I0217 00:00:10.406246 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-purge-29521440-2hd2z" event={"ID":"5a51fb87-5594-4dac-906c-60a02f79d3dc","Type":"ContainerDied","Data":"9dc2b8fe84f9089cc8e658274dd9227920fdb02e8e725c1ac5c097212047e922"} Feb 17 00:00:10 crc kubenswrapper[4865]: I0217 00:00:10.406272 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9dc2b8fe84f9089cc8e658274dd9227920fdb02e8e725c1ac5c097212047e922" Feb 17 00:00:10 crc kubenswrapper[4865]: I0217 00:00:10.406350 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-purge-29521440-2hd2z" Feb 17 00:00:10 crc kubenswrapper[4865]: I0217 00:00:10.449179 4865 scope.go:117] "RemoveContainer" containerID="0c5908bd37b7fdd0bfd03fbdc0ff91c9a9a1611876ba69f26ab8e88df72d0554" Feb 17 00:00:10 crc kubenswrapper[4865]: I0217 00:00:10.495684 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kgc2g"] Feb 17 00:00:10 crc kubenswrapper[4865]: I0217 00:00:10.495787 4865 scope.go:117] "RemoveContainer" containerID="3f9d1c49a30e9e145992892d3c1bd6f388da1aedc96fac5f4d5307c7bb2aa302" Feb 17 00:00:10 crc kubenswrapper[4865]: I0217 00:00:10.511189 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kgc2g"] Feb 17 00:00:10 crc kubenswrapper[4865]: I0217 00:00:10.522884 4865 scope.go:117] "RemoveContainer" containerID="b450ad72f7ac56e90e0011f208ac7d7919bfc3e9f8e567c5d4c64c767961ff7f" Feb 17 00:00:10 crc kubenswrapper[4865]: E0217 00:00:10.523725 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b450ad72f7ac56e90e0011f208ac7d7919bfc3e9f8e567c5d4c64c767961ff7f\": container with ID starting with b450ad72f7ac56e90e0011f208ac7d7919bfc3e9f8e567c5d4c64c767961ff7f not found: ID does not exist" containerID="b450ad72f7ac56e90e0011f208ac7d7919bfc3e9f8e567c5d4c64c767961ff7f" Feb 17 00:00:10 crc kubenswrapper[4865]: I0217 00:00:10.523780 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b450ad72f7ac56e90e0011f208ac7d7919bfc3e9f8e567c5d4c64c767961ff7f"} err="failed to get container status \"b450ad72f7ac56e90e0011f208ac7d7919bfc3e9f8e567c5d4c64c767961ff7f\": rpc error: code = NotFound desc = could not find container \"b450ad72f7ac56e90e0011f208ac7d7919bfc3e9f8e567c5d4c64c767961ff7f\": container with ID starting with b450ad72f7ac56e90e0011f208ac7d7919bfc3e9f8e567c5d4c64c767961ff7f not found: ID does 
not exist" Feb 17 00:00:10 crc kubenswrapper[4865]: I0217 00:00:10.523813 4865 scope.go:117] "RemoveContainer" containerID="0c5908bd37b7fdd0bfd03fbdc0ff91c9a9a1611876ba69f26ab8e88df72d0554" Feb 17 00:00:10 crc kubenswrapper[4865]: E0217 00:00:10.524096 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c5908bd37b7fdd0bfd03fbdc0ff91c9a9a1611876ba69f26ab8e88df72d0554\": container with ID starting with 0c5908bd37b7fdd0bfd03fbdc0ff91c9a9a1611876ba69f26ab8e88df72d0554 not found: ID does not exist" containerID="0c5908bd37b7fdd0bfd03fbdc0ff91c9a9a1611876ba69f26ab8e88df72d0554" Feb 17 00:00:10 crc kubenswrapper[4865]: I0217 00:00:10.524130 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c5908bd37b7fdd0bfd03fbdc0ff91c9a9a1611876ba69f26ab8e88df72d0554"} err="failed to get container status \"0c5908bd37b7fdd0bfd03fbdc0ff91c9a9a1611876ba69f26ab8e88df72d0554\": rpc error: code = NotFound desc = could not find container \"0c5908bd37b7fdd0bfd03fbdc0ff91c9a9a1611876ba69f26ab8e88df72d0554\": container with ID starting with 0c5908bd37b7fdd0bfd03fbdc0ff91c9a9a1611876ba69f26ab8e88df72d0554 not found: ID does not exist" Feb 17 00:00:10 crc kubenswrapper[4865]: I0217 00:00:10.524145 4865 scope.go:117] "RemoveContainer" containerID="3f9d1c49a30e9e145992892d3c1bd6f388da1aedc96fac5f4d5307c7bb2aa302" Feb 17 00:00:10 crc kubenswrapper[4865]: E0217 00:00:10.524455 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f9d1c49a30e9e145992892d3c1bd6f388da1aedc96fac5f4d5307c7bb2aa302\": container with ID starting with 3f9d1c49a30e9e145992892d3c1bd6f388da1aedc96fac5f4d5307c7bb2aa302 not found: ID does not exist" containerID="3f9d1c49a30e9e145992892d3c1bd6f388da1aedc96fac5f4d5307c7bb2aa302" Feb 17 00:00:10 crc kubenswrapper[4865]: I0217 00:00:10.524496 4865 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f9d1c49a30e9e145992892d3c1bd6f388da1aedc96fac5f4d5307c7bb2aa302"} err="failed to get container status \"3f9d1c49a30e9e145992892d3c1bd6f388da1aedc96fac5f4d5307c7bb2aa302\": rpc error: code = NotFound desc = could not find container \"3f9d1c49a30e9e145992892d3c1bd6f388da1aedc96fac5f4d5307c7bb2aa302\": container with ID starting with 3f9d1c49a30e9e145992892d3c1bd6f388da1aedc96fac5f4d5307c7bb2aa302 not found: ID does not exist" Feb 17 00:00:12 crc kubenswrapper[4865]: I0217 00:00:12.431113 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="930cf9b8-972c-4675-9840-ada5ccf6db38" path="/var/lib/kubelet/pods/930cf9b8-972c-4675-9840-ada5ccf6db38/volumes" Feb 17 00:00:15 crc kubenswrapper[4865]: I0217 00:00:15.416236 4865 scope.go:117] "RemoveContainer" containerID="efc18401507fd6fd01ab78a2b4837c63ed53141a5ed6afaafb1a6e94c1ae6147" Feb 17 00:00:15 crc kubenswrapper[4865]: E0217 00:00:15.417542 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7sl6f_openshift-machine-config-operator(af5ee041-5763-4a28-9d12-7ba21bbb9dbc)\"" pod="openshift-machine-config-operator/machine-config-daemon-7sl6f" podUID="af5ee041-5763-4a28-9d12-7ba21bbb9dbc"